The study is based on having LLMs decide to amplify one of the top ten posts on their timeline or share a news headline. LLMs aren’t people, and the authors have not convinced me that they will behave like people in this context.
The behavioral options are restricted to posting news headlines, reposting news headlines, or being passive. There’s no option to create original content, and no interventions centered on discouraging reposting. Facebook has experimented[0] with limits to reposting and found such limits discouraged the spread of divisive content and misinformation.
I mostly use social media to share pictures of birds[1]. This contributes to some of the problems the source article[2] discusses. It causes fragmentation; people who don’t like bird photos won’t follow me. It leads to disparity of influence; I think I have more followers than the average Mastodon account. I sometimes even amplify conflict[3].
> LLMs aren’t people, and the authors have not convinced me that they will behave like people in this context.
This was my initial reaction as well, before reading the interview in full. They admit that there are problems with the approach, but they seem to have designed the simulation in a very thoughtful way. There really doesn't seem to be a better approach, apart from enlisting vast numbers of people instead of using LLMs/agent systems. That has its own problems as well of course, even leaving cost and difficulty aside.
> There’s no option to create original content...
While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.
> > There’s no option to create original content...
> While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.
Ok, but, this is by design. Other forms of social media, places like Mastodon etc., have a far, far higher rate of people creating original content.
I'm not sure the experiment can be done other than to try interventions on real users of a public social media service as Facebook did in the article I linked. Of course people running those services usually don't have the incentives to test harm reduction strategies and certainly don't want to publicize the results.
> the vast majority of users don't create original content
That's true now at least most of the time, but I think it's as much because of design and algorithmic decisions by the platforms to emphasize other types of content. Early Facebook in particular was mostly original content shared between people who knew each other. The biggest problem with that was it wasn't very profitable.
> U.S. adults commonly engage with popular social media platforms but are more inclined to browse content on those websites and apps than to post their own content to them. The vast majority of those who say they use these platforms have accounts with them, but less than half who have accounts -- and even smaller proportions of all U.S. adults -- post their own content.
> The analysis also reveals another familiar pattern on social media: that a relatively small share of highly active users produce the vast majority of content.
That's junk science and doesn't refute the specific point I made. Facebook users are far more likely to post original content than X users. It might just be some blurry backlit vacation photos but it is original content.
Social media as a concept can definitely be fixed. Just stop doing algorithms, period.
Strip the platforms of any and all initiative. Make them dumb pipes. Stop pretending that people want to use social media for entertainment and news and celebrities. Stop trying to turn it into interactive TV. Stop forcing content from outside of my network upon me. Make the chronological feed the only option.
Social media is meant to be a place where you get updates about the lives of people you follow. You would visit several times a day, read all new updates, maybe post your own, and that's it. The rest of the time, you would do something else.
> Stop pretending that people want to use social media for entertainment and news and celebrities
People actually want media, social and otherwise, for exactly that.
> Stop forcing content from outside of my network upon me.
There are social media apps and platforms that don't do that. They are persistently less popular. People, by and large, do want passive discovery from outside of their network, just like they do, in aggregate, want entertainment and news and celebrities.
> Make the chronological feed the only option.
Chronological by what? Original post? Most recent edit? Most recent response? Most recent reaction?
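To make the ambiguity concrete: the same threads come out in different orders depending on which timestamp "chronological" sorts by (the data here is made up for illustration):

```python
# "Chronological" is ambiguous: the same three threads order differently
# depending on which timestamp you sort by.
threads = [
    {"id": "A", "created": 1, "last_reply": 9},
    {"id": "B", "created": 3, "last_reply": 4},
    {"id": "C", "created": 2, "last_reply": 7},
]
# Newest original post first:
by_created = [t["id"] for t in sorted(threads, key=lambda t: t["created"], reverse=True)]
# Most recently active first (bump-on-reply, like classic forums):
by_activity = [t["id"] for t in sorted(threads, key=lambda t: t["last_reply"], reverse=True)]
print(by_created)
print(by_activity)
```

Both orderings are "chronological", yet they disagree, which is the point: "the chronological feed" still requires a design decision.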
It will fix very little. The "problems" of social media are rooted in selfish human behavior; it's like a giant high school. You can't "fix" that because it's ingrained in humans.
You know the "retweet" feature on Twitter didn't originally exist? Before the feature was implemented, people would just write "RT" followed by the author username, then paste in the text of the tweet they wanted to retweet.
There's nothing wrong with reposts that are made knowingly by people you follow. My issue is with current dominant social media platforms all focusing on forcing people to see content from outside of their network that they would've otherwise never seen, because neither them, nor the people they follow, would follow anything like that.
> Strip the platforms of any and all initiative. Make them dumb pipes. Stop pretending that people want to use social media for entertainment and news and celebrities. Stop trying to turn it into interactive TV. Stop forcing content from outside of my network upon me. Make the chronological feed the only option.
Congrats, you now have platforms no one will care about, as attention span gets sniped by competitors who want to maximize engagement and don't care about your arbitrary rules (i.e., literally what happened 15 years ago).
I would care, and I imagine there are others who would too. I don’t use social media anymore (at all!) because of this. If I could have the chronological feed restored and no intrusion of other content I’d redownload immediately. There must be a market for this.
There's plenty of options, take your pick. The latest one that I've been hearing about is https://retro.app, there are others.
The issue of course is that your friends won't be on it, most of them won't sign up even if you beg them, and most likely none of you will be using the service anymore 6 months from now.
There might be demand, but this "platform A" will be in competition with a dopamine-focused engagement "platform B" which also supports hosting updates from "the lives of people you follow".
The majority of people will then have both installed but spend more time on "platform B" as it is actively engaging them.
Platform A will again end up in competition for user-attention with Platform B, as it needs money to operate their business etc.
Now if Platform A asks for a subscription fee to fund their non-engagement social media platform, how many of these users described above will pay and not simply prefer the free "platform B"?
How will such a churn affect users willing to PAY for "Platform A", if people whose "life they want to follow" have completely moved to "Platform B"?
Funny enough, as a European I could use WhatsApp as this "Platform A", as it has features to share status updates, pictures etc. as part of your profile. Everyone I know has WhatsApp; no one is using those features.
So in essence, this Platform A already exists in Europe but doesn't work as "social media" because people don't engage with it...
> The majority of people will then have both installed but spend more time on "platform B" as it is actively engaging them.
And why would that be a problem? Most people also spend more time sleeping than using social media, so what? Let them be. Give them a tool that they would decide how to use best to suit their lifestyle.
> Platform A will again end up in competition for user-attention with Platform B, as it needs money to operate their business etc.
It would not, because it would not be run by a commercial organization. At this point I'm convinced that it's impossible for a sane social media platform to exist unless it's decentralized and/or run by a nonprofit. As soon as one touches venture capital, enshittification becomes only a matter of time.
I'm not talking about literal television, but more about something where you don't really get to decide what you see, you go there to get entertained with whatever.
For people who don't know what they're talking about, sure. But pedophiles and addictive videos are two completely different things, and it would help if you defined which you're referring to.
Why can't it be fixed? Just remove algorithms and show only subscribed content in chronological order. That's how most of the early platforms worked and it was fine.
Probably because there's no monetary incentive for that, so "can't". It would mean the big social media companies collapsing, because their entire raison d'être at this point is mass-manipulation.
Why do you treat it like absolute voodoo? It's a website that shows you videos, with the same algorithm they've been using for 20 years. It's only become a problem since basically iOS came out and now every single clueless non-technical person is on the internet, discovering decade-old memes for the first time.
I think it really is that simple. Have a discovery channel, recommendations side bar, just stop trying to add "shareholder value" through flawed machine learning attempts. Maintain a useful piece of software, is it too much to ask an earnings-driven corp? Probably.
Social media as a vessel for diverse discussion is a tall order. It’s too public, too tied to context, and ultimately a no-win game. No matter how carefully you present yourself, you’ll end up being the “bad guy” to someone. The moment a discussion touches even lightly on controversy, healthy dialogue becomes nearly impossible.
Think of it this way: you’re hosting a party, and an uninvited stranger kicks the door open, then starts criticizing how you make your bed. That’s about what it feels like to try to “fix” social media.
I really liked the Circles feature in Google+: you defined groups of friends, and you could make your posts visible to particular groups.
They were not like group chats or subreddits, the circles were just for you, it was just an easy way to determine which of your followers would see one of your posts.
This kind of interaction was common in early Facebook and Twitter too, where only your friends or followers saw what you posted, often just whitelisted ones. It was not all public all the time. Google+ just made that a bit more granular.
I suppose that these dynamics have been overtaken by messaging apps, but it's not really the same thing. It's too direct, too real-time and all messages mixed-in, I like the more async and distributed nature of posts with comments.
Granted, if you really want a diverse discussion and to talk with everyone in the world at once, indeed that's a different problem and probably fundamentally impossible to make non-toxic, people are people.
Whitelisting solves the problem for me. I curate every tweet I see with a browser extension. Strangers can't kick down the door. I only see content from my direct follows. It dramatically reduces the stress. Maybe a little like horse blinkers.
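Stripped of the browser plumbing, the logic behind that kind of extension is tiny; a minimal sketch, assuming posts carry an author handle (the field names and allowlist here are hypothetical, not the actual extension):

```python
# Toy whitelist filter: keep only posts authored by accounts followed directly.
def filter_timeline(posts, allowlist):
    """posts: dicts with an 'author' handle; allowlist: set of followed handles."""
    return [p for p in posts if p["author"] in allowlist]

timeline = [
    {"author": "alice", "text": "bird photo"},
    {"author": "rando", "text": "promoted outrage"},
    {"author": "bob", "text": "trip report"},
]
follows = {"alice", "bob"}
filtered = filter_timeline(timeline, follows)
print([p["author"] for p in filtered])  # the stranger's post never renders
```

In a real extension the same predicate would run in a content script that hides non-matching timeline elements before they are displayed.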
> Whitelisting solves the problem for me. I curate every tweet I see with a browser extension. Strangers can't kick down the door. I only see content from my direct follows. It dramatically reduces the stress. Maybe a little like horse blinkers.
We'd just go back to human curation, you'd whitelist a few curators you liked, people wanting to promote their content would email a link to a curator, if they thought their audience would like it they'd share it, you'd see it via your whitelist and if you liked the look of it you'd whitelist them.
So you are fighting against the platform that you're using. It reminds me of people constantly fighting with their own computer (Windows) to remove ads and crap. In both cases viable alternatives exist which don't require this huge effort.
Do you mind sharing this extension? I would prefer if it also shows the retweets of people you follow as that is an endorsement no matter what people say.
> Social media as a vessel for diverse discussion is a tall order. It’s too public, too tied to context, and ultimately a no-win game. No matter how carefully you present yourself, you’ll end up being the “bad guy” to someone. The moment a discussion touches even lightly on controversy, healthy dialogue becomes nearly impossible.
Worth reading Jaron Lanier's book Ten Arguments for Deleting Your Social Media Accounts Right Now.
"Diverse discussion" is just something I don't want. Of course I've made up my mind about all kinds of things and I don't really need to see opposing points of view as though they are novel thoughts that I've never considered before. Sure, tell me again why your religion or your conspiracy theory proves that the scientific consensus is a hoax. Maybe you'll convince me this time?
I don't mind Mastodon, but I'm pretty selective in who I follow, and diversity of opinions isn't one of my criteria.
> I don't really need to see opposing points of view as though they are novel thoughts that I've never considered before
I mean, that's fine, if you think that you can consider all conceivable angles thoroughly, by yourself. I for one welcome opposing views, but I suppose if my idea of that meant "religion or conspiracy theories" I'd probably be avoiding it too.
I follow people I can learn from, not people who try to convince me that everything I already know is wrong. I don't follow people who post misinformation, reject science, or who think that ad hominem attacks are a valid form of debate. There are a lot of them out there!
I still think old school linear forums are the best format for online discussion. They’re not perfect by any means, but I think they still beat all the alternatives I’ve tried.
The old school forums also centered around a single topic or interest, which I think helped keep things focused and more civil. Part of the problem with social media is that it wants to be everything for everyone.
The internet has become a primary battlefield for making money, and we can't go back to the days when it was just a non-commercial hobby that people enjoyed. To make money online, it's crucial to spread content as widely as possible, and the most effective methods for this are clickbait and ragebait. That's why the enshittification of the internet was inevitable.
This seems somewhat disproven by the existence of places like this? Strict moderation really does work wonders to prevent some of the worst behaviors.
Not that you won't have problems, even here, from time to time. But it is hard to argue that things aren't kept much more civil than in other spots?
And, in general, avoiding direct capital incentives to drive any questionable behavior seems a pretty safe route?
I would think this would be a lot like public parks and such. Disallow some commercial behaviors and actually enforce rules, and you can keep some pretty nice places?
I generally agree that strict moderation is the key, but there's obviously a certain threshold of users and activity at which this becomes unfeasible: ycombinator user activity is next to nothing compared to sites like Facebook/Twitter/Reddit. Even on Reddit, you see smaller subreddits able to achieve this.
But just like a public park, if 2 million people rock up it's going to be next to impossible to police effectively.
I'm skeptical of proving stuff about new-social-media with LLMs, because LLMs themselves are [presumably] trained on quite a bit of existing-social-media text.
A lot of talk goes into how Facebook or other social media use algorithms to encourage engagement, that often includes outrage type content, fake news, rabbit holes and so on.
But here's the thing ... people CHOOSE to engage with that, and users even produce that content for social media platforms for free.
It's hard to escape that part.
I remember trying Bluesky and while I liked it better than Twitter, for me it was disappointing that it was just Twitter, but different. Outlandish short posts, same lame jokes / pithy appeals to our emotions, and so on. People on there want to behave the same way they wanted to on Twitter.
In the same way a smoker "chooses" to engage with cigarettes. Let's not underestimate the fact that core human programming is being exploited to enable such behavior. Similar to telling a smoker to "just put the cigarette down", we can't just suddenly tell people on social media to "stop being angry".
> people on [BlueSky] want to behave the same way they wanted to on Twitter.
Yes. Changing established habits is even harder to address. You can't make a horse drink (I'm sure anyone who ever had to deal with a disengaged captive audience feels this in their soul). While it's become many people's primary "news source", aka the bread, most people came there for the circus.
I don't really have an answer here. Society needs to understand social media addiction the same way they understand sugar addictions; have it slammed in there that it's not healthy and to use sparingly. That's not something you can fix with laws and regulation. Not something you fix in even a decade.
> But here's the thing ... people CHOOSE to engage
Kinda, but they also don't really realise that they have much more control over the feed than they expect (in certain areas)
For the Reels/TikTok/For-You Instagram feeds, it shows you subjects that you engage with. It will A/B-test other subjects that similar people engage with. That's all it's doing: continual A/B testing to see if you like whatever flavour of bullshit is popular.
Most people don't realise that you can banish posts from your feeds by doing a long-press "I don't like this" equivalent. It takes a few times for the machine to work out if it's an account, group of accounts, or theme that you don't like, and it'll stop showing it to you. (Threads, for example, took a very long time to stop showing me fucking sports.)
Why don't more people know this? Because it hurts short-term metrics for whatever bollocks the devs are working on, so it's not that well advertised. Just think how unsuccessful the experiments in the Facebook app would have been if you were able to block the "other posts we think you might like" experiments. How sad Zuckerberg would be that his assertion was actually bollocks?
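The dampening described above can be pictured as a per-topic weight that each "I don't like this" press multiplies down until the topic drops out of the ranking; a toy model, where the decay factor and cutoff are invented numbers, not any platform's real values:

```python
# Toy feed model: each "I don't like this" press halves the topic's weight;
# topics that fall below a cutoff stop being recommended.
DECAY = 0.5     # assumed per-dislike multiplier
CUTOFF = 0.15   # assumed visibility threshold

def dislike(weights, topic):
    weights[topic] = weights.get(topic, 1.0) * DECAY

def recommended(weights):
    return sorted(t for t, w in weights.items() if w >= CUTOFF)

weights = {"sports": 1.0, "birds": 1.0}
for _ in range(3):            # long-press "I don't like this" three times
    dislike(weights, "sports")
print(recommended(weights))   # sports (weight 0.125) is now below the cutoff
```

This also illustrates the "takes a few times" effect: one press only dampens a topic; repeated presses are what push it under the threshold.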
There's definitely a mass of people who can't/won't/don't get past passive/least-effort relationships with things on screens. These would be the type that in the TV days would simply leave the TV on a specific channel all day and just watch whatever was on, and probably haven't changed their car radio dial from the station they set it to when they bought the car. In modern times they probably have their cable TV they still pay for on a 24 hour news channel and simply have that going all day.
To be fair, in times far past, you really didn't have much choice in TV or radio channels, and I suspect it's this demographic that tend to just scroll down Facebook and take what it gives without much thought other than pressing Like on stuff.
Yup. Knowing the exact percentage of those people would be hurtful to my soul, I think, but I suspect they drive a meaningful percentage of business. Like that time when Netflix just kept shows playing, because some people couldn't be bothered to actually choose something to watch?
Maybe the TikTok algorithm is better, but the "I don't like this" action on Meta properties just blatantly does not work. I still get the same type of clickbait content no matter how many times I try to get rid of it. Maybe watching other types of Reels would do it, but no thanks.
Transparency would prove or disprove this. Release the algorithm and let us decide for ourselves. In my experience, Instagram made an algorithm change 3-4 years ago. It used to be that my feed was exactly my interests. Then overnight my feed changed. It became a mix of 1. interracial relationship success stories 2. scantily clad women clickbait, 3. east asian "craft project" clickbait, and just general clickbait. It felt as if "here's what other people like you are clicking on" became part of the algorithm.
Brains are wired that way. Gossip and rage bait are not something that people actively decide on; it's subconscious. It's weird saying that this is a problem of individuals: propaganda is effective not because people are choosing to believe it.
Right. When we're talking about the scale of humanity itself, we've moved far past individual actions.
At the scale we're operating, if only 1% is susceptible to these algorithms, that's enough to translate to noticeable social issues and second-order effects.
What gets me about some platforms is all the text-in-images and video with senseless motion. I've been dipping my toes into just about any social network where I could possibly promote my photography, and the worst of them all is Instagram, where all the senseless motion drives me crazy.
Current social media have basically found the "bliss point" of online engagement to generate revenue and keep the eyes attached. These companies found a way to keep people hooked, and strong emotions seem to be a major tool.
It really isn't a choice. It is very accessible, many friends are on social networks, and you slowly get sucked into shorts. Then it becomes an addiction as your brain craves the dopamine hits.
Personally I really enjoy Mastodon and Bluesky but I am very deliberate at avoiding negative people, I do not follow and often mute or block “diss abled” people who complain about everything or people who think I make their life awful because I am cisgender or who post 10 articles an hour about political outrage. The discover page on Bluesky is algorithmic and respects the “less like this” button, and the last time I looked it had 75% less outrage than the Following page. (A dislike button that works is a human right in social media!)
Once I get my database library reworked, a project I have in the queue is a classifier which filters out negative people so I can speed follow and not add a bunch of negativity to my feed, this way I get to enjoy real gems like
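A first cut of such a classifier doesn't even need an LLM; a naive keyword screen over an account's recent posts is a plausible baseline (the word list, threshold, and function names here are placeholders, not the planned implementation):

```python
# Naive negativity screen: score an account by the fraction of its recent
# posts containing outrage-style vocabulary; skip accounts over a threshold.
OUTRAGE = {"outrage", "disgusting", "shameful", "furious"}  # placeholder word list
THRESHOLD = 0.5  # placeholder: skip accounts where most posts score negative

def negativity(posts):
    hits = sum(any(word in p.lower() for word in OUTRAGE) for p in posts)
    return hits / len(posts) if posts else 0.0

def worth_following(posts):
    return negativity(posts) <= THRESHOLD

angry = ["This is disgusting!", "Absolutely shameful.", "Nice sunset"]
calm = ["New bird photo", "Trip report", "Recipe thread"]
print(worth_following(angry), worth_following(calm))
```

A real version would want a trained sentiment model rather than a word list, but it gives a concrete shape to "speed follow" screening.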
FWIW, I've been consistently posting quality stuff on Bluesky for the last year, and despite having a few hundred followers, I get ZERO engagement.
People in the Bluesky subreddit tell me it's not a "post and ghost" platform in that you have to constantly interact with people if you want to earn engagement, but that's too time consuming.
In other words, the discovery algorithm(s) on BlueSky sucks.
Maybe it doesn't suck. Others are just better at posting discoverable content than you. (note: "discoverable" =/= "engaging")
If we believe the claim that the discoverability algorithm avoids optimizing for "engagement", who would be more discoverable? The person coming in to show off one high-quality article every 6 months, or the person doing weekly blogs with some nuggets of information on the same topic?
Maybe your article goes viral, but odds are that the weekly blogger will amass more followers, have more comments, and will build up to a point where they 99% of the time get more buzz on their updates than the one hit wonder.
It's just Twitter 2. It's the same as Twitter, made by the same people who made Twitter, doing the same thing as Twitter in the same way as Twitter, with the same culture as Twitter, plus a fig leaf of decentralisation.
> while I liked it better than Twitter, for me it was disappointing that it was just Twitter, but different
I feel exactly the same way.
I think there needs to be a kind of paradigm shift into something different, probably something that people in general don't have a good schema for right now.
Probably something decentralized or federated is necessary in my opinion, something in between email and twitter or reddit? But there's always these chicken and egg issues with adoption, who are early adopters, how that affects adoption, genuine UX-type issues etc.
Sounds like a return to old-school, long-term forums. They still exist, but there's a reason Reddit and Twitter took over the "forum space". They took the core ideas and injected them with "engagement": in this case, the voting system of Reddit and the follower system of Twitter. Gamifying the act of interacting with people had effects beyond anyone's comprehension in 2007.
> Probably something decentralized or federated is necessary in my opinion, something in between email and twitter or reddit?
So, Usenet? The medium is the message and all that, sure, but unless you change where the message originates you are ultimately going to still end up in the same place.
What do they mean "fixed"? Wasn't social media, from day one, about gossip, self-promotion and gaslighting? They excel at that, so one would say they serve their purpose.
It's very misguided to pretend that social media mobs could replace "the press". There is a reason the press exists in the first place: to inform critically, instead of listening to hearsay.
> Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?
They really pump up what is effectively a message board (Facebook, Twitter), or a video website with a comment/message feature (YouTube, TikTok), or an instant messenger with groups (WhatsApp). NONE OF THIS IS NEW.
I'd like to see more software that amplifies local social interactions.
There are apps like Meetup, but a lot of people just find it too awkward. Introverts especially do not want to meet just for the sake of meeting people, so they fall back on social media.
Maybe this situation is fundamentally not helped by software. All of my best friendships organically formed in real-world settings like school, work, neighborhood, etc.
Indeed, you're describing the lack of a 3rd place. These days, maybe even the lack of a 2nd place, as you graduate from school and work is now fully remote. Without that societal push towards being in a public spot, many people will simply withdraw into themselves.
A third place would fix this, especially for men, who need "things". You go to a bar for "thing", and if you meet some others to yell at sports with, bonus. We have fewer "things" for Gen Z, and those things happen rather infrequently in my experience. I'm not sure if a monthly Meetup is quite enough to form strong bonds.
My tech company (I was founding engineer and CTO) took over a co-working space and expanded it; we ran that portion of it at break-even.
We intentionally set out to create a social club/co-working space. A lot goes into it. I'm a non-theist who comes from a multi-generational group of theist church planters (like 100s of churches, just over and over); it's a multi-factorial process with distinct transitions in space-time and community size, where each transition has to be handled so you don't alienate your community's founding members (who will be very different from later members) and are still able to grow.
People don't do it because they can't see the value while they are in the early mess of it. You have to like people to pull it off; you have to NOT be a high-control person, yet be able to operate with high control at certain developmental stages. You have to have a moral compass everyone understands and that you are consistent with. Tech people like zero trust; you have to create a maximum-trust environment, which means NOT extracting value from the community but understanding that the value is intrinsic in the community.
You have to design a space to facilitate work and play. It's not hard, but you have to get everything right: the community can't have a monoculture, it must be enjoyable/uncomfortable, and you must design things so people can choose their level of engagement and grow into the community. It's easier once it has enough inertia that they understand they are building a thing with real benefits.
Even things like the flow of foot traffic within the space, chokepoints narrowing: these kinds of things all affect how people interact.
I've been wanting to set up something like a 3rd place that tries only to break even. I'm unfortunately not a very social person.
Because these 3rd spaces are open to anyone and are probably bringing people in from internet communities. What do you do when someone comes along and they're not breaking any rules, but it's clear that no one likes them? I've seen it drive entire groups away, but because the person has done nothing wrong I can't/don't want to just say "fuck off kid, no one likes your weird ass".
I would love to hear more about this. I am in need of a 3rd place, but unfortunately the only meetups around here are sports or churches. What did the members of your group do after you shut down? Were you open to members offering donations to keep your 3rd place going?
This isn't a technology problem. Technology can help accessibility, but fundamentally this is an on-the-ground, social coordination problem.
Functioning, welcoming, and well-run communities are the only thing that solves this. Unfortunately, technology often makes this worse, because it creates such a convenient alternative and also creates a paradox of choice. I.e. people think "when there's 1000 meetups to check out, and this one isn't perfect, I'll just move onto the next one", when actually it's the act of commitment that makes a community good.
I work in survey research and I'm rather appalled at how many people would rather survey a sample of AIs than a sample of people and claim they can come to some valid conclusion as a result.
There are many ways AIs differ from real people, and any conclusions you can draw from them are limited at best -- we've had enough bad experiments done with real people.
Appalling. The entire question of "fixing social media", for any definition of "fixing", involves not just the initial reaction to some change but the second-and-greater-order effects. LLMs are point-in-time models and intrinsically can not be used for even guessing at second-order effects of a policy over time. This shouldn't have gotten past the proposal phase.
The worst problem with people these days seems to be that they don't pick up the phone. Probability-based polls are still pretty good about most things unless they involve Donald Trump -- it seems some Trump supporters either don't pick up the phone or lie to pollsters. Some polls correct for this with aggressive weighting, but how scientific that really is is up in the air.
Yeah. Sadly, answering any unidentified number on our phones fell to spam. These days, if the message is important they can leave a voicemail... and 90% of those voicemails reveal spam as well.
> unless they involve Donald Trump
A sense of shame, perhaps. If you ask someone "how often do you brush your teeth" and compare it to more pragmatic testing, you see people have some sense of wanting to give the "right" answer, even in a zero-risk anonymous survey.
> I work in survey research and I'm rather appalled at how many people would rather survey a sample of AIs than a sample of people and claim they can come to some valid conclusion as a result.
Genuine question : are you scared for your job ?
I see this tendency to use "synthetic personas" growing and frankly, having to explain why this sucks is insulting in itself. Decision makers are just not interested in having this kind of thought argument.
Yes and mostly no. No, because I work in games and I've seen enough people thinking that a "good game" just needs pretty graphics and a facsimile of "fun" to know that AI can't ever simulate this. Most humans can't even seem to do it consistently, on all organizational levels.
But I have a footnote of "yes" because, as you said, decision makers are just not interested in having this discussion about "focus on making fun games". So it will unfortunately affect my job in the short and even medium term, because so much of the big money in games these days is in fact not focused on making a game, but on trying to generate a gambling simulator, an engagement trap, or (you guessed it) AI hype -- both to claim you can just poof up assets, and to try to replace labor.
Knowing this, I do have long term plans to break out into my own indie route.
Not really. Sales is doing better than it ever has since I’ve been here. For one thing, AI folks want our data. Despite challenges in the industry, public opinion is more relevant than ever, and the areas where we are really unsurpassed are (1) historical data and (2) the most usable web site, the latter of which I am a part of.
It doesn’t surprise me if they found that the emergent behaviors didn’t change given their method. Modifying the simulation to make them behave differently would mean your rules have changed the model’s behavior to “jump tracks” into simulating a different sort of person who would generate different outputs. It’s not quite analogous to having the same Bob who likes fishing responding to different stimuli. Sort of like how Elon told Grok to be “unfathomably based” and to “stop caring about being PC”, and suddenly it turned into a neo-Nazi chan-troll. Changing the inputs for an LLM isn’t taking a core identity and tweaking it; it’s completely altering the relationships between all the tokens it’s working with.
I would assume there is so much in the corpus based on behavior optimized for the actual existing social media we have that the behavior of the bots is not going to change. The bot isn’t responding to incentives like a person would; it’s mimicking the behavior it’s been trained on. And if there isn’t enough training data of behavior under the different inputs you’re trying to test, you’re not actually applying the “treatment” you think you are.
This is true of us as individuals, but importantly, as a society we have far more agency than it sometimes feels like when you watch us all acting out our own individual self-destruction.
Banning CFCs, making seatbelts a legal requirement, making drink driving illegal, gun control (in countries outside the USA), regulations on school canteens. These are all examples of coordination where we've solved problems further upstream so that individuals don't have to fight against their own greedy, self-serving, short-sighted nature.
We do have the ability to fix this stuff, it's just messy.
I didn't mean to imply that, they definitely can be those things and far worse. But there are many examples of societal coordination that achieve the exact opposite of that (Scandinavian countries are of course the canonical example).
> Only some interventions showed modest improvements. None were able to fully disrupt the fundamental mechanisms producing the dysfunctional effects.
I think this is expected. Think back to newsgroups, email lists, web forums. They were pretty much all chronological or maybe had a simple scoring or upvoting mechanism. You still had outrage, flamewars, and the guy who always had to have the last word. Social media engagement algorithms probably do amplify that but the dysfunction was always part of it.
The only thing I've seen that works to reduce this is active moderation.
Social media is a few people selling the data of many people looking at content made by some people selling something.
There is also research and promotion of values going on and the thing as a whole is entertaining and can be rigged or filtered on various levels by all participants.
It’s kind of social. The general point system of karma or followers applies, and people can have a career and a feeling of accomplishment to look back on when they retire. The cosmic rule of “anything, too much, no good” applies.
It’s not really broken but this is the age of idiots and monsters, so all bets are off.
I'm honestly really tired of having to read through so much bloat in these types of articles. Can't they just elaborate on exactly the thing in the title? Do they have to spend paragraphs writing stories?
Any interesting work on using LLMs to moderate posts/users? HN is often said to be different because of its moderation, couldn't you train an LLM moderator on similar rules to reduce trolls, ragebait, and low effort posts at scale?
A big problem I see is users in good faith are unable to hold back from replying to bad faith posts, a failure to follow the old "don't feed the trolls rule".
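The shape of such an automated moderator pass is easy to sketch, even though nothing here is from the article. Everything below is hypothetical: the rule list is invented, and the stub classifier stands in for where a real LLM call would go.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical guidelines, loosely in the spirit of HN's rules.
RULES = ["no personal attacks", "no ragebait", "no low-effort posts"]

@dataclass
class Verdict:
    allowed: bool
    reason: str

def moderate(post: str, classify: Callable[[str, List[str]], str]) -> Verdict:
    # classify() is expected to return "ok" or the name of the violated
    # rule; in a real system it would prompt an LLM with the rules
    # and the post text, then parse the model's answer.
    label = classify(post, RULES)
    if label == "ok":
        return Verdict(True, "passes guidelines")
    return Verdict(False, f"flagged: {label}")

# Stub standing in for the LLM call, so the sketch runs on its own.
def stub_classifier(post: str, rules: List[str]) -> str:
    return "no personal attacks" if "idiot" in post.lower() else "ok"

print(moderate("You're an idiot.", stub_classifier))
print(moderate("Interesting result, thanks for the link.", stub_classifier))
```

The hard part isn't the plumbing, it's whether the classifier's judgment holds up at scale and under adversarial users.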
> Ars Technica: I'm skeptical of AI in general, particularly in a research context, but there are very specific instances where it can be extremely useful. This strikes me as one of them, largely because your basic model proved to be so robust.
You can't accuse them of hiding their bias and contradictions.
How can a single paper using an unproven (for this type of research) tech disprove such (alleged) skepticism?
People bending over backwards to do propaganda to harvest clicks.
Point-to-point communication between every human on Earth and every other human on Earth flattens the communication hierarchies that used to amplify expertise and a lot of other behaviors. We created new hierarchies, but they are mostly demagogues pandering to the middle. Direct delegation is sort of like trying to process an image without convolution. Nobody knows what anyone else thinks, so we just trust that one neuron.
> They then tested six different intervention strategies...
None of these approaches offer what I want, and what I think a lot of people want, which is a social network primarily of people you know and give at least one shit about. But in reality, most of us don't have extended social networks that can provide enough content to consistently entertain us. So even if we don't want 'outside' content (as if that were an option), we'll gravitate to it out of boredom, and our feeds will gradually morph back into some version of the clusterfucks we all deal with today.
I think that's an oversimplification. People have problems sure, but just like alcohol, social media can and does exacerbate them. The answer to dealing with the former is regulation. I'm not sure that is feasible for the latter.
Social media leverages the billions spent on marketing over the years and the skills of knowledgeable experts in multiple disciplines -- basically thousands of human-years of expertise at manipulating people -- against a random person with zero guard up who just wants to chat with friends or make new ones. That isn't a people problem.
Designing social media as a positive place was and continues to be a choice that no one is making. Because it's too damn profitable to make a hellhole / attention vacuum that people can't stop using.
> these platforms too often create filter bubbles or echo chambers.
I thought the latest research had debunked this and showed that the _real_ source of conflict with social media is that people are forced out of their natural echo-chambers and exposed to opinions that they normally wouldn't have to contend with?
> ...the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we're probably doomed...
No specific dynamics are named in the remainder of the article, so how are we supposed to know if they're "structurally embedded" in anything, let alone if we're doomed?
I'm reading Tim Urban's book titled "What's Our Problem".
It definitely explains the different types of thinking that make up our current society, including social media. I haven't gotten to the part yet where he suggests what to do about it, but it's a fascinating insight into human behavior in this day and age.
I think this problem is partly due to greedy algorithms and partly due to these sites being so large they have no site culture.
Site culture is what prevents mods from having to step in and sort out every little disagreement. Modern social media actively discourages site culture, and post quality becomes a race to the bottom. Sure, it's harder to onboard new users when there are social rules that need to be learnt and followed, but you retain users and have a more enjoyable experience when everyone follows a basic etiquette.
The main reason it can't be fixed is that it has political or corporate operators, and propaganda bots have taken over. There is always an agenda seeking supremacy running through social media threads, even on mundane topics.
Social media can be fixed; it's just that the incentives are not aligned.
To make money, social media companies need people to stay on as long as possible. That means showing people sex, violence, rage and huge amounts of copyright infringements.
There is little advantage in creating real-world consequences for bad actors. Why? because it hurts growth.
There was a reason the old TV networks didn't let any old twat with a camera broadcast stuff on their network: they would get huge fines if they broke decency "laws" (yes, America had/has censorship, hence why The Simpsons say "whoopee" and "snuggle").
There are few things that can cause company ending fines for social media companies. Which means we get almost no moderation.
The US does not have a working legislature. It hasn't for possibly ~15 years.
But if you think about how closely network TV was regulated by a government regulator, despite the power those networks wielded (and against the incumbent radio and newspapers), we know it has happened.
The issue is that government in the USA has been dysfunctional for >30 years, not that regulation is ineffective.
- Widespread adoption before understanding risks: embraced globally before fully grasping the mental health, social, and political consequences, especially for young people.
- Delayed but significant harm: can lead to gradual impacts like reduced attention span, increased anxiety, depression, loneliness, and polarization.
- Corporate incentives misaligned with public health: media companies design platforms for maximum engagement, leveraging psychological triggers while downplaying or disputing the extent of harm.
- Smoking feels good but doesn't provide any useful function.
- Some social media use feels good and doesn't provide any useful function, but social media is extremely useful to cheaply keep in touch with friends and family and extremely useful for discovering and coordinating events.
Fortunately the "keep in touch" part can be done with apps that don't have so much of the "social media" part, like Telegram, Discord, and even Facebook Messenger versus the main app.
I think most of the social media power users don't connect with friends and family at all through the platforms. Young Gen Zers just scroll Tiktok (or whatever clone they prefer) and share the ones they like through snapchat/discord/telegram/messenger/sms/whatsapp. Some will post stuff for their friends to see through "close friends" or whatever, but it's much less personal than it once was with Facebook groups and whatnot
Agreed. And it's not necessary when you have so many apps. They're using Tiktok for scrolling and Discord when they actually want to chat with their friends.
This analogy undersells the negative impact of social media. Smoking wasn't a propaganda machine at the hands of a few faceless corpos with no clear affiliation, for example, nor did it form a global spynet
Social media in a profit-seeking system can't be fixed. Profit-seeking provides the evolutionary pressure to turn it into something truly destructive to users. The only way it can work is via ownership by a benevolent non-profit. However, even that would likely give in to corruption given enough time. Outlawing it completely, along with regulating the algorithmic shaping of the online experience, is probably the inevitable future. Unfortunately, it won't come until the current system causes a complete societal fracture and collapse.
If enough users are destroyed, advertisers (social media's real customers) won't have sufficient markets for their products, and profits will fall. Social media can't destroy its users and survive.
Seriously though, I disagree. Social media in a profit-seeking system can work if the users are the ones who pay. The easiest way for this to work, now that net neutrality is no longer a thing, is bundling through users' phone bills. If Facebook et al. were bundled similarly to how Netflix, Hulu, and other streaming apps are now packaged with phone plan deals, then the users would be the focus, not the advertisers. This might require that social media be legislatively required to offer true ad-free options, though.
I think you're on the right track, but not getting to what I view as the logical conclusion: publicly funded options, free at the point of service to everyone. I've also humored the idea of taking it one level of abstraction further: a publicly funded cloud computing infrastructure, access to which is free (up to a level of usage). People could then choose to use these cloud computing resources to host, say, federated instances of open social networks.
I mean, it will never happen, but I think it's a path that resolves a lot of problems, and therefore a fun thought experiment.
Do all of these points apply to the traditional media funhouse mirror that we love to hate, too?
> "The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve."
I see illegal war, killing without due process, and kleptocracy. It's partly the media's fault. It's partly the peoples' fault for depending on advertising to subsidize free services, for gawking, for sharing without consideration, for voting in ignorance.
Social media reflects the people; who can't be "fixed" either.
If you're annoyed with all of these people on here who are lesser than and more annoying than you, then stop spending so much time at the bar.
The study is based on having LLMs decide to amplify one of the top ten posts on their timeline or share a news headline. LLMs aren’t people, and the authors have not convinced me that they will behave like people in this context.
The behavioral options are restricted to posting news headlines, reposting news headlines, or being passive. There’s no option to create original content, and no interventions centered on discouraging reposting. Facebook has experimented[0] with limits to reposting and found such limits discouraged the spread of divisive content and misinformation.
I mostly use social media to share pictures of birds[1]. This contributes to some of the problems the source article[2] discusses. It causes fragmentation; people who don’t like bird photos won’t follow me. It leads to disparity of influence; I think I have more followers than the average Mastodon account. I sometimes even amplify conflict[3].
[0] https://www.socialmediatoday.com/news/internal-research-from...
[1] https://social.goodanser.com/@zaktakespictures/
[2] https://arxiv.org/html/2508.03385v1#S3
[3] https://social.goodanser.com/@zaktakespictures/1139481946021...
LLMs aren’t people, and the authors have not convinced me that they will behave like people in this context.
This was my initial reaction as well, before reading the interview in full. They admit that there are problems with the approach, but they seem to have designed the simulation in a very thoughtful way. There really doesn't seem to be a better approach, apart from enlisting vast numbers of people instead of using LLMs/agent systems. That has its own problems as well of course, even leaving cost and difficulty aside.
There’s no option to create original content...
While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.
> > There’s no option to create original content...
> While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.
Ok, but, this is by design. Other forms of social media, places like Mastodon etc have a far, far higher rate of people creating original content.
I'm not sure the experiment can be done other than to try interventions on real users of a public social media service as Facebook did in the article I linked. Of course people running those services usually don't have the incentives to test harm reduction strategies and certainly don't want to publicize the results.
> the vast majority of users don't create original content
That's true now at least most of the time, but I think it's as much because of design and algorithmic decisions by the platforms to emphasize other types of content. Early Facebook in particular was mostly original content shared between people who knew each other. The biggest problem with that was it wasn't very profitable.
Nonsense. The vast majority of my Facebook friends post at least some original content.
Fortunately we don't have to rely on your anecdata, people actually study this stuff:
https://news.gallup.com/poll/467792/social-media-users-incli...
> U.S. adults commonly engage with popular social media platforms but are more inclined to browse content on those websites and apps than to post their own content to them. The vast majority of those who say they use these platforms have accounts with them, but less than half who have accounts -- and even smaller proportions of all U.S. adults -- post their own content.
https://www.pewresearch.org/internet/2019/04/24/sizing-up-tw...
> Most users rarely tweet, but the most prolific 10% create 80% of tweets from adult U.S. users
https://www.pewresearch.org/internet/2021/11/15/the-behavior...
> The analysis also reveals another familiar pattern on social media: that a relatively small share of highly active users produce the vast majority of content.
That's junk science and doesn't refute the specific point I made. Facebook users are far more likely to post original content than X users. It might just be some blurry backlit vacation photos but it is original content.
They post, but it doesn't get read; all their friends' feeds are just as swamped with crap as theirs is.
Social media as a concept can definitely be fixed. Just stop doing algorithms, period.
Strip the platforms of any and all initiative. Make them dumb pipes. Stop pretending that people want to use social media for entertainment and news and celebrities. Stop trying to turn it into interactive TV. Stop forcing content from outside of my network upon me. Make the chronological feed the only option.
Social media is meant to be a place where you get updates about the lives of people you follow. You would visit several times a day, read all new updates, maybe post your own, and that's it. The rest of the time, you would do something else.
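The "dumb pipe" model described above is simple enough to state in a few lines. A minimal sketch (the data shapes are invented for illustration):

```python
def chronological_feed(posts, follows, viewer):
    """Return only posts authored by accounts the viewer follows,
    newest first -- no ranking, no recommendations, no injected content."""
    followed = follows.get(viewer, set())
    visible = [p for p in posts if p["author"] in followed]
    return sorted(visible, key=lambda p: p["ts"], reverse=True)

posts = [
    {"author": "alice", "ts": 3, "text": "bird photo"},
    {"author": "brand", "ts": 2, "text": "promoted junk"},
    {"author": "bob", "ts": 1, "text": "life update"},
]
follows = {"me": {"alice", "bob"}}
print(chronological_feed(posts, follows, "me"))
```

The "promoted junk" from an unfollowed account simply never appears, which is the whole point: the platform has no lever for injecting engagement-optimized content.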
> Stop pretending that people want to use social media for entertainment and news and celebrities
People actually want media, social and otherwise, for exactly that.
> Stop forcing content from outside of my network upon me.
There are social media apps and platforms that don't do that. They are persistently less popular. People, by and large, do want passive discovery from outside of their network, just like they do, in aggregate, want entertainment and news and celebrities.
> Make the chronological feed the only option.
Chronological by what? Original post? Most recent edit? Most recent response? Most recent reaction?
it will fix very little. The "problems" of social media are rooted in selfish human behavior, it's like a giant high-school. You can't "fix" that because it's ingrained in humans
You know the "retweet" feature on Twitter didn't originally exist? Before the feature was implemented, people would just write "RT" followed by the author's username, then paste in the text of the tweet they wanted to retweet.
There's nothing wrong with reposts that are made knowingly by people you follow. My issue is with current dominant social media platforms all focusing on forcing people to see content from outside of their network that they would've otherwise never seen, because neither them, nor the people they follow, would follow anything like that.
> Strip the platforms of any and all initiative. Make them dumb pipes. Stop pretending that people want to use social media for entertainment and news and celebrities. Stop trying to turn it into interactive TV. Stop forcing content from outside of my network upon me. Make the chronological feed the only option.
Congrats, you now have platforms no one will care about, as attention span gets sniped by competitors who want to maximize engagement and don't care about your arbitrary rules (ie, literally what happened 15 years ago).
> you now have platforms no one will care about
I would care, and I imagine there are others who would too. I don’t use social media anymore (at all!) because of this. If I could have the chronological feed restored and no intrusion of other content I’d redownload immediately. There must be a market for this.
There's plenty of options, take your pick. The latest one that I've been hearing about is https://retro.app, there are others.
The issue of course is that your friends won't be on it, most of them won't sign up even if you beg them, and most likely none of you will be using the service anymore 6 months from now.
What do you mean by "no one"? There is definitely demand for such a platform.
It's a dilemma.
There might be demand, but this "platform A" will be in competition with a dopamine-focused engagement "platform B" which also supports to host updates from "the lives of people you follow".
The majority of people will then have both installed but spend more time on "platform B" as it is actively engaging them.
Platform A will again end up in competition for user-attention with Platform B, as it needs money to operate their business etc.
Now if Platform A asks for a subscription fee to fund their non-engagement social media platform, how many of these users described above will pay and not simply prefer the free "platform B"?
How will such a churn affect users willing to PAY for "Platform A", if people whose "life they want to follow" have completely moved to "Platform B"?
Funny enough, as a European I could use WhatsApp as this "Platform A", as it has features to share status updates, pictures, etc. as part of your profile. Everyone I know has WhatsApp; no one is using those features.
So in essence, this Platform A already exists in Europe but doesn't work as "social media" because people don't engage with it...
> The majority of people will then have both installed but spend more time on "platform B" as it is actively engaging them.
And why would that be a problem? Most people also spend more time sleeping than using social media, so what? Let them be. Give them a tool that they would decide how to use best to suit their lifestyle.
> Platform A will again end up in competition for user-attention with Platform B, as it needs money to operate their business etc.
It would not, because it would not be run by a commercial organization. At this point I'm convinced that it's impossible for a sane social media platform to exist unless it's decentralized and/or run by a nonprofit. As soon as one touches venture capital, enshittification becomes only a matter of time.
> Stop trying to turn it into interactive TV.
wait are you talking about social media or sites that play videos?
I'm not talking about literal television, but more about something where you don't really get to decide what you see, you go there to get entertained with whatever.
For several years now, they’re mostly one and the same.
For people who don't know what they're talking about, sure. But pedophiles and addictive videos are two completely different things, and it would help if you defined which you're referring to.
> Just stop doing algorithms
While we're at it, shall we stop storing data?
You know what I meant.
Why can't it be fixed? Just remove algorithms and show only subscribed content in chronological order. That's how most of the early platforms worked and it was fine.
Probably because there's no monetary incentive for that, so "can't". It would mean the big social media companies collapsing, because their entire raison d'etre at this point is mass-manipulation.
> It would mean the big social media companies collapsing
And what would be the downside of that? :D
Why do you treat it like absolute voodoo? It's a website that shows you videos, with the same algorithm they've been using for 20 years. It's only become a problem since, basically, iOS came out and every single clueless non-technical person got on the internet and started discovering decade-old memes for the first time.
I think it really is that simple. Have a discovery channel, recommendations side bar, just stop trying to add "shareholder value" through flawed machine learning attempts. Maintain a useful piece of software, is it too much to ask an earnings-driven corp? Probably.
stop using it then.
> That's how most of the early platforms worked and it was fine.
This is also how Mastodon works today, and it is fine.
Social media as a vessel for diverse discussion is a tall order. It’s too public, too tied to context, and ultimately a no-win game. No matter how carefully you present yourself, you’ll end up being the “bad guy” to someone. The moment a discussion touches even lightly on controversy, healthy dialogue becomes nearly impossible.
Think of it this way: you’re hosting a party, and an uninvited stranger kicks the door open, then starts criticizing how you make your bed. That’s about what it feels like to try to “fix” social media.
I really liked the Circles feature in Google+: you defined groups of friends, and you could make your posts visible to particular groups.
They were not like group chats or subreddits, the circles were just for you, it was just an easy way to determine which of your followers would see one of your posts.
This kind of interaction was common in early Facebook and Twitter too, where only your friends or followers saw what you posted, often just whitelisted ones. It was not all public all the time. Google+ just made that a bit more granular.
I suppose that these dynamics have been overtaken by messaging apps, but it's not really the same thing. It's too direct, too real-time and all messages mixed-in, I like the more async and distributed nature of posts with comments.
Granted, if you really want a diverse discussion and to talk with everyone in the world at once, indeed that's a different problem and probably fundamentally impossible to make non-toxic, people are people.
It's not solved in real life. It's a huge ask that it should be solved on the internet.
Where are good discussions between really different viewpoints anywhere?
Whitelisting solves the problem for me. I curate every tweet I see with a browser extension. Strangers can't kick down the door. I only see content from my direct follows. It dramatically reduces the stress. Maybe a little like horse blinkers.
> Whitelisting solves the problem for me. I curate every tweet I see with a browser extension. Strangers can't kick down the door. I only see content from my direct follows. It dramatically reduces the stress. Maybe a little like horse blinkers.
This extension? https://github.com/rxliuli/mass-block-twitter
How do you find new things to follow? If everyone did this it would be extremely rare to encounter new content.
We'd just go back to human curation, you'd whitelist a few curators you liked, people wanting to promote their content would email a link to a curator, if they thought their audience would like it they'd share it, you'd see it via your whitelist and if you liked the look of it you'd whitelist them.
So you are fighting against the platform that you're using. It reminds me about people constantly fighting with their own computer (Windows) to remove ads and crap. In both cases viable alternatives exist which don't require this huge effort.
Do you mind sharing this extension? I would prefer if it also shows the retweets of people you follow as that is an endorsement no matter what people say.
None of that is the fault of a website. That's been the case for millennia.
> Social media as a vessel for diverse discussion is a tall order. It’s too public, too tied to context, and ultimately a no-win game. No matter how carefully you present yourself, you’ll end up being the “bad guy” to someone. The moment a discussion touches even lightly on controversy, healthy dialogue becomes nearly impossible.
Worth reading Jaron Lanier's Ten Arguments for Deleting Your Social Media Accounts Right Now book:
https://www.amazon.com/Arguments-Deleting-Social-Media-Accou...
"Diverse discussion" is just something I don't want. Of course I've made up my mind about all kinds of things and I don't really need to see opposing points of view as though they are novel thoughts that I've never considered before. Sure, tell me again why your religion or your conspiracy theory proves that the scientific consensus is a hoax. Maybe you'll convince me this time?
I don't mind Mastodon, but I'm pretty selective in who I follow, and diversity of opinions isn't one of my criteria.
> I don't really need to see opposing points of view as though they are novel thoughts that I've never considered before
I mean, that's fine, if you think that you can consider all conceivable angles thoroughly, by yourself. I for one welcome opposing views, but I suppose if my idea of that meant "religion or conspiracy theories" I'd probably be avoiding it too.
I follow people I can learn from, not people who try to convince me that everything I already know is wrong. I don't follow people who post misinformation, reject science, or who think that ad hominem attacks are a valid form of debate. There are a lot of them out there!
I still think old school linear forums are the best format for online discussion. They’re not perfect by any means, but I think they still beat all the alternatives I’ve tried.
The old school forums also centered around a single topic or interest, which I think helped keep things focused and more civil. Part of the problem with social media is that it wants to be everything for everyone.
I like old-school forums with an optional chat room for people to sync up in real time if they want.
The era of sites with a phpBB forum and an IRC channel was really fun for me, and I miss it a lot.
I made lots of friends that way in the past -- close friends -- and it's unlike anything I've encountered since then with social media.
The internet has become a primary battlefield for making money, and we can't go back to the days when it was just a non-commercial hobby that people enjoyed. To make money online, it's crucial to spread content as widely as possible, and the most effective methods for this are clickbait and ragebait. That's why the enshittification of the internet was inevitable.
This seems somewhat disproven by the existence of places like this? Strict moderation really does work wonders to prevent some of the worst behaviors.
Not that you won't have problems, even here, from time to time. But it is hard to argue that things aren't kept much more civil than in other spots?
And, in general, avoiding direct capital incentives to drive any questionable behavior seems a pretty safe route?
I would think this would be a lot like public parks and such. Disallow some commercial behaviors and actually enforce rules, and you can keep some pretty nice places?
I generally agree that strict moderation is the key, but there's obviously a threshold of users and activity past which this becomes unfeasible; Hacker News user activity is next to nothing compared to sites like Facebook, Twitter, or Reddit. Even on Reddit, you see smaller subreddits able to achieve this.
But just like a public park, if 2 million people rock up it's going to be next to impossible to police effectively.
I'm skeptical of proving stuff about new-social-media with LLMs, because LLMs themselves are [presumably] trained on quite a bit of existing-social-media text.
A lot of talk goes into how Facebook or other social media use algorithms to encourage engagement, that often includes outrage type content, fake news, rabbit holes and so on.
But here's the thing ... people CHOOSE to engage with that, and users even produce that content for social media platforms for free.
It's hard to escape that part.
I remember trying Bluesky and while I liked it better than Twitter, for me it was disappointing that it was just Twitter, but different. Outlandish short posts, same lame jokes / pithy appeals to our emotions, and so on. People on there want to behave the same way they wanted to on Twitter.
> people CHOOSE to engage with that
In the same way a smoker "chooses" to engage with cigarettes. Let's not underestimate the fact that core human programming is being exploited to enable such behavior. Similar to telling a smoker to "just put the cigarette down", we can't just suddenly tell people on social media to "stop being angry".
>people on [BlueSky] want to behave the same way they wanted to on Twitter.
Yes. Changing established habits is even harder to address. You can't make a horse drink (I'm sure anyone who ever had to deal with a disengaged captive audience feels this in their souls). While it's become many people's primary "news source", aka the bread, most people came there for the circus.
I don't really have an answer here. Society needs to understand social media addiction the same way it understands sugar addiction: drive home that it's not healthy and should be used sparingly. That's not something you can fix with laws and regulation. Not something you fix in even a decade.
> But here's the thing ... people CHOOSE to engage
Kinda, but they also don't really realise that they have much more control over the feed than they expect (in certain areas)
For the Reels/TikTok/For You-style feeds, it shows you subjects that you engage with. It will A/B test other subjects that similar people engage with. That's all it's doing: continual A/B testing to see if you like whatever flavour of bullshit is popular.
Most people don't realise that you can banish posts from your feeds by doing a long press on "I don't like this" or its equivalent. It takes a few tries for the machine to work out whether it's an account, a group of accounts, or a theme you don't like, and then it'll stop showing it to you. (Threads, for example, took a very long time to stop showing me fucking sports.)
Why don't more people know this? Because it hurts short-term metrics for whatever bollocks the devs are working on, so it's not that well advertised. Just think how unsuccessful the experiments in the Facebook app would have been if you'd been able to block the "other posts we think you might like" experiments. How sad Zuckerberg would be that his assertion was actually bollocks?
There's definitely a mass of people who can't/won't/don't get past passive/least-effort relationships with things on screens. These would be the type that in the TV days would simply leave the TV on a specific channel all day and just watch whatever was on, and probably haven't changed their car radio dial from the station they set it to when they bought the car. In modern times they probably have their cable TV they still pay for on a 24 hour news channel and simply have that going all day.
To be fair, in times far past, you really didn't have much choice in TV or radio channels, and I suspect it's this demographic that tend to just scroll down Facebook and take what it gives without much thought other than pressing Like on stuff.
Yup. Knowing the exact percentage of those people would be hurtful to my soul, I think, but I suspect they drive a meaningful percentage of business. Like that time Netflix just started playing shows automatically, because some people couldn't be bothered to actually choose something to watch?
Maybe the TikTok algorithm is better, but the "I don't like this" action on Meta properties just blatantly does not work. I still get the same type of clickbait content no matter how many times I try to get rid of it. Maybe watching other types of Reels would do it, but no thanks.
Transparency would prove or disprove this. Release the algorithm and let us decide for ourselves. In my experience, Instagram made an algorithm change 3-4 years ago. It used to be that my feed was exactly my interests. Then overnight my feed changed. It became a mix of 1. interracial relationship success stories 2. scantily clad women clickbait, 3. east asian "craft project" clickbait, and just general clickbait. It felt as if "here's what other people like you are clicking on" became part of the algorithm.
>people CHOOSE to engage with that
Brains are wired that way. Gossip and ragebait aren't things people actively decide on; it's subconscious. It's weird to say this is a problem of individuals; propaganda is effective not because people choose to believe it.
Right. When we're talking about the scale of humanity itself, we've moved far past individual actions.
At the scale we're operating, if only 1% is susceptible to these algorithms, that's enough to translate to noticeable social issues and second-order effects.
And it's not 1%.
What gets me about some platforms is all the text-in-images and video with senseless motion. I've been dipping my toes into just about any social network where I could possibly promote my photography, and the worst of them all is Instagram, where all the senseless motion drives me crazy.
Yeah, I miss GeoCities. The pages were ugly, but they were that user's ugly ... gloriously personal ugly.
Facebook is not my page, it looks nothing like I want... my content is in many ways the least important thing featured.
Current social media have basically found the "bliss point" of online engagement to generate revenue and keep the eyes attached. These companies found a way to keep people hooked, and strong emotions seem to be a major tool.
It really isn't a choice. It is very accessible, many friends are on social networks, and you slowly get sucked into shorts. Then it becomes an addiction as your brain craves the dopamine hits.
Similar to what Howard Moskowitz did with food.
Another way to put it is, social media is an unregulated drug.
Personally I really enjoy Mastodon and Bluesky but I am very deliberate at avoiding negative people, I do not follow and often mute or block “diss abled” people who complain about everything or people who think I make their life awful because I am cisgender or who post 10 articles an hour about political outrage. The discover page on Bluesky is algorithmic and respects the “less like this” button and last time I looked has 75% less outrage than the following page. (A dislike button that works is a human right in social media!)
Once I get my database library reworked, a project I have in the queue is a classifier that filters out negative people, so I can speed-follow without adding a bunch of negativity to my feed. This way I get to enjoy real gems like
https://mas.to/@skeletor
Cross posting that would cure some of the ills of LinkedIn!
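For anyone curious what such a filter might look like, here's a minimal stdlib-only sketch. It stands in for a real trained classifier with a tiny hand-picked lexicon; the word list, function names, and 0.3 threshold are all my own illustrative assumptions, not a description of the commenter's actual project.

```python
# Toy negativity scorer as a stand-in for a real classifier:
# count posts that hit a small hand-picked lexicon (hypothetical word list)
# and only "speed-follow" accounts that stay under a threshold.

NEGATIVE_WORDS = {"outrage", "awful", "hate", "disgusting", "terrible", "worst"}

def negativity_score(posts):
    """Fraction of posts containing at least one lexicon hit."""
    if not posts:
        return 0.0
    hits = sum(
        1 for post in posts
        if any(word in post.lower() for word in NEGATIVE_WORDS)
    )
    return hits / len(posts)

def worth_following(posts, threshold=0.3):
    """Follow only accounts whose negativity score stays under the threshold."""
    return negativity_score(posts) < threshold

print(worth_following(["Look at this heron!", "Sunset over the bay"]))        # True
print(worth_following(["This is awful", "Pure outrage", "Nice walk today"]))  # False
```

A real version would presumably swap the word list for a sentiment model scored over an account's recent posts, but the follow/skip decision logic stays the same shape.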
> Bluesky
FWIW, I've been consistently posting quality stuff on Bluesky for the last year, and despite having a few hundred followers, I get ZERO engagement.
People in the Bluesky subreddit tell me it's not a "post and ghost" platform in that you have to constantly interact with people if you want to earn engagement, but that's too time consuming.
In other words, the discovery algorithm(s) on BlueSky sucks.
Maybe it doesn't suck. Others are just better at posting discoverable content than you. (note: "discoverable" =/= "engaging")
If we believe the claim that the discoverability algorithms avoid optimizing for "engagement", who would be more discoverable? The person showing off one high-quality article every 6 months, or the person writing weekly blogs with some nuggets of information on the same topic?
Maybe your article goes viral, but odds are that the weekly blogger will amass more followers, have more comments, and will build up to a point where they 99% of the time get more buzz on their updates than the one hit wonder.
It's just Twitter 2. It's the same as Twitter, made by the same people who made Twitter, doing the same thing as Twitter in the same way as Twitter, with the same culture as Twitter, plus a fig leaf to decentralisation.
> while I liked it better than Twitter, for me it was disappointing that it was just Twitter, but different
I feel exactly the same way.
I think there needs to be a kind of paradigm shift into something different, probably something that people in general don't have a good schema for right now.
Probably something decentralized or federated is necessary in my opinion, something in between email and twitter or reddit? But there's always these chicken and egg issues with adoption, who are early adopters, how that affects adoption, genuine UX-type issues etc.
Sounds like a return to old-school, long-term forums. They still exist, but there's a reason Reddit and Twitter took over the "forum space". They took the core ideas and injected them with "engagement": in this case, the voting system of Reddit and the follower system of Twitter. Gamifying the act of interacting with people had effects beyond anyone's comprehension in 2007.
> Probably something decentralized or federated is necessary in my opinion, something in between email and twitter or reddit?
So, Usenet? The medium is the message and all that, sure, but unless you change where the message originates you are ultimately going to still end up in the same place.
What do they mean "fixed"? Wasn't social media, from day one, about gossip, self-promotion, and gaslighting? They excel at that, so one could say they serve their purpose.
It's very misguided to pretend that social media mobs could replace "the press". There is a reason the press exists in the first place: to inform critically, instead of listening to hearsay.
ugh this again.
>Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?
They really pump up what is effectively a message board (Facebook, Twitter), a video website with a comment/message feature (YouTube, TikTok), or an instant messenger with groups (WhatsApp). NONE OF THIS IS NEW.
I'd like to see more software that amplifies local social interactions.
There are apps like Meetup, but a lot of people just find it too awkward. Introverts especially do not want to meet just for the sake of meeting people, so they fall back on social media.
Maybe this situation is fundamentally not helped by software. All of my best friendships organically formed in real-world settings like school, work, neighborhood, etc.
Indeed, you're describing the lack of a 3rd place. These days, maybe even the lack of a 2nd place as you graduate school and work is now fully remote. Without that societal push towards being in a public spot, many people will simply withdraw to themselves.
A third place would fix this, especially for men, who need "things". You go to a bar for a "thing", and if you meet some others to yell at sports with, bonus. We have fewer "things" for Gen Z, and those things happen rather infrequently in my experience. I'm not sure a monthly Meetup is quite enough to form strong bonds.
I ran a co-working space social club that resolved this issue for many introverts in 2015-2017.
This is at its core a third-places issue; I haven't had the capital to restart it post-COVID.
That sounds interesting. How did that work, did you rent a place for coworking and then opened it up for the social aspect?
My tech (I was founding engineer and CTO) company took over a co-working space and expanded it, we ran that portion of it at break even.
We intentionally set out to create a social club/co-working space. A lot goes into it. I'm a non-theist who comes from a multi generational group of theist church planters (like 100s of churches, just over and over), it's a multi factorial process with distinct transitions in space-time and community size, where each transition has to be handled so you don't alienate your communities founding members (who will be very different from later members) and still are able to grow.
People don't do it because they can't see the value while they are in the early mess of it. You have to like people to pull it off, and you have to NOT be a high-control person, while still being able to operate with high control at certain developmental stages. You have to have a moral compass everyone understands and that you are consistent with. Tech people like zero trust, but you have to create a maximum-trust environment, which means NOT extracting value from the community but understanding that the value is intrinsic in the community.
You have to design a space to facilitate work and play. It's not hard but you have to get everything right, community can't have mono culture and it must be enjoyable/uncomfortable, and you must design things so people can choose their level of engagement and grow into the community. It's easier once it has enough intertia that they understand they are building a thing with real benefits.
Even things like the flow of foot traffic within the space, chokepoints, narrowing: these kinds of things all affect how people interact.
I've been wanting to set up something like a third place that tries only to break even. I'm unfortunately not a very social person.
Because these third spaces are open to anyone and probably bring people in from internet communities, what do you do when someone comes along who isn't breaking any rules, but it's clear no one likes them? I've seen it drive entire groups away, but because the person has done nothing wrong, I can't/don't want to just say "fuck off kid, no one likes your weird ass".
I would love to hear more about this. I am in need of a third place, but unfortunately the only meetups around here are sports or churches. What did the members of your group do after you shut down? Were you open to members offering donations to keep your third place going?
That's very interesting. Do you have time to elaborate a bit?
This isn't a technology problem. Technology can help accessibility, but fundamentally this is an on-the-ground, social coordination problem.
Functioning, welcoming, and well-run communities are the only thing that solves this. Unfortunately, technology often makes this worse, because it creates such a convenient alternative and also creates a paradox of choice, i.e. people think "when there are 1000 meetups to check out, and this one isn't perfect, I'll just move on to the next one", when actually it's the act of commitment that makes a community good.
I call it technology for touching grass e.g. look at The Offline Club [0]
[0] https://www.theoffline-club.com/
really good article on that topic here https://www.lookatmyprofile.org/blog/social-media-apps-engin...
I work in survey research and I'm rather appalled at how many people would rather survey a sample of AIs than a sample of people and claim they can come to some valid conclusion as a result.
There are many ways AIs differ from real people and any conclusions you can draw from them are limited at best -- we've had enough bad experiments done with real people
https://en.wikipedia.org/wiki/Stanford_prison_experiment#Int...
Appalling. The entire question of "fixing social media", for any definition of "fixing", involves not just the initial reaction to some change but the second-and-greater-order effects. LLMs are point-in-time models and intrinsically can not be used for even guessing at second-order effects of a policy over time. This shouldn't have gotten past the proposal phase.
I trust your judgement more than Ars Technica.
For us laymen, the flaw of using AI trained on people for surveys is the human element. Humans have a unique tendency to be spontaneous, wouldn't you say?
How would a focus group research team approach this when they’re bombarded by AI solutions that want their research funds?
The worst problem with people these days seems to be that they don't pick up the phone. Probability-based polls are still pretty good about most things unless they involve Donald Trump; it seems some Trump supporters either don't pick up the phone or lie to pollsters. Some polls correct for this with aggressive weighting, but how scientific that really is is up in the air.
Yeah. Sadly, our phones fell to spam; any unidentified number goes unanswered. These days, if the message is important, the caller can leave a voicemail... and 90% of those voicemails reveal spam as well.
>unless they involve Donald Trump
A sense of shame perhaps. If you ask someone "how often do you brush your teeth" and compare it to more pragmatic testing, you see people have some sense of wanting to give the "right" answer. Even in a zero risk anonymous survey.
>I work in survey research and I'm rather appalled at how many people would rather survey a sample of AIs than a sample of people and claim they can come to some valid conclusion as a result.
A YC company just launched doing exactly that.
https://news.ycombinator.com/item?id=44755654
"We trained a model on Twitter and Reddit content and were shocked to discover it generates a terrible community."
It's so weird to live in a time when what you just said needs to be said.
Genuine question: are you scared for your job? I see this tendency to use "synthetic personas" growing, and frankly, having to explain why this sucks is insulting in itself. Decision makers are just not interested in having this kind of argument.
Yes and mostly no. No, because I work in games, and I've seen enough people thinking that a "good game" just needs pretty graphics and a facsimile of "fun" to know that AI can't ever simulate this. Most humans can't even seem to do it consistently, at any organizational level.
But I have a footnote of "yes" because, as you said, decision makers are just not interested in having this discussion about "focusing on making fun games". So it will unfortunately affect my job in the short and even medium term, because so much of the big money in games these days is in fact not focused on making a game, but on trying to generate a gambling simulator, an engagement trap, or (you guessed it) AI hype: both claiming you can just poof up assets, and trying to replace labor.
Knowing this, I do have long term plans to break out into my own indie route.
Not really. Sales is doing better than it ever has since I’ve been here. For one thing, AI folks want our data. Despite challenges in the industry, public opinion is more relevant than ever and the areas where we are really unsurpassed is (1) historical data and (2) the most usable web site, the latter one I am a part of.
It doesn’t surprise me that they found the emergent behaviors didn’t change, given their method. Modifying the simulation to make the agents behave differently would mean your rules have made the model’s behavior “jump tracks” into simulating a different sort of person who would generate different outputs. It’s not quite analogous to having the same Bob who likes fishing respond to different stimuli. Sort of like how Elon told Grok to be “unfathomably based” and to “stop caring about being PC”, and suddenly it turned into a neo-Nazi chan-troll. Changing the inputs for an LLM isn’t taking a core identity and tweaking it; it’s completely altering the relationships between all the tokens it’s working with.
I would assume there is so much in the corpus based on behavior optimized for the actual existing social media we have that the behavior of the bots is not going to change. The bot isn’t responding to incentives like a person would; it’s mimicking the behavior it’s been trained on. And if there isn’t enough training data of behavior under the different inputs you’re trying to test, you’re not actually applying the “treatment” you think you are.
Wait what? Is there an article on this. That sounds absolutely insane.
Lots of them, for instance https://dl.acm.org/doi/10.1145/3708319.3733685
https://arxiv.org/abs/2508.06950 "Large Language Models Do Not Simulate Human Psychology" is a recent preprint.
The problem is people.
As a species we are greedy, self serving, and short sighted.
Social Media amplifies that, and we are well on our way to destroying ourselves.
This is true as individuals, but importantly as a society we have far more agency than sometimes it feels like when you watch us all acting out our own individual self-destruction.
Banning CFCs, making seatbelts a legal requirement, making drink driving illegal, gun control (in countries outside the USA), regulations on school canteens. These are all examples of coordination where we've solved problems further upstream so that individuals don't have to fight against their own greedy, self-serving, short-sighted nature.
We do have the ability to fix this stuff, it's just messy.
If you don’t think societies can be greedy, self serving, and short sighted I don’t know what to say.
We have raped this planet into a coma and our children will have to scrape together whatever remains when we are done.
I didn't mean to imply that, they definitely can be those things and far worse. But there are many examples of societal coordination that achieve the exact opposite of that (Scandinavian countries are of course the canonical example).
Things can change.
[dead]
> Only some interventions showed modest improvements. None were able to fully disrupt the fundamental mechanisms producing the dysfunctional effects.
I think this is expected. Think back to newsgroups, email lists, web forums. They were pretty much all chronological or maybe had a simple scoring or upvoting mechanism. You still had outrage, flamewars, and the guy who always had to have the last word. Social media engagement algorithms probably do amplify that but the dysfunction was always part of it.
The only thing I've seen that works to reduce this is active moderation.
There are some social media networks that promise to do so - for example https://izvir.org is one of them
Social media is a few people selling the data of many people looking at content made by some people selling something.
There is also research and promotion of values going on and the thing as a whole is entertaining and can be rigged or filtered on various levels by all participants.
It’s kind of social. The general point system of karma or followers applies and people can have a career and feeling of accomplishment to look back on when they retire. The cosmic rule of anything. too much, no good applies.
It’s not really broken but this is the age of idiots and monsters, so all bets are off.
I'm honestly really tired of having to read through so much bloat in these types of articles. Can't they just elaborate directly on the thing in the title? Do they have to spend paragraphs telling stories?
Any interesting work on using LLMs to moderate posts/users? HN is often said to be different because of its moderation, couldn't you train an LLM moderator on similar rules to reduce trolls, ragebait, and low effort posts at scale?
A big problem I see is users in good faith are unable to hold back from replying to bad faith posts, a failure to follow the old "don't feed the trolls rule".
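As a thought experiment on what that might look like, here's a minimal sketch of the plumbing for an LLM moderation pass. Everything here is hypothetical: the guideline text, the prompt format, and `call_llm` (stubbed out, standing in for whatever chat-completion API you'd actually use); it's not a description of how HN moderation works.

```python
# Sketch: route each comment through a rule-grounded LLM verdict.
# `call_llm` is a stub; a real implementation would call a hosted model.

GUIDELINES = """\
Flag the comment if it is any of: a personal attack, obvious ragebait,
a low-effort one-liner that adds nothing, or an off-topic troll.
Answer with exactly one word: FLAG or OK."""

def build_prompt(comment: str) -> str:
    """Embed the site's rules and the comment into one moderation prompt."""
    return f"{GUIDELINES}\n\nComment:\n{comment}\n\nVerdict:"

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs offline; always answers OK.
    return "OK"

def moderate(comment: str) -> bool:
    """Return True if the comment should be held for human review."""
    return call_llm(build_prompt(comment)).strip().upper() == "FLAG"

print(moderate("I think moderation scales poorly past a few thousand users."))  # False
```

The interesting design question is what happens on FLAG: auto-hiding invites false positives at scale, so a queue for human reviewers (as with small-subreddit moderation) seems the safer default.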
> Ars Technica: I'm skeptical of AI in general, particularly in a research context, but there are very specific instances where it can be extremely useful. This strikes me as one of them, largely because your basic model proved to be so robust.
You can't accuse them of hiding their bias and contradictions.
How can a single paper using an unproven (for this type of research) technology disprove such (alleged) skepticism?
People bending over backwards to do propaganda to harvest clicks.
Well, you can't, by definition, fix something that is a rigged game. Social media exists to maximise the ad dollar, not to benefit you.
Point-to-point communication between every human on Earth and every other human on Earth flattens the communication hierarchies that used to amplify expertise, along with a lot of other behaviors. We created new hierarchies, but they are mostly demagogues pandering to the middle. Direct delegation is sort of like trying to process an image without convolution: nobody knows what anyone else thinks, so we just trust that one neuron.
> They then tested six different intervention strategies...
None of these approaches offer what I want, and what I think a lot of people want, which is a social network primarily of people you know and give at least one shit about. But in reality, most of us don't have extended social networks that can provide enough content to consistently entertain us. So even if we don't want 'outside' content (as if that were an option), we'll gravitate to it out of boredom, and our feeds will gradually morph back into some version of the clusterfucks we all deal with today.
Social media isn't the problem; people are the problem, and we're still working out how to fix them.
I think that's an oversimplification. People have problems sure, but just like alcohol, social media can and does exacerbate them. The answer to dealing with the former is regulation. I'm not sure that is feasible for the latter.
I guess you could say the problem is that the wrong things are rewarded and amplified, but that just goes back to people.
Social media, leveraging the billions spent on marketing over the years, the skills of knowledgeable experts in multiple disciplines, basically thousands of human-years of expertise at manipulating people, pitted against a random person with zero guard up who just wants to chat with friends or make new ones: that isn't a people problem.
Designing social media as a positive place was and continues to be a choice that no one is making. Because it's too damn profitable to make a hellhole / attention vacuum that people can't stop using.
If you could plug into the inner thoughts of millions of people around the world at once, it would not be pleasant.
Social media has turned out to basically be this.
> these platforms too often create filter bubbles or echo chambers.
I thought the latest research had debunked this and showed that the _real_ source of conflict with social media is that people are forced out of their natural echo-chambers and exposed to opinions that they normally wouldn't have to contend with?
> ...the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we're probably doomed...
No specific dynamics are named in the remainder of the article, so how are we supposed to know if they're "structurally embedded" in anything, let alone if we're doomed?
I'm reading Tim Urban's book titled "What's Our Problem?".
It definitely explains the different types of thinking that make up our current society, including social media. I haven't gotten to the part yet where he suggests what to do about it, but it's a fascinating insight into human behavior in this day and age.
I think this problem is partly due to greedy algos and partly due to these sites being so large they have no site culture.
Site culture is what prevents mods from having to step in and sort out every little disagreement. Modern social media actively discourages site culture, and post quality becomes a race to the bottom. Sure, it's harder to onboard new users when there are social rules that need to be learnt and followed, but you retain users and have a more enjoyable experience when everyone follows a basic etiquette.
The main reason it can't be fixed is that it has political or corporate operators, and propaganda bots have taken over. There is always an agenda running through the threads of social media, each seeking supremacy, even on mundane topics.
Social media can be fixed; it's just that the incentives are not aligned.
To make money, social media companies need people to stay on as long as possible. That means showing people sex, violence, rage and huge amounts of copyright infringements.
There is little advantage in creating real-world consequences for bad actors. Why? because it hurts growth.
There was a reason why the old TV networks didn't let any old twat with a camera broadcast stuff on their network, why? because they would get huge fines if they broke decency "laws" (yes america had/has censorship, hence why the simpsons say "whoopee" and "snuggle")
There are few things that can cause company-ending fines for social media companies, which means we get almost no moderation.
Until that changes, social media will be "broken"
> Social media can be fixed, its just the incentives are not aligned.
So social media can't be fixed. Incentives are what matter.
The US does not have a working legislature. It hasn't for possibly ~15 years.
But if you think about how closely network TV was regulated by a government regulator, despite the power those networks wielded (and against the incumbent radio and newspapers), we know it has happened.
The issue is that government in the USA has been dysfunctional for >30 years, not that regulation is ineffective.
Incentives can be changed though, through law.
Social media is the new smoking...
Widespread adoption before understanding risks - embraced globally before fully grasping the mental health, social, and political consequences, especially for young people.
Delayed but significant harm - can lead to gradual impacts like reduced attention span, increased anxiety, depression, loneliness, and polarization
Corporate incentives misaligned with public health - media companies design platforms for maximum engagement, leveraging psychological triggers while downplaying or disputing the extent of harm
Not an accurate analogy in my opinion, but close.
- Smoking feels good but doesn't provide any useful function.
- Some social media use feels good and doesn't provide any useful function, but social media is extremely useful to cheaply keep in touch with friends and family and extremely useful for discovering and coordinating events.
Fortunately the "keep in touch" part can be done with apps that don't have so much of the "social media" part, like Telegram, Discord, and even Facebook Messenger versus the main app.
I think most of the social media power users don't connect with friends and family at all through the platforms. Young Gen Zers just scroll Tiktok (or whatever clone they prefer) and share the ones they like through snapchat/discord/telegram/messenger/sms/whatsapp. Some will post stuff for their friends to see through "close friends" or whatever, but it's much less personal than it once was with Facebook groups and whatnot
Agreed. And it's not necessary when you have so many apps. They're using Tiktok for scrolling and Discord when they actually want to chat with their friends.
'Smoking get's me taking breaks, going outside more, and makes me more social chatting with my coworkers/others on smoke breaks'
[dead]
This analogy undersells the negative impact of social media. Smoking wasn't a propaganda machine at the hands of a few faceless corpos with no clear affiliation, for example, nor did it form a global spynet
Social media in a profit-seeking system can't be fixed. Profit-seeking provides the evolutionary pressure to turn it into something truly destructive to users. The only way it can work is via ownership by a benevolent non-profit. However, even that would likely give in to corruption given enough time. Outlawing it completely, along with regulating the algorithmic shaping of the online experience, is probably the inevitable future. Unfortunately, it won't come until the current system causes a complete societal fracture and collapse.
If enough users are destroyed, advertisers (social media's real customers) won't have sufficient markets for their products, and profits will fall. Social media can't destroy its users and survive.
Seriously though, I disagree. Social media in a profit-seeking system can work if the users are the ones who pay. The easiest way for this to work (now that net neutrality is no longer a thing) is bundling through users' phone bills. If Facebook et al. were bundled similarly to how Netflix, Hulu, and other streaming apps are now packaged with phone plan deals, then the users would be the focus, not the advertisers. This might require that social media be legislatively required to offer true ad-free options, though.
I think you're on the right track, but not getting to what I view as the logical conclusion: publicly funded options, free at the point of service to everyone. I've also entertained the idea of taking it one level of abstraction further: a publicly funded cloud computing infrastructure, access to which is free (up to a level of usage). People could then choose to use these cloud computing resources to host, say, federated instances of open social networks.
I mean, it will never happen, but I think it's a path that resolves a lot of problems, and therefore a fun thought experiment.
Do all of these points apply to the traditional media funhouse mirror that we love to hate, too?
> "The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve."
I see illegal war, killing without due process, and kleptocracy. It's partly the media's fault. It's partly the people's fault for depending on advertising to subsidize free services, for gawking, for sharing without consideration, for voting in ignorance.
Social media reflects the people, who can't be "fixed" either.
If you're annoyed with all of these people on here who are lesser and more annoying than you, then stop spending so much time at the bar.
Can the bar be fixed?
Sure can!
“No smoking, gambling, or loose women.”
TaDAaaah!
No loose men either.
Someone has to run the place.