Social Media ‘Addiction’ Verdict
Dangerous in more ways than one.

I found this week’s verdict finding Meta and YouTube liable for a young woman’s mental health issues problematic on a number of fronts. While there’s little doubt that the platforms are intentionally and expertly designed to create engagement and keep people glued to their screens, that’s hardly a secret.
Why, Lay’s has been bragging for decades about the addictive qualities of their potato chips. “Betcha can’t eat just one!” Yet, it would be absurd for them to be held liable for the ensuing obesity epidemic. People are responsible for their own choices.
But Techdirt’s Mike Masnick (“Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For”) outlines a concern that hadn’t really occurred to me.
First things first: Meta is a terrible company that has spent years making terrible decisions and being terrible at explaining the challenges of social media trust & safety, all while prioritizing growth metrics over user safety. If you’ve been reading Techdirt for any length of time, you know we’ve been critical of the company for years.
[…]
But if you care about the internet — if you care about free speech online, about small platforms, about privacy, about the ability for anyone other than a handful of tech giants to operate a website where users can post things — these two verdicts should scare the hell out of you. Because the legal theories that were used to nail Meta this week don’t stay neatly confined to companies you don’t like. They will be weaponized against everyone. And they will functionally destroy Section 230 as a meaningful protection, not by repealing it, but by making it irrelevant.
[…]
For years, Section 230 has served as the legal backbone of the internet. If you’re a regular Techdirt reader, you know this. But in case you’re not familiar, here’s the short version: it says that if a user posts something on a website, the website can’t be sued for that user’s content. The person who created the content is liable for it, not the platform that hosted it. That’s it. That’s the core of it. It serves one key purpose: put the liability on the party who actually does the violative action. It applies to every website and every user of every website, from Meta down to the smallest forum or blog with a comments section or person who retweets or sends an email.
Plaintiffs’ lawyers have been trying to get around Section 230 for years, and these two cases represent them finally finding a formula that works: don’t sue over the content on the platform. Sue over the design of the platform itself. Argue that features like infinite scroll, autoplay, algorithmic recommendations, and notification systems are “product design” choices that are addictive and harmful, separate and apart from whatever content flows through them.
[…]
This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
[…]
The whole point of Section 230 was to keep platforms from being held liable for harms that flow from user-generated content. The “design” theory accomplishes exactly what 230 was meant to prevent — it just uses different words to get there.
[…]
This is almost exactly the legal landscape that existed before Section 230 was passed in 1996, and the reason Congress felt it needed to act.
In the early 1990s, Prodigy ran an online service with message boards and made the decision to moderate them to create a more “family-friendly” environment. In the resulting lawsuit, Stratton Oakmont v. Prodigy, the court ruled that because Prodigy had made editorial choices about what to allow, it was acting as a publisher and could therefore be held liable for everything users posted that it failed to catch.
The perverse incentive was obvious: moderate, and you’re on the hook for everything you miss. Don’t moderate at all, and you’re safer. Congress recognized that this was insane — it punished companies for trying to do the right thing — and passed Section 230 to fix it. The law explicitly said that platforms could moderate content without being treated as the publisher or speaker of that content. And, as multiple courts rightly decided, this was designed to apply to all publisher activity of a platform — every editorial decision, every way to display content. The whole point was to allow online services and users to feel free to make decisions regarding other people’s content, including how to display it, without facing liability for that content.
And a critical but often overlooked function of Section 230 is that it provides a procedural shield: it lets platforms get baseless lawsuits dismissed early, before the ruinous costs of discovery and trial.
Presumably, most websites that include user-generated content (OTB included, since we have had a comments section for 23-plus years) would win in court against similar claims. But, as Masnick notes, the mere fact that this avenue now exists is ruinous:
Every design decision — moderation algorithms, recommendation systems, notification settings, even the order in which posts appear — can now be characterized by some lawyer as a “defective product” rather than an editorial choice about third-party content.
[…]
The real cost here is the process. The California trial lasted six weeks. The New Mexico trial lasted nearly seven. Both involved extensive discovery, depositions of top executives including Zuckerberg himself, production of enormous volumes of internal documents, and armies of lawyers on both sides.
Meta can afford that. Google can afford that. You know who can’t? Basically everyone else who runs a platform where users post things.
And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.
Eventually, Masnick merges his concern with mine:
We also need to talk about the actual evidence of harm in these cases, because it’s thinner than most people realize.
The California plaintiff, known as KGM, testified that she began using YouTube at age 6 and Instagram at age 9, and that her social media use caused depression, self-harm, body dysmorphic disorder, and social phobia. Those are real and serious harms that genuinely happened to a real person, and no one should minimize her suffering.
But as [Eric] Goldman noted:
KGM’s life was full of trauma. The social media defendants argued that the harms she suffered were due to that trauma and not her social media usage. (Indeed, there was some evidence that social media helped KGM cope with her trauma). It is highly likely that most or all of the other plaintiffs in the social media addiction cases have sources of trauma in their lives that might negate the responsibility of social media.
The jury was asked whether the companies’ negligence was “a substantial factor” in causing harm. Not the factor. Not the primary factor. A substantial factor.
This standard is doing enormous work here, and nobody in the coverage seems to be paying attention to it. In most product liability cases, causation is relatively straightforward: the car’s brakes failed, the car crashed, the plaintiff was injured. You can trace a mechanical chain of events. There needs to be a clear causal chain between the product and the harm.
But what’s the equivalent chain here? The plaintiff scrolled Instagram, saw content that made her feel bad about her body, developed body dysmorphic disorder? Which content? Which scroll session? How do you isolate the “design” from the specific posts she saw, the comments she read, the accounts she followed?
With a standard that loose, applied to a teenager with multiple documented sources of trauma in her life, how do you disentangle what was caused by social media and what was caused by everything else? The honest answer is: you can’t. And neither could the jury, not with any scientific rigor. They made a judgment call based on vibes and sympathy — which is what juries do, but it’s a terrifying foundation for reshaping internet law.
The research on social media’s causal relationship to teen mental health problems is incredibly weak. Over and over and over again researchers have tried to find a causal link. And failed. Every time.
Lots of people (including related to both these cases) keep comparing social media to things like cigarettes or lead paint. But, as we’ve discussed, that’s a horrible comparison. Cigarettes cause cancer regardless of what else is happening in a smoker’s life. Lead paint causes neurological damage regardless of a child’s home environment. Social media is not like that. The relationship between social media use and mental health outcomes is complex, highly individual, and mediated by dozens of confounding factors that researchers are still trying to untangle.
And, also, neither cigarettes nor lead paint are speech. The issues involving social media are all about speech. And yes, speech can be powerful. It can both delight and offend. It can make people feel wonderful or horrible. But we protect speech, in part, because it’s so powerful.
But a jury doesn’t need to untangle those factors. A jury just needs to feel that a sympathetic plaintiff was harmed and that a deeply unsympathetic defendant probably had something to do with it.
Presumably, Meta and YouTube will appeal this verdict. Given the novel legal theory behind it, I haven’t the foggiest idea of how it will play out. But I agree with Masnick that the outcome here is perverse and potentially quite dangerous.
And now it becomes quite clear that the arguments for dismissing the Meta/YouTube verdict out of hand aren’t that convincing.
Sometimes, consumer products are designed to be inherently unsafe, i.e., lacking sufficient consumer protection features, in order to drive up profits.
That may be the case with social media as well.
Again, the parallel with gambling can be instructive. It is quite clear that a majority of problem gamblers (and substance abusers as well) are using gambling (or drugs) as a tool to regulate their emotions, either as a way to cope with underlying trauma or to deal with stress caused by life events.
Obviously, that doesn’t mean that gambling providers should not bear a certain responsibility to prevent or mitigate subsequent gambling-related harm among their customers.
Maybe it is not so strange, after all, to expect that social media companies implement consumer protection features into their products proportionate to the harm that their products could do to a significant subset of their customers.
Because let’s face it: social media are, in fact, designed to be addictive.
Why, then, let social media companies off the hook for the choices that they decided to make?
It’s probably more dangerous to have a system built around providing things like junk food and endless slop content. This verdict flows downhill from the way the world is, and will probably have adverse consequences. But what exactly do people expect to happen?
I think we are lucky that way back in the 70s, when the country still had some regard for the public, smoking was deemed bad enough to ban ads from television, which was the beginning of the long end of smoking as this natural thing you can do anywhere. If the cigarette companies came into existence now, there would be Marlboro Juniors, and the Trump administration would be entering into some joint venture with Philip Morris to hand them out at middle schools in the South.
@drj: Lots of products are potentially harmful.
I don’t blame gambling apps or casinos, which huge numbers of people enjoy responsibly, because some people don’t—or even can’t. Alcohol is classified as a carcinogen in the EU and we regulate its sale here. But I don’t blame Jim Beam if I wake up with a hangover.
The case I’ve seen most analogized to this one is that of Big Tobacco. Given that people have continued to smoke for generations after we began putting warning labels on the packages, I’m not sympathetic to those who willingly risked the things they were warned about. But at least in that case, the companies did their best to hide dangers that they had inside knowledge of from the public.
@Modulo Myself: I think we have the policy around these things about right. Our schools educate children about nutrition and the like. We mandate warning and nutrition labels. We ban certain harmful additives. We make efforts to restrict certain activities to adults. But, by and large, we let people eat as many Twinkies and scroll through as much TikTok as they desire.
@James Joyner:
And that’s why we put warning labels on everything, instruct users on responsible use, etc. Do social media companies do anything like that?
Not even when a casino knowingly keeps taking some poor schmuck’s money when he is clearly agitated and chasing losses at 3 AM?
Of course, you can think that’s fine, but I suspect that’s rather out of step with how most people think – and certainly out of step with the mindset behind generally prevailing consumer protection regulations.
At least recognize that you are arguing from a minority position then.
@James Joyner:
Half of what educated Americans talk about centers on how unhealthy their relationships are with eating or their bodies or their phones, and this discourse is the most healthy thing about consumption. Every college professor I know is horrified by AI and how impotent their students are when faced with actual reading and writing. The miracle drug of the decade magically limits appetites. That’s all it does: makes you not want to eat crap. We sell comically large vehicles for no purpose whatsoever, vehicles so big they can’t even fit into parking spaces, and a third of the country doesn’t hunt but has a personal arsenal.
This is not a healthy way to live. It’s an incredibly negligent way to exist. Selling people crap and then shaming them for consuming it is twisted. It would be one thing if it created a better world, if the struggle was worth it in some Darwinian way. But it seems actually that the point is the endless cycle of consumption and then regret and loathing or, if you aren’t smart enough to understand the essence of the dilemma, total Trump-like oblivion, just circling the drain forever.
I confess to having no practical answers on how to deal with the system other than the ersatz ones I’ve found. But in no way do we have good policies regarding personal freedom and consumption.
Sorry to quote someone who is quite out of fashion in these days when we’re all supposed to worship unrestrained capitalism and the billionaires it creates, but when I read JJ’s arguments here I just keep thinking about Lenin saying “The Capitalists will sell us the rope with which we will hang them.”
As long as some company can make a buck, we must all agree that there is nothing we can do as a society to stop them. The only thing that matters is “freedom,” here meaning the freedom to be manipulated into handing over all our money to one of these corporations so they can destroy our lives.
If I followed JJ’s belief system, I’d be outraged at the way the poor Sackler family is being persecuted.
I watched a news conference where a woman (maybe this girl’s mother) basically yelled that you can’t blame the mother or parents for their child’s usage of social media. Oh yes, I can. Absolutely. They are in charge of their children and their usage and tools. They can take them away. These are the same people who put TVs and games and laptops in their kid’s rooms, then complain they stay up too late. These are the same people who demand to be in charge of what schools teach their kids. They continually demand rights but abdicate responsibility. One goes with the other.
@James Joyner:
“But at least in that case, the [tobacco] companies did their best to hide dangers that they had inside knowledge of from the public”
The same appears to be true for the social media companies.
There is a lot of inertia behind treating addiction as a moral failure rather than a disease.
I think JJ is making an argument for drug legalization. It’s not the fault of the person offering the addictive substance, it’s the fault of the person taking the drug. And I agree in theory.
But then theory meets Human with disappointing results. Many years ago there was a big controversy in Massachusetts over mandatory helmets for motorcyclists. Why can’t the individual rider be responsible for their own choice? Huh? Huh? Well, because it’s a big drain on public resources when some jackass breaks his head open and the taxpayer ends up paying for Dr. Robbie to stuff his brains back in.
We can either be the kind of society that allows the biker into the ER, or we can be the kind of society that lets him die in the road because: freedom. Society gets a vote because society bears the cost of self-destructive behavior.
The question is where and how to draw the line between the needs of the many, and the needs of the individual. I don’t know that this decision draws the line correctly, but I do believe the line has to be drawn. And I believe social media has been a hugely destructive force, far more destructive than illegal drugs, or helmetless bikers and that therefore society has a legitimate interest in defending itself.
@James Joyner:
You should.
While casinos in Vegas have brochures about problem gambling*, they do little when faced with customers with gambling problems. The only people they care enough about to stop from gaming are either advantage players or those who happen to get on a winning streak.
Lose too much, take out a marker, lose that, and you’ll get free drinks and meals and maybe even a free room. Count cards, take advantage of a promotion, or simply get too lucky for too long, and you’ll be asked to leave the premises.
* The brochures are perfectly fine, informative, and offer useful advice and contacts for help and other resources. They’re also stashed rather out of the way, mixed in with the usual tourist crap brochures.
@Scott:
I mean, yes, and no.
I remember it being kind of a big deal when my kids turned 13 to let them have a Facebook account, but the FB of a decade and a half ago was a way to connect with family and friends. It evolved beyond that some time ago.
The increased usage of algorithms to push content and to keep you glued to your phone has clearly changed our relationship to these devices.
I heard about a study yesterday finding that these days upwards of 41% of your FB feed comes from unconnected accounts, i.e., not your friends, not things you purposefully liked, but things the algorithm thinks you want to see.
I also think that most people haven’t figured out that they aren’t the customers, they are the products.
I am more sympathetic than James is to this verdict, if anything, because it is pretty amazing to realize how easy it is to pick up your phone for one purpose and realize 15 or more minutes later that you allowed yourself to be sucked into a social media app.
I think we need a reckoning with this technology, especially before AI takes it over.
Wait until they find out about booze.
@Michael Reynolds: Indeed. It is possible to allow the addictive thing to be available, legal, but also regulated.
There is also, like with the helmet law (or seat belts), a rather obvious public good to be derived from such regulations, even if individuals don’t like them.
I support Section 230. I don’t agree with Masnick. These features are a problem. The Algorithm especially. It is because of it that Meta is liable. Users do not choose what they see. The “pictures of a blank wall” argument is a red herring.
People create content. Meta chooses it, via the Algorithm. People look at it. That’s what makes Meta liable. Meta is taking an active editorial role. Meta is not being a neutral transmitter for everyone who wants to communicate with users. If these lawsuits destroy the use of The Algorithm on all platforms – good. Very good.
Another problem with this is that it makes social media like this a perfect vector for propaganda. Because the platform can learn who is receptive to certain messages and blast them with those messages. (Since they are paying for the privilege, Meta is fine with this.) Those messages are likely to be much more defamatory and counterfactual than something in a broadcast medium, because nobody who isn’t likely to be receptive ever sees the message. Nor will the people who did see the message ever be able to precisely identify who told them the crazy things they believe.
In contrast to Fox News, which can be sued for libel and lose big.
Perhaps it’s just a matter of time before someone uses this method to sue Meta for libel, having figured out how to intercept and collect these messages.
I am not 100 percent negative on AI. The key point is that the AI must be structured to serve the customer with active ongoing consent. If an AI helper is to be deployed to assist users in finding things they want to look at, it can be structured differently. It needs to be made to act on the user’s behalf, not the Company’s behalf. Of course, the Company won’t spend nearly as much as they do now, but recommendation won’t go away. After all, Amazon regularly makes “people who bought this also bought …” suggestions. Nobody thinks that’s evil.
@Scott: “They continually demand rights but abdicate responsibility”
As clear a description of the American public as I’ve seen…
@Jay L. Gischer:
This. I miss old Facebook, which really was about connecting with people at a distance. I still use it for that, but also toy with taking a break from the app, or just giving up on it entirely.
And I will note that even though I am well aware that it is trying to suck me in, it is all too easy to get sucked in.
@Jay L. Gischer:
AI’s generative process isn’t neutral, it is/will be a billionaire-owned algorithm.
People are insane to outsource their thinking to Elon Musk or a similar money-hoarding degenerate.
ETA: the expected profits won’t be in the stuff that is long-term helpful to consumers.
BTW, if a lot of what shows up on your feed is not something you chose to see, like friends, family, publications or businesses you follow, but what the data mining company presents to you for whatever reason, then they are publishers.
NM resident here. This is what happens when 70s-era consumer protection laws are used to sue to mitigate the perceived harms wrought by social media/tech giants. These consumer protection laws were written to protect against price-fixers, misleading ads, etc. These laws were NOT written to protect against addictive algorithms. I do not know if the “right” result was achieved, but I do know that the law is not a static husk, it is a slowly evolving animal. The arguments used by the NM AG may seem novel, even outlandish, but you prosecute lawsuits with the laws you have, not the laws you want to have.
@Michael Reynolds: I agree that there are products that should be regulated and, indeed, banned. Of course, we tried that with alcohol to disastrous results. And, while I don’t know that legalization of “hard” drugs is the right answer, we haven’t exactly managed to eradicate the scourge.
Indeed, to the extent we’ve allowed products to be legally available, it seems unreasonable to me that companies ought to be held legally liable for their misuse. If a product does not function as advertised—if, for example, a car’s brakes don’t work and cause injury or death—then, obviously, they should be held liable.
@Jay L. Gischer: @Steven L. Taylor: I do think the algorithm issue is interesting, as it is indeed clearly the company selecting content rather than merely being a repository of it. But I’m more dubious of the “Where did the hour go?!” argument.
Lay’s chips are engineered to make it hard to stop at just a handful. That still doesn’t make Frito-Lay responsible for my consumption habits. “Your chips are just too goddamn yummy!” should not be a valid legal claim.
@James Joyner:
But isn’t that the case here? We are told Facebook exists to connect people, that’s its ‘advertising.’ But as discussed upstream, that’s not how it functions. It has a secondary function – force-feeding content the user never asked for. The analogy would be, ‘eat this delicious chocolate bar, it’s very tasty and we are just not going to mention that it also gives you the shits.’
ETA: I suppose the problem could be solved with a disclosure similar to that appended to ads for medicines. “Facebook will connect you to friends. Side effects may include brainwashing.”
@James Joyner: If someone eats a Lay’s chip and gets sick, Lay’s is liable. Therefore, Lay’s does its utmost to take care that eating lots of Lay’s won’t make you sick unless it is “reasonable” that anybody eating that much will be sick.
I think the focus on “designed to be addicting” is another plank in “not a neutral party” which is what Section 230 rests on. It was conceived with forums in mind, and anybody, at any time, could post terrible stuff on a forum. But the forum owner was a neutral party.
The point of the focus on “design for engagement” is that Facebook is active in the process that damages people. They exercise editorial control, and they also do their utmost to keep readers engaged, and not necessarily with stuff that’s good for them. Not at all a neutral party. I think the legal argument is focused on breaking the protection of Section 230 for this kind of situation, which was never considered when Section 230 was written.
@Jay L. Gischer:
Another wrinkle that pops up a lot is that The Algorithm isn’t a person, and so people argue that it cannot be making editorial decisions because The Algorithm isn’t human.
It’s bullshit, of course, but you will see it pop up in nearly every discussion.
It’s the same argument as “I didn’t crush orphans, I simply created an orphan crushing machine and placed it next to the orphans, with signs pointing to it that said ‘free candy’ — and look, there are safeguards, it uses facial recognition and consults databases to make sure it doesn’t crush non-orphans!”
Or “it isn’t racial gerrymandering, it’s political gerrymandering that just happens to ensure that black people will have the least representation possible.”
Which is to say that I think the Supreme Court would be inclined to rule in favor of orphan crushing machines.
ETA: Also, do Supreme Court Justices understand the Pigeonhole Principle?
@Kathy: It’s like the anti-smoking campaigns that the tobacco industry used to run. Carefully crafted to say “we’re warning people” but also be the least persuasive advertising known to mankind.
No real comment other than “great discussion everyone.” Things like this are why I still read through all the comments.
@Steven L. Taylor:
I had just about given up on Instagram for the same reasons. I gave it one last shot and checked the “turn off algorithm” box, and it was instantly different. I saw stuff from people I had been wondering what happened to. Within a day or two I started getting the information (raves, music announcements) that I had previously been missing.
How does Meta stop that? The box only stays checked for 30 days.
@James Joyner:
This is an insane statement. Lay’s has intentionally created something to control your consumption. They are using information to override your own control.
You’re also substituting what you believe to be your willpower for actual existing reality. There is a massive qualitative difference between what social media companies are doing now vs how it used to be.
There is a massive difference between what the gambling companies are doing vs what you claim. Hell, I think gambling should be legal. But there’s a huge difference between driving down to the casino to place some bets vs the sports betting industry making it as easy as possible for you to give them money and then doing everything in their power to get you back if you stop. We can have legal gambling. What we shouldn’t have is a machine in our pockets that encourages addiction.
It doesn’t diminish our freedoms one bit to regulate or stop any of this. The reality is that you are arguing that the rich and powerful have the absolute unfettered right to screw us, that we are absolutely barred from regulating them, and that unless we are absolutely perfectly in control of ourselves at all times, it’s our fault if we get fucked over.
Fucking throw off your shackles man.
@James Joyner:
I recently read a book on nutrition that laid out the decades long successful effort to reduce the influence of agribusiness lobbyists in favor of science in the food pyramid. An effort that has now been reversed. Red meat and tallow. Yum.
@DAllenABQ:
Indeed.
And this is why Donald J. Trump, 79, is president instead of Inmate 123456. His actions, like fomenting a “demonstration” to block certifying the election, are unprecedented. We don’t have laws that fit very well. We need to have a truth and reconciliation process, or de-MAGAfication, but we probably won’t.
@Gustopher: Interestingly enough, the AI bubble is probably destroying the “It’s just a machine” argument.
I am more aligned with James, assuming I am understanding him and Masnick here correctly. The result of the verdict likely means that the social media sphere of the internet will be dominated by a handful of providers, and everyone else disappears as the cost of defending against suits is too high. Everyone who has a troubled kid can claim that it was a deficiency in the algorithm of the social media they were using. But if I’m reading this correctly, it won’t limit things to social media and makes any online site vulnerable. It undercuts 230 and makes sites responsible for whatever is posted on their site if someone claims it harmed them. Just claim that the site was too appealing. As noted, the merits of a case don’t matter much if you have a sympathetic victim.
I am not really a legal expert. If this can be limited explicitly to social media meaning stuff like Facebook or Twitter, then I am not so worried. I suspect there would be some downsides to having only a few very large social media entities but maybe the trade off is good.
Steve
@steve222:
Is that not already the case?
@James Joyner:
Anecdotes ain’t data, to be sure, but I am speaking from personal experience and from behavior I have observed in others (and that is widespread in the media). It is pretty clear to me that FB Reels has learned what I am likely to click on. And the broader phenomenon is certainly linked to phone usage, period.
I think this analogy is problematic. Yes, if the chip could morph into the specific flavor and texture that I, personally, would find harder to put down, maybe you would have a point.
The algorithm is the issue, and as best as I understand it, they are engineering it to get people hooked. The better analogy is, therefore, tobacco.
Seems to me that lay opinions about legal rulings like this are often vibe driven (present company excluded of course).
I don’t like x.
Ruling punishes x (more specifically, producer of x).
Therefore ruling is good.
Totally understandable, as I too share the disdain for many of these tech leaders. And I too enjoy seeing them get smacked. And yet, I very much worry about unintended consequences.
More broadly, I align with Zadie Smith, though I do have a smartphone:
@Steven L. Taylor:
All analogies are.
But to add to this one, Lay’s does not place their product for free in homes, hotels, hospitals, offices, etc. They also don’t offer bags of chips that never run empty.
The actual big difference is that chips are a product, while “social” media is a machine that mines data and delivers ads. The reason you pay for chips but not for Fakebook is that to Lay’s you’re a consumer and to Meta you’re raw material.
Fakebook and the like have followed the enshittification curve. Once people got used to doomscrolling through a feed they cared about, it turns out the habit stuck even when the feed decayed into ads, unrelated content, etc.
Anecdotes, well, my Youtube feed tends to be mostly stuff I subscribe to and related content (mostly aviation, space, science, tech, cooking, entertainment, and economics). Now and then I know I’ve better things to do, for instance cooking, and I’ll still stay there scrolling for something to watch. I need to remind myself that I don’t need to watch another video like “50 movies that were box office poison,” or “I tried every burger in Dallas!”
I should edit my subscriptions. Remove all the things I’m really not interested in, dead channels, etc., and switch to the subscription feed when I’ve other things to do.
@Mimai:
I will cop to this, at least in part. It drives my general sympathy to the ruling, or at least to the notion that some level of judicial accountability is warranted.
I cannot speak to the actual quality of the ruling, and while I always have some concern about unintended consequences, that would have been true of an opposite ruling as well.
I have been, to the point of your behavior modification quote, increasingly convinced of the downside of social media, even if I am still a consumer thereof. And I do think that having a machine figure out the best way to frictionlessly feed people things they want to see can definitely have downsides and that some regulation regime is therefore warranted.
And I think that lawsuits like this, as a general matter, can often lead to such regs, and hence my vibing in a more positive direction than my co-blogging amigo.
@Kathy:
Sure, but some are worse (and better) than others!
Anecdote here…I sat next to a guy on a flight who was a psychologist who worked for a game making company. He told me that they monitor heart rate and pupil constriction in early game players to make the game as exciting and engaging as possible. I didn’t talk much, but I quietly noted that he drank four gin and tonics during a ninety minute flight. Was he making the game addictive? Did he understand that people (mostly teenage boys) are vulnerable to being entrapped? Is that why he drank?
@Steven L. Taylor:
Yes, but I like to say it once in a while…
@Steven L. Taylor: Not really. Here is a list of the more popular ones. There are lots of smaller ones. What happens if we reduce this to 5 or 6?
Facebook
Instagram
Twitter (X)
TikTok
YouTube
WhatsApp
Snapchat
LinkedIn
Reddit
Pinterest
Telegram
Discord
BeReal
Threads
Bluesky
Steve
@steve222: Well, Meta owns Facebook, Instagram, Threads, and WhatsApp. So your list is shorter than you suggest. And I am not sure all of the ones you mention (BeReal?) are all that significant.
I think that the Meta sites plus TikTok plus X plus Reddit covers an awful lot of space. I mean, very few of those are really independent players. It isn’t like there is a massive competitive environment.