Social Media ‘Addiction’ Verdict
Dangerous in more ways than one.

I found this week’s verdict finding Meta and YouTube liable for a young woman’s mental health issues problematic on a number of fronts. While there’s little doubt that the platforms are intentionally and expertly designed to create engagement and keep people glued to their screens, that’s hardly a secret.
Why, Lay’s has been bragging for decades about the addictive qualities of their potato chips. “Betcha can’t eat just one!” Yet, it would be absurd for them to be held liable for the ensuing obesity epidemic. People are responsible for their own choices.
But TechDirt’s Mike Masnick (“Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For”) outlines a concern that hadn’t really occurred to me.
First things first: Meta is a terrible company that has spent years making terrible decisions and being terrible at explaining the challenges of social media trust & safety, all while prioritizing growth metrics over user safety. If you’ve been reading Techdirt for any length of time, you know we’ve been critical of the company for years.
[…]
But if you care about the internet — if you care about free speech online, about small platforms, about privacy, about the ability for anyone other than a handful of tech giants to operate a website where users can post things — these two verdicts should scare the hell out of you. Because the legal theories that were used to nail Meta this week don’t stay neatly confined to companies you don’t like. They will be weaponized against everyone. And they will functionally destroy Section 230 as a meaningful protection, not by repealing it, but by making it irrelevant.
[…]
For years, Section 230 has served as the legal backbone of the internet. If you’re a regular Techdirt reader, you know this. But in case you’re not familiar, here’s the short version: it says that if a user posts something on a website, the website can’t be sued for that user’s content. The person who created the content is liable for it, not the platform that hosted it. That’s it. That’s the core of it. It serves one key purpose: put the liability on the party who actually does the violative action. It applies to every website and every user of every website, from Meta down to the smallest forum or blog with a comments section or person who retweets or sends an email.
Plaintiffs’ lawyers have been trying to get around Section 230 for years, and these two cases represent them finally finding a formula that works: don’t sue over the content on the platform. Sue over the design of the platform itself. Argue that features like infinite scroll, autoplay, algorithmic recommendations, and notification systems are “product design” choices that are addictive and harmful, separate and apart from whatever content flows through them.
[…]
This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
[…]
The whole point of Section 230 was to keep platforms from being held liable for harms that flow from user-generated content. The “design” theory accomplishes exactly what 230 was meant to prevent — it just uses different words to get there.
[…]
This is almost exactly the legal landscape that existed before Section 230 was passed in 1996, and the reason Congress felt it needed to act.
In the early 1990s, Prodigy ran an online service with message boards and made the decision to moderate them to create a more “family-friendly” environment. In the resulting lawsuit, Stratton Oakmont v. Prodigy, the court ruled that because Prodigy had made editorial choices about what to allow, it was acting as a publisher and could therefore be held liable for everything users posted that it failed to catch.
The perverse incentive was obvious: moderate, and you’re on the hook for everything you miss. Don’t moderate at all, and you’re safer. Congress recognized that this was insane — it punished companies for trying to do the right thing — and passed Section 230 to fix it. The law explicitly said that platforms could moderate content without being treated as the publisher or speaker of that content. And, as multiple courts rightly decided, this was designed to apply to all publisher activity of a platform — every editorial decision, every way to display content. The whole point was to allow online services and users to feel free to make decisions regarding other people’s content, including how to display it, without facing liability for that content.
And a critical but often overlooked function of Section 230 is that it provides a procedural shield: it lets platforms get baseless lawsuits dismissed early, before the ruinous costs of discovery and trial.
Presumably, most websites that include user-generated content (OTB included, since we have had a comments section for 23-plus years) would win in court against similar claims. But, as Masnick notes, the mere fact that this avenue now exists is ruinous:
Every design decision — moderation algorithms, recommendation systems, notification settings, even the order in which posts appear — can now be characterized by some lawyer as a “defective product” rather than an editorial choice about third-party content.
[…]
The real cost here is the process. The California trial lasted six weeks. The New Mexico trial lasted nearly seven. Both involved extensive discovery, depositions of top executives including Zuckerberg himself, production of enormous volumes of internal documents, and armies of lawyers on both sides.
Meta can afford that. Google can afford that. You know who can’t? Basically everyone else who runs a platform where users post things.
And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.
Eventually, Masnick merges his concern with mine:
We also need to talk about the actual evidence of harm in these cases, because it’s thinner than most people realize.
The California plaintiff, known as KGM, testified that she began using YouTube at age 6 and Instagram at age 9, and that her social media use caused depression, self-harm, body dysmorphic disorder, and social phobia. Those are real and serious harms that genuinely happened to a real person, and no one should minimize her suffering.
But as [Eric] Goldman noted:
KGM’s life was full of trauma. The social media defendants argued that the harms she suffered were due to that trauma and not her social media usage. (Indeed, there was some evidence that social media helped KGM cope with her trauma). It is highly likely that most or all of the other plaintiffs in the social media addiction cases have sources of trauma in their lives that might negate the responsibility of social media.
The jury was asked whether the companies’ negligence was “a substantial factor” in causing harm. Not the factor. Not the primary factor. A substantial factor.
This standard is doing enormous work here, and nobody in the coverage seems to be paying attention to it. In most product liability cases, causation is relatively straightforward: the car’s brakes failed, the car crashed, the plaintiff was injured. You can trace a mechanical chain of events. There needs to be a clear causal chain between the product and the harm.
But what’s the equivalent chain here? The plaintiff scrolled Instagram, saw content that made her feel bad about her body, developed body dysmorphic disorder? Which content? Which scroll session? How do you isolate the “design” from the specific posts she saw, the comments she read, the accounts she followed?
With a standard that loose, applied to a teenager with multiple documented sources of trauma in her life, how do you disentangle what was caused by social media and what was caused by everything else? The honest answer is: you can’t. And neither could the jury, not with any scientific rigor. They made a judgment call based on vibes and sympathy — which is what juries do, but it’s a terrifying foundation for reshaping internet law.
The research on social media’s causal relationship to teen mental health problems is incredibly weak. Over and over and over again researchers have tried to find a causal link. And failed. Every time.
Lots of people (including related to both these cases) keep comparing social media to things like cigarettes or lead paint. But, as we’ve discussed, that’s a horrible comparison. Cigarettes cause cancer regardless of what else is happening in a smoker’s life. Lead paint causes neurological damage regardless of a child’s home environment. Social media is not like that. The relationship between social media use and mental health outcomes is complex, highly individual, and mediated by dozens of confounding factors that researchers are still trying to untangle.
And, also, neither cigarettes nor lead paint are speech. The issues involving social media are all about speech. And yes, speech can be powerful. It can both delight and offend. It can make people feel wonderful or horrible. But we protect speech, in part, because it’s so powerful.
But a jury doesn’t need to untangle those factors. A jury just needs to feel that a sympathetic plaintiff was harmed and that a deeply unsympathetic defendant probably had something to do with it.
Presumably, Meta and YouTube will appeal this verdict. Given the novel legal theory behind it, I haven’t the foggiest idea of how it will play out. But I agree with Masnick that the outcome here is perverse and potentially quite dangerous.
And now it becomes quite clear that the arguments for dismissing the Meta/YouTube verdict out of hand aren’t that convincing.
Sometimes, consumer products are deliberately designed to be unsafe, i.e., lacking sufficient consumer protection features, in order to drive up profits.
That may be the case with social media as well.
Again, the parallel with gambling can be instructive. It is quite clear that a majority of problem gamblers (and substance abusers as well) are using gambling (or drugs) as a tool to regulate their emotions, either as a way to cope with underlying trauma or to deal with stress caused by life events.
Obviously, that doesn’t mean that gambling providers should not bear a certain responsibility to prevent or mitigate subsequent gambling-related harm among their customers.
Maybe it is not so strange, after all, to expect that social media companies implement consumer protection features into their products proportionate to the harm that their products could do to a significant subset of their customers.
Because let’s face it: social media are, in fact, designed to be addictive.
Why, then, let social media companies off the hook for the choices that they decided to make?
It’s probably more dangerous to have a system built around providing things like junk food and endless slop content. This verdict flows downhill from the way the world is, and it will probably have adverse consequences, but what exactly do people expect to happen?
I think we are lucky that way back in the ’70s, when the country still had some regard for the public, smoking was deemed bad enough to ban ads from television, which was the beginning of the long end of smoking as this natural thing you can do anywhere. Like if the cigarette companies came into existence now, there would be Marlboro Juniors, and the Trump administration would be entering into some joint venture with Philip Morris to hand them out at middle schools in the South.
@drj: Lots of products are potentially harmful.
I don’t blame gambling apps or casinos, which huge numbers of people enjoy responsibly, because some people don’t—or even can’t. Alcohol is classified as a carcinogen in the EU and we regulate its sale here. But I don’t blame Jim Beam if I wake up with a hangover.
The case I’ve seen most analogized to this one is that of Big Tobacco. Given that people have continued to smoke for generations after we began putting warning labels on the packages, I’m not sympathetic to those who willingly risked the things they were warned about. But at least in that case, the companies did their best to hide from the public dangers that they had inside knowledge of.
@Modulo Myself: I think we have the policy around these things about right. Our schools educate children about nutrition and the like. We mandate warning and nutrition labels. We ban certain harmful additives. We make efforts to restrict certain activities to adults. But, by and large, we let people eat as many Twinkies, and scroll through as much TikTok, as they desire.
@James Joyner:
And that’s why we put warning labels on everything, instruct users on responsible use, etc. Do social media companies do anything like that?
Not even when a casino knowingly keeps taking some poor schmuck’s money when he is clearly agitated and chasing losses at 3 AM?
Of course, you can think that’s fine, but I suspect that’s rather out of step with how most people think – and certainly out of step with the mindset behind generally prevailing consumer protection regulations.
At least recognize that you are arguing from a minority position then.
@James Joyner:
Half of what educated Americans talk about centers on how unhealthy their relationships are with eating or their bodies or their phones, and this discourse is the most healthy thing about consumption. Every college professor I know is horrified by AI and how impotent their students are when faced with actual reading and writing. The miracle drug of the decade magically limits appetites. That’s all it does: makes you not want to eat crap. We sell comically large vehicles for no purpose whatsoever, vehicles so big they can’t even fit into parking spaces, and a third of the country doesn’t hunt but has a personal arsenal.
This is not a healthy way to live. It’s an incredibly negligent way to exist. Selling people crap and then shaming them for consuming it is twisted. It would be one thing if it created a better world, if the struggle was worth it in some Darwinian way. But it seems actually that the point is the endless cycle of consumption and then regret and loathing or, if you aren’t smart enough to understand the essence of the dilemma, total Trump-like oblivion, just circling the drain forever.
I confess to having no practical answers on how to deal with the system other than the ersatz ones I’ve found. But in no way do we have good policies regarding personal freedom and consumption.