Social Media ‘Addiction’ Verdict
Dangerous in more ways than one.

I found this week’s verdict finding Meta and YouTube liable for a young woman’s mental health issues problematic on a number of fronts. There’s little doubt that the platforms are intentionally and expertly designed to create engagement and keep people glued to their screens, but that’s hardly a secret.
Why, Lay’s has been bragging for decades about the addictive qualities of their potato chips. “Betcha can’t eat just one!” Yet, it would be absurd for them to be held liable for the ensuing obesity epidemic. People are responsible for their own choices.
But TechDirt’s Mike Masnick (“Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For”) outlines a concern that hadn’t really occurred to me.
First things first: Meta is a terrible company that has spent years making terrible decisions and being terrible at explaining the challenges of social media trust & safety, all while prioritizing growth metrics over user safety. If you’ve been reading Techdirt for any length of time, you know we’ve been critical of the company for years.
[…]
But if you care about the internet — if you care about free speech online, about small platforms, about privacy, about the ability for anyone other than a handful of tech giants to operate a website where users can post things — these two verdicts should scare the hell out of you. Because the legal theories that were used to nail Meta this week don’t stay neatly confined to companies you don’t like. They will be weaponized against everyone. And they will functionally destroy Section 230 as a meaningful protection, not by repealing it, but by making it irrelevant.
[…]
For years, Section 230 has served as the legal backbone of the internet. If you’re a regular Techdirt reader, you know this. But in case you’re not familiar, here’s the short version: it says that if a user posts something on a website, the website can’t be sued for that user’s content. The person who created the content is liable for it, not the platform that hosted it. That’s it. That’s the core of it. It serves one key purpose: put the liability on the party who actually does the violative action. It applies to every website and every user of every website, from Meta down to the smallest forum or blog with a comments section or person who retweets or sends an email.
Plaintiffs’ lawyers have been trying to get around Section 230 for years, and these two cases represent them finally finding a formula that works: don’t sue over the content on the platform. Sue over the design of the platform itself. Argue that features like infinite scroll, autoplay, algorithmic recommendations, and notification systems are “product design” choices that are addictive and harmful, separate and apart from whatever content flows through them.
[…]
This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
[…]
The whole point of Section 230 was to keep platforms from being held liable for harms that flow from user-generated content. The “design” theory accomplishes exactly what 230 was meant to prevent — it just uses different words to get there.
[…]
This is almost exactly the legal landscape that existed before Section 230 was passed in 1996, and the reason Congress felt it needed to act.
In the early 1990s, Prodigy ran an online service with message boards and made the decision to moderate them to create a more “family-friendly” environment. In the resulting lawsuit, Stratton Oakmont v. Prodigy, the court ruled that because Prodigy had made editorial choices about what to allow, it was acting as a publisher and could therefore be held liable for everything users posted that it failed to catch.
The perverse incentive was obvious: moderate, and you’re on the hook for everything you miss. Don’t moderate at all, and you’re safer. Congress recognized that this was insane — it punished companies for trying to do the right thing — and passed Section 230 to fix it. The law explicitly said that platforms could moderate content without being treated as the publisher or speaker of that content. And, as multiple courts rightly decided, this was designed to apply to all publisher activity of a platform — every editorial decision, every way to display content. The whole point was to allow online services and users to feel free to make decisions regarding other people’s content, including how to display it, without facing liability for that content.
And a critical but often overlooked function of Section 230 is that it provides a procedural shield: it lets platforms get baseless lawsuits dismissed early, before the ruinous costs of discovery and trial.
Presumably, most websites that include user-generated content (OTB included, since we have had a comments section for 23-plus years) would win in court against similar claims. But, as Masnick notes, the mere fact that this avenue now exists is ruinous:
Every design decision — moderation algorithms, recommendation systems, notification settings, even the order in which posts appear — can now be characterized by some lawyer as a “defective product” rather than an editorial choice about third-party content.
[…]
The real cost here is the process. The California trial lasted six weeks. The New Mexico trial lasted nearly seven. Both involved extensive discovery, depositions of top executives including Zuckerberg himself, production of enormous volumes of internal documents, and armies of lawyers on both sides.
Meta can afford that. Google can afford that. You know who can’t? Basically everyone else who runs a platform where users post things.
And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.
Eventually, Masnick merges his concern with mine:
We also need to talk about the actual evidence of harm in these cases, because it’s thinner than most people realize.
The California plaintiff, known as KGM, testified that she began using YouTube at age 6 and Instagram at age 9, and that her social media use caused depression, self-harm, body dysmorphic disorder, and social phobia. Those are real and serious harms that genuinely happened to a real person, and no one should minimize her suffering.
But as [Eric] Goldman noted:
KGM’s life was full of trauma. The social media defendants argued that the harms she suffered were due to that trauma and not her social media usage. (Indeed, there was some evidence that social media helped KGM cope with her trauma). It is highly likely that most or all of the other plaintiffs in the social media addiction cases have sources of trauma in their lives that might negate the responsibility of social media.
The jury was asked whether the companies’ negligence was “a substantial factor” in causing harm. Not the factor. Not the primary factor. A substantial factor.
This standard is doing enormous work here, and nobody in the coverage seems to be paying attention to it. In most product liability cases, causation is relatively straightforward: the car’s brakes failed, the car crashed, the plaintiff was injured. You can trace a mechanical chain of events. There needs to be a clear causal chain between the product and the harm.
But what’s the equivalent chain here? The plaintiff scrolled Instagram, saw content that made her feel bad about her body, developed body dysmorphic disorder? Which content? Which scroll session? How do you isolate the “design” from the specific posts she saw, the comments she read, the accounts she followed?
With a standard that loose, applied to a teenager with multiple documented sources of trauma in her life, how do you disentangle what was caused by social media and what was caused by everything else? The honest answer is: you can’t. And neither could the jury, not with any scientific rigor. They made a judgment call based on vibes and sympathy — which is what juries do, but it’s a terrifying foundation for reshaping internet law.
The research on social media’s causal relationship to teen mental health problems is incredibly weak. Over and over and over again researchers have tried to find a causal link. And failed. Every time.
Lots of people (including related to both these cases) keep comparing social media to things like cigarettes or lead paint. But, as we’ve discussed, that’s a horrible comparison. Cigarettes cause cancer regardless of what else is happening in a smoker’s life. Lead paint causes neurological damage regardless of a child’s home environment. Social media is not like that. The relationship between social media use and mental health outcomes is complex, highly individual, and mediated by dozens of confounding factors that researchers are still trying to untangle.
And, also, neither cigarettes nor lead paint are speech. The issues involving social media are all about speech. And yes, speech can be powerful. It can both delight and offend. It can make people feel wonderful or horrible. But we protect speech, in part, because it’s so powerful.
But a jury doesn’t need to untangle those factors. A jury just needs to feel that a sympathetic plaintiff was harmed and that a deeply unsympathetic defendant probably had something to do with it.
Presumably, Meta and YouTube will appeal this verdict. Given the novel legal theory behind it, I haven’t the foggiest idea of how it will play out. But I agree with Masnick that the outcome here is perverse and potentially quite dangerous.
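Masnick’s paint-drying thought experiment can be made concrete. Below is a minimal sketch of an “infinite scroll” loop; the names and structure are my own invention for illustration, not anything from an actual platform’s codebase. The point is that the mechanism never looks at what it serves:

```python
from typing import Callable, Iterator

# A "content source" is just a function from a page number to a list of posts.
# The feed machinery below is identical whether those posts are beach photos,
# political rants, or videos of paint drying.
ContentSource = Callable[[int], list[str]]

def infinite_scroll(source: ContentSource) -> Iterator[str]:
    """Endlessly serve the next page of posts as the user scrolls."""
    page = 0
    while True:                    # no stopping point: the "infinite" in infinite scroll
        for post in source(page):  # the loop never inspects what a post actually is
            yield post
        page += 1

def paint_drying(page: int) -> list[str]:
    """A hypothetical content source: every post is a video of paint drying."""
    return [f"video {page}-{i}: paint drying" for i in range(10)]

# Same scroll, same pagination, same delivery loop -- yet nobody would call
# this feed addictive.
feed = infinite_scroll(paint_drying)
for _ in range(3):
    print(next(feed))
```

Swap in a different content source and the “design” is byte-for-byte identical; whatever pulls people back lives in the content, which is precisely the user-generated material Section 230 was written to cover.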
And now it becomes quite clear that the arguments for dismissing the Meta/YouTube verdict out of hand aren’t that convincing.
Sometimes, consumer products are designed to be inherently unsafe, i.e., lacking sufficient consumer protection features, in order to drive up profits.
That may be the case with social media as well.
Again, the parallel with gambling can be instructive. It is quite clear that a majority of problem gamblers (and substance abusers as well) are using gambling (or drugs) as a tool to regulate their emotions, either as a way to cope with underlying trauma or to deal with stress caused by life events.
Obviously, that doesn’t mean that gambling providers bear no responsibility to prevent or mitigate the resulting gambling-related harm among their customers.
Maybe it is not so strange, after all, to expect social media companies to build consumer protection features into their products proportionate to the harm those products could do to a significant subset of their customers.
Because let’s face it: social media are, in fact, designed to be addictive.
Why, then, let social media companies off the hook for the choices that they decided to make?
It’s probably more dangerous to have a system built around providing things like junk food and endless slop content. This verdict flows downhill from the way the world is, and it will probably have adverse consequences, but what exactly do people expect to happen?
I think we are lucky that, way back in the ’70s, when the country still had some regard for the public, smoking was deemed bad enough to ban its ads from television, which was the beginning of the long end of smoking as this natural thing you can do anywhere. If the cigarette companies came into existence now, there would be Marlboro Juniors, and the Trump administration would be entering into some joint venture with Philip Morris to hand them out at middle schools in the South.
@drj: Lots of products are potentially harmful.
I don’t blame gambling apps or casinos, which huge numbers of people enjoy responsibly, simply because some people don’t, or even can’t, use them responsibly. Alcohol is classified as a carcinogen in the EU, and we regulate its sale here. But I don’t blame Jim Beam if I wake up with a hangover.
The case I’ve seen most analogized to this one is that of Big Tobacco. Given that people have continued to smoke for generations after we began putting warning labels on the packages, I’m not sympathetic to those who willingly risked the things they were warned about. But at least in that case, the companies did their best to hide dangers that they had inside knowledge of from the public.
@Modulo Myself: I think we have the policy around these things about right. Our schools educate children about nutrition and the like. We mandate warning and nutrition labels. We ban certain harmful additives. We make efforts to restrict certain activities to adults. But, by and large, we let people eat as many Twinkies, and scroll through as much TikTok, as they desire.
@James Joyner:
And that’s why we put warning labels on everything, instruct users on responsible use, etc. Do social media companies do anything like that?
Not even when a casino knowingly keeps taking some poor schmuck’s money when he is clearly agitated and chasing losses at 3 AM?
Of course, you can think that’s fine, but I suspect that’s rather out of step with how most people think – and certainly out of step with the mindset behind generally prevailing consumer protection regulations.
At least recognize that you are arguing from a minority position then.
@James Joyner:
Half of what educated Americans talk about centers on how unhealthy their relationships are with eating or their bodies or their phones, and this discourse is the most healthy thing about consumption. Every college professor I know is horrified by AI and how impotent their students are when faced with actual reading and writing. The miracle drug of the decade magically limits appetites. That’s all it does: makes you not want to eat crap. We sell comically large vehicles for no purpose whatsoever, vehicles so big they can’t even fit into parking spaces, and a third of the country doesn’t hunt but has a personal arsenal.
This is not a healthy way to live. It’s an incredibly negligent way to exist. Selling people crap and then shaming them for consuming it is twisted. It would be one thing if it created a better world, if the struggle was worth it in some Darwinian way. But it seems actually that the point is the endless cycle of consumption and then regret and loathing or, if you aren’t smart enough to understand the essence of the dilemma, total Trump-like oblivion, just circling the drain forever.
I confess to having no practical answers on how to deal with the system other than the ersatz ones I’ve found. But in no way do we have good policies regarding personal freedom and consumption.
Sorry to quote someone who is quite out of fashion these days, when we’re all supposed to worship unrestrained capitalism and the billionaires it creates, but when I read JJ’s arguments here I just keep thinking about Lenin saying “The Capitalists will sell us the rope with which we will hang them.”
As long as some company can make a buck, we must all agree that there is nothing we can do as a society to stop them. The only thing that matters is “freedom,” here meaning the freedom to be manipulated into handing over all our money to one of these corporations so they can destroy our lives.
If I followed JJ’s belief system, I’d be outraged at the way the poor Sackler family is being persecuted.
I watched a news conference where a woman (maybe this girl’s mother) basically yelled that you can’t blame the mother or parents for their child’s usage of social media. Oh yes, I can. Absolutely. They are in charge of their children and their children’s usage and tools. They can take them away. These are the same people who put TVs and games and laptops in their kids’ rooms, then complain they stay up too late. These are the same people who demand to be in charge of what schools teach their kids. They continually demand rights but abdicate responsibility. One goes with the other.
@James Joyner:
“But at least in that case, the [tobacco] companies did their best to hide dangers that they had inside knowledge of from the public”
The same appears to be true for the social media companies.
There is a lot of inertia behind treating addiction as a moral failure rather than a disease.
I think JJ is making an argument for drug legalization. It’s not the fault of the person offering the addictive substance, it’s the fault of the person taking the drug. And I agree in theory.
But then theory meets Human with disappointing results. Many years ago there was a big controversy in Massachusetts over mandatory helmets for motorcyclists. Why can’t the individual rider be responsible for their own choice? Huh? Huh? Well, because it’s a big drain on public resources when some jackass breaks his head open and the taxpayer ends up paying for Dr. Robbie to stuff his brains back in.
We can either be the kind of society that allows the biker into the ER, or we can be the kind of society that lets him die in the road because: freedom. Society gets a vote because society bears the cost of self-destructive behavior.
The question is where and how to draw the line between the needs of the many and the needs of the individual. I don’t know that this decision draws the line correctly, but I do believe the line has to be drawn. And I believe social media has been a hugely destructive force, far more destructive than illegal drugs or helmetless bikers, and that therefore society has a legitimate interest in defending itself.
@James Joyner:
You should.
While casinos in Vegas have brochures about problem gambling*, they do little when faced with customers with gambling problems. The only people they care enough about to stop gaming are advantage players or those who happen to get on a winning streak.
Lose too much, take out a marker, lose that, and you’ll get free drinks and meals and maybe even a free room. Count cards, take advantage of a promotion, or simply get too lucky for too long, and you’ll be asked to leave the premises.
* The brochures are perfectly fine, informative, and offer useful advice and contacts for help and other resources. They’re also stashed rather out of the way, mixed in with the usual tourist crap brochures.
@Scott:
I mean, yes, and no.
I remember it being kind of a big deal when my kids turned 13 to let them have a Facebook account, but the FB of a decade and a half ago was a way to connect with family and friends. It evolved beyond that some time ago.
The increased usage of algorithms to push content and to keep you glued to your phone has clearly changed our relationship to these devices.
I heard about a study yesterday finding that these days upwards of 41% of your FB feed is from unconnected accounts, i.e., not your friends, not things you purposefully liked, but things the algorithm thinks you want to see.
I also think that most people haven’t figured out that they aren’t the customers, they are the products.
I am more sympathetic than James is to this verdict, if anything, because it is pretty amazing to realize how easy it is to pick up your phone for one purpose and discover 15 or more minutes later that you allowed yourself to be sucked into a social media app.
I think we need a reckoning with this technology, especially before AI takes it over.
Wait until they find out about booze.
@Michael Reynolds: Indeed. It is possible to allow the addictive thing to be available, legal, but also regulated.
There is also, like with the helmet law (or seat belts), a rather obvious public good to be derived from such regulations, even if individuals don’t like them.
I support Section 230. I don’t agree with Masnick. These features are a problem, the Algorithm especially. It is because of the Algorithm that Meta is liable. Users do not choose what they see. The “paint drying” thought experiment is a red herring.
People create content. Meta chooses it, via the Algorithm. People look at it. That’s what makes Meta liable. Meta is taking an active editorial role. Meta is not a neutral transmitter for everyone who wants to communicate with users. If these lawsuits destroy the use of The Algorithm on all platforms – good. Very good.
Another problem is that it makes social media like this a perfect vector for propaganda, because the platform can learn who is receptive to certain messages and blast them with those messages. (Since they are paying for the privilege, Meta is fine with this.) Those messages are liable to be much more defamatory and counterfactual than something in a broadcast medium, because nobody who isn’t likely to be receptive ever sees the message. Nor will the people who did see the message ever be able to precisely identify who told them the crazy things they believe.
In contrast to Fox News, which can be sued for libel and lose big.
Perhaps it’s just a matter of time before someone uses this method to sue Meta for libel, having figured out how to intercept and collect these messages.
I am not 100 percent negative on AI. The key point is that the AI must be structured to serve the customer, with active, ongoing consent. If an AI helper is to be deployed to assist users in finding things they want to look at, it can be structured differently. It needs to be made to act on the user’s behalf, not the Company’s behalf. Of course, the Company won’t spend nearly as much on it as they do now, but the idea won’t go away. After all, Amazon regularly makes “people who bought this also bought …” suggestions. Nobody thinks that’s evil.
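The “people who bought this also bought …” pattern mentioned above is easy to sketch. What follows is a toy co-occurrence count over made-up data, not Amazon’s actual system; the point is that it recommends from a signal the customer plausibly wants (similar purchases) rather than from whatever maximizes time-on-site:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase history: one list of item IDs per customer.
baskets = [
    ["tent", "sleeping_bag", "lantern"],
    ["tent", "sleeping_bag"],
    ["tent", "lantern", "stove"],
    ["novel", "bookmark"],
]

# Count how often each pair of items shows up in the same basket.
co_counts: dict[str, Counter] = {}
for basket in baskets:
    for a, b in combinations(set(basket), 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def also_bought(item: str, k: int = 3) -> list[str]:
    """The k items most often bought alongside `item`."""
    return [other for other, _ in co_counts.get(item, Counter()).most_common(k)]

print(also_bought("tent"))  # ['sleeping_bag', 'lantern', 'stove'] (tie order may vary)
```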
@Scott: “They continually demand rights but abdicate responsibility”
As clear a description of the American public as I’ve seen…