# Meta and YouTube Found Liable: How to Use the $381M ‘Addiction Verdicts’ to Protect Your Kids from Negligent Design
## The $381 Million Wake-Up Call That Changed Everything
On March 25, 2026, a Los Angeles jury delivered a verdict that will be studied in law schools, boardrooms, and living rooms for years. After a four-week trial, the jury found that Meta and Google’s YouTube were liable for the mental health harms suffered by a 14-year-old boy who had become addicted to Instagram and YouTube. Combined with a Santa Fe jury’s verdict against Meta the day before, the total came to **$381 million**: $375 million from the New Mexico case against Meta, and $6 million from the Los Angeles case against both companies.
The numbers are staggering. But the words the jury used are even more important. They found that the companies acted with **“malice and fraud”**—a finding that opens the door to punitive damages far beyond the compensatory awards. They found that specific design features—**infinite scroll and autoplay**—were not just engaging but **negligent**. And they found that these features were a **“substantial factor”** in causing harm to a child, a legal threshold that allowed the case to bypass the Section 230 protections that have shielded tech companies for decades.
This is not a settlement. This is not a consent decree. This is a jury of ordinary Americans looking at the evidence and saying: **Meta and YouTube knew what their products were doing to children, and they chose profit over safety.**
For parents, these verdicts are more than news. They are a roadmap. The jury found that certain design features are dangerous. They found that the companies knew about the danger and did nothing. And they found that parents have a right to hold these companies accountable.
This 5,000-word guide is the definitive analysis of the $381 million addiction verdicts and what they mean for American families. We’ll break down the **$381 million total** penalty, the **“malice and fraud”** finding, the **infinite scroll and autoplay** features identified as negligent, the **K.G.M. case** that set the precedent, and the **“substantial factor”** threshold that finally cracked the Section 230 shield.
---
## Part 1: The $381 Million Total – Breaking Down the Verdicts
### The New Mexico Case: $375 Million
On March 24, 2026, a Santa Fe jury delivered the first major blow. After a seven-week trial that laid bare Meta’s internal documents, undercover investigations, and the testimony of its own executives, 12 New Mexico jurors found that Meta had committed **75,000 distinct violations** of the state’s Unfair Practices Act.
The penalty was $5,000 per violation—the maximum allowed under state law—for a total of **$375 million**.
| **Case** | **Venue** | **Defendant** | **Penalty** |
| :--- | :--- | :--- | :--- |
| New Mexico v. Meta | Santa Fe | Meta | $375 million |
| K.G.M. v. Meta & Google | Los Angeles | Meta, Google | $6 million |
| **Total** | | | **$381 million** |
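The arithmetic behind the headline figure is straightforward and worth checking for yourself. A minimal sketch, using only the numbers reported in the verdicts above:

```python
# Penalty arithmetic behind the two verdicts, as reported above.

# New Mexico: 75,000 violations at the statutory maximum of $5,000 each.
violations = 75_000
per_violation_penalty = 5_000
new_mexico_total = violations * per_violation_penalty
print(f"New Mexico: ${new_mexico_total:,}")    # $375,000,000

# Los Angeles (K.G.M.): $3M compensatory plus $3M punitive.
compensatory = 3_000_000
punitive = 3_000_000
los_angeles_total = compensatory + punitive
print(f"Los Angeles: ${los_angeles_total:,}")  # $6,000,000

combined = new_mexico_total + los_angeles_total
print(f"Combined: ${combined:,}")              # $381,000,000
```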
The New Mexico case was not about one child. It was about a pattern of deception that affected thousands of children in the state. Attorney General Raúl Torrez had sought more than $2 billion in damages. The jury’s compromise—finding fewer violations than the state alleged, but imposing the maximum penalty for each—was still the largest verdict ever against a social media company.
### The Los Angeles Case: $6 Million
The Los Angeles case, known as **K.G.M. v. Meta & Google**, was different. It was a personal injury lawsuit brought by the family of a 14-year-old boy who had become addicted to Instagram and YouTube. The jury awarded $3 million in compensatory damages and $3 million in punitive damages—a modest sum by Wall Street standards, but devastating in its implications.
The Los Angeles jury found that the platforms’ design features—specifically **infinite scroll and autoplay**—were “negligent.” They found that the companies acted with “malice and fraud.” And they found that these features were a “substantial factor” in causing the boy’s harm.
---
## Part 2: “Malice and Fraud” – The Finding That Changes Everything
### What the Jury Found
The Los Angeles jury’s finding of **“malice and fraud”** is not just a rhetorical flourish. In legal terms, it is a finding that the defendants acted with “oppression, fraud, or malice”—a standard that opens the door to punitive damages and, more importantly, signals that the conduct was not merely negligent but willful.
The evidence that led the jury to this conclusion was extensive:
- **Internal company documents** acknowledging that features like infinite scroll and autoplay were designed to maximize engagement, even when engagement came at the cost of children’s mental health
- **Testimony from former employees** who had warned the companies about these harms and were ignored
- **Evidence that the companies concealed** what they knew from parents, regulators, and the public
### The “Malice” Standard
In California, punitive damages are available when a plaintiff proves by “clear and convincing evidence” that the defendant acted with “oppression, fraud, or malice.” The jury found that Meta and Google met that standard.
For Meta and Google, this finding is a reputational blow that no amount of money can repair. For parents, it is validation that the harms their children have suffered are not accidents—they are the predictable result of deliberate design choices.
---
## Part 3: Infinite Scroll & Autoplay – The Features the Jury Identified as Negligent
### What the Jury Said
The Los Angeles jury did not issue a general verdict against social media. It issued a specific verdict against **infinite scroll** and **autoplay**—two features that the plaintiffs argued were designed to addict users by removing natural stopping points.
| **Design Feature** | **How It Works** | **Why It’s Dangerous** |
| :--- | :--- | :--- |
| Infinite Scroll | Content loads continuously as user scrolls down | Removes natural stopping points; promotes endless consumption |
| Autoplay | Next video plays automatically | Traps user in viewing loop; reduces agency to stop |
The jury found that these features were not just engaging—they were **negligent**. The companies knew that these features would lead to excessive use, particularly among adolescents whose brains are still developing. They knew that excessive use was linked to anxiety, depression, and sleep deprivation. And they chose to keep the features anyway.
### The “Engagement” Trap
Meta and Google’s defense was simple: they were building products that users wanted. Infinite scroll and autoplay were features that people liked. They were not forcing anyone to use them.
The jury rejected that defense. The companies’ own data showed that these features were causing harm. A Meta researcher had warned in 2021 that Instagram was “damaging to a significant percentage of teens.” That warning was ignored. A Google researcher had warned in 2022 that YouTube’s recommendation algorithm was promoting harmful content to children. That warning, too, was ignored.
---
## Part 4: The K.G.M. Case – The Precedent That Cracked Section 230
### What Is the K.G.M. Case?
The **K.G.M. case** is the shorthand name for the Los Angeles trial that concluded March 25, 2026. It was the first of more than **2,400 pending lawsuits** against social media companies to go to trial, making it a bellwether for the entire litigation wave.
The plaintiff, known in court documents as K.G.M., was 14 years old when he began using Instagram and YouTube. His case alleged that the platforms’ design features—infinite scroll, autoplay, and algorithmic recommendations—caused him psychological harm, including anxiety, depression, and suicidal ideation.
### How It Bypassed Section 230
Section 230 of the Communications Decency Act has been the tech industry’s shield for nearly 30 years. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The K.G.M. case was carefully constructed to bypass Section 230. Instead of suing over *content*—which would have been barred—the plaintiff’s attorneys sued over *design*. The argument was that infinite scroll, autoplay, and algorithmic recommendations are not “content” in the traditional sense. They are features that the companies chose to implement, and those features, the plaintiff argued, made the product unreasonably dangerous.
The jury agreed, finding that these design features were a **“substantial factor”** in causing K.G.M.’s harm—a threshold that allowed them to hold the companies liable without running afoul of Section 230.
---
## Part 5: The “Substantial Factor” Threshold – Why It Matters
### The Legal Standard
In tort law, a plaintiff must prove that the defendant’s conduct caused the harm. The “substantial factor” test—which asks whether the conduct was a substantial factor in bringing about the injury—is the standard for causation in many states, including California.
The K.G.M. jury found that infinite scroll, autoplay, and algorithmic recommendations were a “substantial factor” in causing the boy’s addiction and mental health harms. This is a critical finding because it establishes that the platforms’ design choices—not just user behavior—are legally significant.
### What It Means for Future Cases
The “substantial factor” finding in the K.G.M. case is now a precedent that other plaintiffs can use. The 2,400 pending lawsuits against Meta, Google, and other platforms will cite this case. The plaintiffs will argue that if infinite scroll and autoplay were a substantial factor in harming one child, they are a substantial factor in harming many children.
For the tech companies, this is the nightmare scenario. The K.G.M. case is not an outlier. It is the first of thousands.
---
## Part 6: What This Means for Parents – How to Use the Verdicts to Protect Your Kids
### The Features to Watch For
The K.G.M. jury identified specific features as negligent: **infinite scroll and autoplay**. These are the features that remove natural stopping points and encourage endless consumption.
| **Feature** | **What to Do** |
| :--- | :--- |
| Infinite Scroll | Set app time limits; use “focus mode” features that pause scrolling after a set time |
| Autoplay | Turn off autoplay in settings; it’s usually buried in “playback” or “video” preferences |
| Algorithmic Recommendations | Use “muted” or “not interested” features to train the algorithm away from harmful content |
### The Settings to Change
Both Instagram and YouTube have settings that can limit the impact of these features. Parents should:
- **Turn off autoplay** in YouTube’s settings. This is the single most effective change you can make.
- **Set screen time limits** in iOS or Android. The iPhone’s “Screen Time” and Android’s “Digital Wellbeing” tools allow you to set daily limits for specific apps.
- **Use “Restricted Mode”** on YouTube to filter out potentially mature content.
- **Use “Supervised Accounts”** on Instagram to monitor your child’s activity and set time limits.
### The Conversations to Have
No amount of settings can replace open conversation. Talk to your children about why these features are designed the way they are. Explain that the platforms make money when they keep users scrolling, and that your family’s time is too valuable to give away for free.
---
## Part 7: The American Family’s Playbook – What to Do Now
### If Your Child Is Struggling
If your child is showing signs of social media addiction—anxiety, depression, sleep deprivation, withdrawal from activities—take it seriously. The K.G.M. jury found that these harms are real and that the platforms knew about them.
- **Document everything.** Keep a log of your child’s screen time, the content they’re seeing, and the changes in their behavior.
- **Talk to a professional.** A therapist who specializes in adolescent mental health can help your child develop healthier habits.
- **Consider legal options.** The K.G.M. verdict opens the door to individual lawsuits. Consult with an attorney who specializes in social media litigation.
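A log like the one described above doesn’t need special software. As a hypothetical sketch (the file name, columns, and sample entries are illustrative assumptions, not from any court record), a few lines of Python can append dated observations to a CSV file:

```python
import csv
from datetime import date

# Hypothetical log file; the name and columns are illustrative assumptions.
LOG_FILE = "screen_time_log.csv"

def log_entry(day: date, app: str, minutes: int, notes: str = "") -> None:
    """Append one observation: date, app name, minutes used, behavior notes."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([day.isoformat(), app, minutes, notes])

# Example entries (invented for illustration).
log_entry(date(2026, 3, 26), "Instagram", 140, "up past midnight scrolling")
log_entry(date(2026, 3, 26), "YouTube", 95, "autoplay still enabled")
```

A dated, consistent record like this is useful both in conversations with a therapist and, if it comes to that, with an attorney.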
### If Your Child Is Not Yet Struggling
Prevention is better than cure. The K.G.M. case shows that addiction can develop quickly, especially in adolescents whose brains are still developing.
- **Delay access.** The later children start using social media, the better. The K.G.M. plaintiff was 14 when he began using Instagram and YouTube.
- **Set limits early.** It’s easier to start with limits than to add them later.
- **Model good behavior.** If you’re constantly on your phone, your children will be too.
### What to Tell Your Child’s School
Schools are on the front lines of this crisis. Teachers report that social media addiction is disrupting classrooms across the country. The K.G.M. verdict gives parents a new tool to advocate for change.
- **Ask about digital literacy curriculum.** Schools should be teaching students how to use technology responsibly.
- **Ask about phone policies.** Schools that ban phones during the school day report fewer distractions and better mental health outcomes.
- **Share the verdict.** The K.G.M. case is a powerful example of what parents, teachers, and students already know: social media addiction is real, and it is harming children.
---
### FREQUENTLY ASKED QUESTIONS (FAQs)
**Q1: How much money did the juries award in the addiction cases?**
A: The New Mexico jury awarded **$375 million** against Meta. The Los Angeles jury awarded **$6 million** against Meta and Google. The total is **$381 million**.
**Q2: What did the juries find about Meta and Google’s conduct?**
A: The Los Angeles jury found that the companies acted with **“malice and fraud”**—a finding that signals the conduct was willful, not merely negligent.
**Q3: What design features did the jury identify as negligent?**
A: The jury specifically identified **infinite scroll and autoplay** as negligent features that contributed to the plaintiff’s addiction.
**Q4: What is the K.G.M. case?**
A: The K.G.M. case is the shorthand name for the Los Angeles trial that concluded March 25, 2026. It was the first of more than **2,400 pending lawsuits** against social media companies to go to trial.
**Q5: What is the “substantial factor” threshold?**
A: The “substantial factor” test is the legal standard for causation in many states. The jury found that infinite scroll and autoplay were a “substantial factor” in causing the plaintiff’s harm—a finding that allowed the case to bypass Section 230 protections.
**Q6: How did the case bypass Section 230?**
A: Instead of suing over *content*, the plaintiffs sued over *design*. The jury found that design features—not user content—were the cause of the harm, a distinction that allowed the case to proceed.
**Q7: What should parents do to protect their kids?**
A: Parents should turn off autoplay, set screen time limits, use restricted modes, and have open conversations about why these features are designed to keep users scrolling.
**Q8: What’s the single biggest takeaway from the addiction verdicts?**
A: The $381 million verdicts are not just about money—they are about accountability. The juries found that Meta and Google designed features that they knew would addict children, and they chose profit over safety. For parents, the verdicts are a roadmap: turn off infinite scroll, turn off autoplay, set limits, and talk to your kids. And if the platforms won’t protect your children, the courts will.
---
## Conclusion: The Roadmap for Parents
Over two days in late March 2026, two juries sent a message that will echo through every boardroom in Silicon Valley. The numbers tell the story of a legal system finally catching up with technology:
- **$381 million** – The total penalty across two cases
- **“Malice and fraud”** – The jury’s finding about Meta and YouTube’s conduct
- **Infinite scroll & autoplay** – The features the jury identified as negligent
- **K.G.M.** – The case that cracked the Section 230 shield
- **“Substantial factor”** – The legal threshold that made it possible
For the tech companies, the verdicts are a warning. The shield that has protected them for decades is cracking. The features they designed to maximize engagement are now being called what they are: negligent, fraudulent, and harmful.
For parents, the verdicts are something else entirely. They are a roadmap. The jury has told us which features are dangerous. They have told us that the companies knew about the danger and did nothing. And they have told us that we have the right to hold them accountable.
Now it’s up to us to act. Turn off autoplay. Set screen time limits. Talk to your kids. And if the platforms won’t protect your children, remember the K.G.M. case. The first of 2,400 pending lawsuits has already won.
The age of assuming social media is harmless is over. The age of **holding negligent designers accountable** has begun.
