Zuckerberg is testifying in court. Internal documents show Meta knew its platforms harmed teens and chose growth anyway. A decade of evidence says this was never an accident.
Mark Zuckerberg sat in a Los Angeles courtroom last week, defending Meta against the claim that Instagram is a defective product. Outside, bereaved parents held framed photos of children who died after encountering harm on the platform.
The landmark social media addiction case consolidates claims from over 1,600 families and school districts. The plaintiff at its center, a 20-year-old woman identified only as KGM, says she started compulsively using YouTube at six and Instagram at nine, and that her use of the platforms worsened her depression and suicidal thoughts. TikTok and Snapchat settled before the trial began. Meta chose to fight.
Plaintiffs' attorney Mark Lanier told the jury during opening statements: "These companies built machines designed to addict the brains of children. And they did it on purpose."
Meta's defense? Social media doesn't meet the clinical definition of addiction, and parental controls are widely available.
This might be a reasonable argument if it weren't contradicted by Meta's own internal records.
The company's own words
The documents unsealed during trial proceedings paint a picture that no PR statement can erase.
In a 2016 internal email about Facebook's live video feature for teens, Mark Zuckerberg wrote that they would need to avoid notifying parents or teachers, because doing so would "ruin the product from the start."
Internal Meta research found that 55% of Facebook users showed signs of "mild" problematic use, while 3.1% met the company's threshold for "severe" problems. Zuckerberg himself responded to the finding by acknowledging that "3 percent of billions of people is a lot of people… it's millions of people." Yet the company published only the 3.1% figure, framing it as an "upper bound."
An Instagram researcher wrote internally that "IG is a drug" and "we're basically pushers," concluding that users' addiction was "biological and psychological" and that "the top down directives drive it all towards making sure people keep coming back for more."
These aren't allegations from outside critics. These are the company's own findings, produced by its own researchers, circulated among its own leadership.
And the company didn't fail to act. It acted in the wrong direction.
A pattern older than this trial
If this were an isolated incident, you could call it an oversight. But Meta's history of seeing the evidence and choosing to double down stretches back over a decade.
In 2018, the Cambridge Analytica scandal revealed that Facebook's platform had allowed a third-party app to harvest personal data from up to 87 million profiles, data that was then weaponized for political targeting. Christopher Wylie, the whistleblower who built the profiling system, described in his book Mindf*ck how the same techniques used to identify people vulnerable to extremist messaging were repurposed to push them toward it. Facebook treated this as a PR crisis, not a structural failure.
In 2025, Sarah Wynn-Williams published Careless People, a memoir of her six years as Facebook's director of global public policy. The book describes a corporate culture where growth targets overrode every other consideration. When Facebook's role in amplifying hate speech that contributed to atrocities against the Rohingya in Myanmar was raised internally, it was deprioritized. The company was focused on expansion.
Meta dismissed the book as "false accusations" from a fired employee and won a gag order preventing Wynn-Williams from promoting it. The publisher acknowledged that the memoir wasn't fact-checked to journalistic standards, and a thoughtful review by a former Meta employee pointed out that Wynn-Williams downplays her own role in the machine she criticizes. But the core claim, that Meta's leadership consistently prioritized growth over safety while knowing the consequences, is exactly what the trial evidence now supports.
The title she borrowed from The Great Gatsby fits: people who "smashed up things and creatures and then retreated back into their money or their vast carelessness."
What the research says (and what it doesn't)
Jonathan Haidt's The Anxious Generation makes the case that smartphones and social media, particularly Instagram, are restructuring adolescent development. The mechanisms he identifies are specific: social comparison loops, sleep disruption from late-night scrolling, the displacement of face-to-face interaction, and the fragmentation of sustained attention during critical developmental windows.
I want to be honest about the science here. Researchers like Candice Odgers and Andrew Przybylski have argued that the causal evidence linking social media to the teen mental health crisis is weaker than Haidt presents. The correlation exists. Whether social media causes harm or amplifies pre-existing vulnerability is still debated.
But here's what makes the academic debate somewhat beside the point: Meta's own researchers found evidence of harm, reported it internally, and the company chose to suppress those findings rather than act on them. You don't get to hide behind scientific uncertainty when your own data told you the answer and you buried it.
Even if you take the most conservative reading of the independent research, the precautionary principle alone should have driven different decisions from a company that generated $59.89 billion in revenue last quarter and posted $22.77 billion in net income. This isn't a startup without resources. This is one of the most profitable companies in history choosing not to spend the money on safety because safety costs engagement.
The system is working as designed
The trial is asking whether these platforms are "defective products." But the more accurate framing is that they're working exactly as designed.
The advertising infrastructure that Meta sells to businesses and the engagement mechanics that keep teenagers scrolling at 2 AM are not separate systems. They're the same system. The algorithm that optimizes for time-on-platform doesn't distinguish between a useful product discovery and a feed that's eroding a teenager's mental health. It measures engagement. That's it. And with advertising accounting for 97% of Meta's revenue, every product decision gets filtered through that metric.
So why can't a company with $200 billion in annual revenue just be decent?
The answer isn't that Meta employs bad people. It's that the advertising business model makes safety structurally expensive. Every feature that reduces time-on-platform reduces ad revenue. Every age gate that actually works shrinks the addressable user base. Every algorithm change that prioritizes wellbeing over engagement lowers the price advertisers pay.
Meta's own researchers told them this. The company chose revenue.
This is the same dynamic I explored when OpenAI introduced ads in ChatGPT. The advertising model has a gravitational pull that bends every product decision toward the same outcome. The user becomes inventory. Their attention becomes the product. And every quarter, the pressure to grow that inventory increases.
The question isn't whether Meta's leadership is indifferent to the consequences. The question is whether a company built on advertising can ever prioritize the wellbeing of the people it monetizes. Meta has had a decade of evidence, a genocide, a data weaponization scandal, a teen mental health crisis confirmed by its own research, and thousands of lawsuits to figure this out.
Its answer, every single time, has been the same. Profit.
Sam Altman called ads a "last resort" twenty months before introducing them. Mark Zuckerberg told Congress he was sorry about what families had experienced, then fought them in court. The pattern is consistent. The words change. The choices don't.
This trial will produce a verdict. But the verdict that matters most was delivered years ago, inside Meta's own offices, when researchers presented evidence of harm and leadership decided that growth came first.
That wasn't a mistake. That was a choice.
