Key Takeaways
- Zuckerberg faced questioning on internal discussions about increasing “time spent,” including a 2015 email targeting a 12% increase.
- Meta says it no longer gives teams explicit “time spent” goals and argues its intent is usefulness, not addiction.
- Plaintiffs highlighted internal estimates that millions of under-13 users were on Instagram in 2015; Meta says under-13 accounts are not allowed and are removed when found.
- The trial centers on product design (autoplay, infinite scroll, recommendations), a strategy aimed at bypassing the traditional legal shields that cover user-posted content.
- Beauty filters resurfaced as a flashpoint: Meta says it stopped creating/recommending them but resisted broad bans as “overbearing.”
What Happened?
Meta CEO Mark Zuckerberg testified in Los Angeles Superior Court during a major trial over alleged harms tied to social-media product design on platforms including Facebook and Instagram. A plaintiffs’ attorney pressed Zuckerberg on internal communications and metrics related to engagement—especially the company’s historic emphasis on increasing time spent. Zuckerberg said Meta used to set time-spent goals but doesn’t anymore, and that growth targets were intended to make products more useful rather than addictive. Plaintiffs also presented internal material indicating Meta employees knew children under 13 were using Instagram; Zuckerberg reiterated the platform prohibits under-13 users and removes accounts when identified. The case is one of the first in a wave of lawsuits seeking to hold social platforms accountable for features such as algorithmic recommendations, autoplay, and infinite scroll.
Why Does It Matter?
This trial is a stress test for a legal strategy that targets platform “design and mechanics” rather than specific user content—an approach that could reshape liability risk for social-media and recommendation-driven products. If plaintiffs succeed, it could pressure companies to redesign engagement loops, alter recommendation systems, expand age-verification practices, and tighten default settings for younger users. It also lands at a moment when Meta is publicly pivoting toward AI-led growth—raising the stakes for how courts and regulators view algorithmic optimization of attention.
What’s Next?
The trial is expected to run for weeks. More internal documents and testimony (including from product leaders) will likely center on how Meta measured engagement, how it evaluated youth safety tradeoffs, and what changes it did—or didn’t—make after internal and external warnings. In parallel, similar cases in other states (including actions by state officials) could accelerate regulatory responses and increase pressure for industry-wide standards around youth protections, product defaults, and algorithmic transparency.