Meta Seeks To Limit Evidence In Child Safety Case

Meta seeks to exclude key evidence in a first-of-its-kind child safety trial in New Mexico.

A high-stakes legal battle is heating up as Meta faces its first state-level trial over allegations it failed to protect children from sexual exploitation on its platforms. With the trial set to begin February 2 in New Mexico, the company is aggressively trying to limit what evidence can be presented—seeking to exclude internal research, public health warnings, and even references to CEO Mark Zuckerberg’s past.


The lawsuit, filed by New Mexico Attorney General Raúl Torrez in late 2023, accuses Meta of knowingly allowing minors to be exposed to explicit content, online predators, and trafficking material across Facebook and Instagram. Now, as courtroom doors prepare to open, Meta’s legal team is pushing back—not just on the claims, but on the very facts that could shape public understanding of the case.

Why This Trial Matters for Every Parent

This isn’t just another tech lawsuit. It’s the first time a U.S. state has taken Meta to trial specifically over child safety failures. While federal regulators have long scrutinized social media’s impact on youth, New Mexico’s case zeroes in on alleged systemic neglect: that Meta prioritized engagement and growth over basic safeguards for minors.

If successful, the suit could force sweeping changes to how platforms design their products for young users—and set a precedent for dozens of similar actions brewing nationwide. For parents already worried about screen time, stranger contact, and harmful content, the trial offers a rare window into whether tech giants truly act in children’s best interests.

What Meta Wants to Keep Out of Court

According to court filings reviewed by Wired, Meta has filed multiple motions to exclude a wide range of evidence it deems “irrelevant” or “prejudicial.” Among the items it wants barred:

  • Internal and third-party research on social media’s negative effects on teen mental health
  • News reports linking teen suicides to online harassment or content exposure
  • References to Meta’s past privacy violations and regulatory fines
  • Financial data showing how youth engagement drives ad revenue
  • Even mentions of Mark Zuckerberg’s behavior during his Harvard years

Perhaps most strikingly, Meta also asked the court to block any discussion of U.S. Surgeon General Vivek Murthy’s 2023 public health advisory warning that social media poses “a profound risk of harm” to adolescents. The company argues these materials distract from the narrow legal question at hand: whether it violated New Mexico’s Unfair Practices Act by failing to implement adequate child safety measures.

Legal Experts Call Meta’s Strategy “Unusually Broad”

While it’s common for defendants to narrow the scope of a trial, two legal scholars who spoke with Wired say Meta’s requests go far beyond typical courtroom tactics. “This isn’t just about relevance—it feels like an attempt to sanitize the narrative,” said one expert specializing in tech litigation.

Notably, Meta also wants to prevent any mention of its AI-powered chatbots, which have raised alarms after reportedly encouraging harmful behavior in simulated conversations with teens. Given that AI now underpins much of Meta’s content moderation and recommendation systems, critics argue that excluding this topic would blind jurors to core operational realities.

The company further objects to admitting its own internal surveys showing high volumes of inappropriate content reported by teen users. In one study, nearly 40% of adolescent respondents said they’d encountered sexually explicit material within days of creating an account—data Meta now claims is “anecdotal” and not directly tied to the legal claims.

The Human Cost Behind the Legal Filings

Behind the procedural wrangling are real stories of harm. The New Mexico complaint cites multiple cases where minors were contacted by predators through Instagram DMs, coerced into sharing explicit images, and then blackmailed or trafficked. In some instances, the same accounts had been previously reported—but remained active for weeks or months.

Torrez’s office argues that Meta’s algorithms actively amplified risky content to keep young users engaged, while its safety tools—like age verification and content filters—were either weak, opt-in, or easily bypassed. “This isn’t negligence,” the state contends. “It’s a business model that profits from vulnerability.”

Meta denies the allegations, maintaining that it invests billions annually in safety and removes millions of pieces of violating content daily. Yet internal documents leaked in prior investigations suggest executives were aware of the risks years ago—and chose incremental fixes over structural change.

A Test Case for Tech Accountability

The New Mexico trial arrives amid growing bipartisan pressure to hold social media companies legally responsible for harms to minors. At least 15 states have passed or proposed laws requiring stricter age verification, parental consent, or design standards for youth-facing platforms.

But enforcement remains patchy, and federal legislation has stalled. That’s why this case carries such weight: if a state court finds Meta liable, it could unlock a wave of civil suits and empower other attorneys general to act. It may also influence pending federal rulemaking by the FTC, which is exploring new rules under the Children’s Online Privacy Protection Act (COPPA).

For now, all eyes are on Albuquerque, where a jury will soon decide whether Meta’s actions—or inactions—crossed the line from poor judgment to unlawful conduct.

What’s at Stake Beyond the Courtroom

Even if Meta wins the legal battle, it may lose the public trust war. As more families grapple with digital addiction, cyberbullying, and online predation, the perception that tech companies are evading accountability could fuel consumer backlash and regulatory crackdowns.

Moreover, the trial’s evidentiary rulings could shape future cases. If the judge allows broad inclusion of research, executive communications, and public health data, it would set a precedent that corporate responsibility includes transparency about known harms. If Meta succeeds in narrowing the lens, it may embolden other platforms to fight disclosure in similar suits.

Critically, the outcome could also influence how AI is governed in youth-facing products. With generative AI now embedded in everything from photo filters to chat companions, the line between “feature” and “risk” is blurring fast—and courts may soon be asked to draw it.

The Countdown to February 2

As the February 2 start date looms, both sides are preparing for a trial that could last weeks. Meta has assembled a high-powered defense team, while New Mexico is leaning on digital forensics experts, child psychologists, and former platform moderators as potential witnesses.

One thing is certain: the world will be watching. Whether you’re a parent, educator, policymaker, or simply a user of social media, the verdict could redefine what “reasonable safety” means in the digital age.

And in an era where a child’s first smartphone often arrives before their first bike, that definition matters more than ever.
