Opinion

Child Pornography Laws Have a Major Flaw

A former tech lawyer writes that online companies need to be able to test their AI models without getting arrested
Posted Jan 12, 2026 8:26 AM CST
(Image: Getty/tadamichi)

Elon Musk's AI chatbot has been allowing users to "undress" people—sometimes children—to produce deepfake porn images. Amid the ensuing outrage, a former tech lawyer points out what she sees as a major problem in the fight to keep AI-generated sexual images of children from surfacing online. The key, writes Riana Pfefferkorn, is preventing AI models from producing such images in the first place, not reacting after the fact. The only way to do that is through rigorous testing: figure out how the models can be manipulated, then shut down those avenues. "But current laws don't adequately protect good-intentioned testers from prosecution or correctly distinguish them from malicious users, which frightens companies from taking this kind of action," writes Pfefferkorn.

It's time, she argues, for Congress to create a national law that protects AI developers who test their models in such ways, "without fear of being caught in a trap." It may seem like common sense, "but we've been here before," notes Pfefferkorn. It wasn't until after major hacks by Russian adversaries that the Justice Department, in 2022, moved to protect "good-faith" cybersecurity researchers. "The same dynamic is happening now" with AI, she writes.

  • "We cannot afford years of government inaction again. Congress should hold immediate hearings about the Grok debacle, and it should get to work on a legal safe harbor for responsibly testing A.I. models for child sexual abuse material. Companies like (Musk's) xAI could be doing more to make their models safer, and there is no time to waste." Read the full column.
