You Can’t Libel The Dead—But Don’t Deepfake Them

Zelda Williams has a message that hits harder than any viral clip: you can’t libel the dead, but that doesn’t mean you should deepfake them.

Image Credits: Curly_photo / Getty Images

The daughter of the late Robin Williams took to Instagram with a heartfelt plea to fans:

“Please, just stop sending me AI videos of Dad. Stop believing I wanna see it or that I’ll understand. I don’t and I won’t,” she wrote. “If you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop.”

Her frustration comes on the heels of OpenAI’s launch of Sora 2, a video model capable of generating strikingly realistic deepfakes. The accompanying Sora social app lets users create AI-generated clips of themselves, their friends, or even long-deceased icons, raising serious ethical questions about consent and legacy.

The Legal Gray Area: You Can’t Libel The Dead

According to the Student Press Law Center, defamation laws don’t protect the deceased. That means, technically, creating or sharing a deepfake of a dead person doesn’t count as libel.

But just because it’s legal doesn’t make it right. As Williams points out, reviving someone through an AI-generated likeness without consent can feel exploitative and deeply disrespectful—especially when it reduces their memory to algorithmic puppetry.

Sora’s Limits—and Loopholes

OpenAI claims that Sora 2 won’t let users deepfake living individuals unless they’ve given explicit permission, known as a “cameo.” However, the same rule doesn’t apply to the dead.

That’s why the app has been flooded with AI videos of Martin Luther King Jr., Franklin D. Roosevelt, Richard Nixon, John Lennon, Bob Ross, Alex Trebek, and yes, Robin Williams.

The system is also applied inconsistently: in TechCrunch’s tests, Sora 2 refused to generate clips of Jimmy Carter or Michael Jackson, yet allowed others, including Robin Williams. That unevenness underscores how blurred the line has become.

Why Deepfaking the Dead Feels Wrong

Even with cutting-edge AI, deepfakes of dead celebrities raise uncomfortable questions about control, dignity, and profit.

Unlike living people who can set parameters for their digital likeness through OpenAI’s cameo feature, the deceased have no voice in how they are portrayed. As Williams noted:

“To watch the legacies of real people be condensed down to ‘this vaguely looks and sounds like them so that’s enough,’ just so other people can churn out horrible TikTok slop puppeteering them is maddening.”

Her statement captures what so many feel: AI-generated tributes often cross into exploitation.

The Ethical Responsibility of AI Creators

OpenAI didn’t respond to TechCrunch’s request for comment about deepfaking the dead. But the law’s silence on the matter doesn’t equal moral clarity.

While you can’t libel the dead, technology companies still carry a duty of care. If AI tools can resurrect someone’s image or voice, they also need ethical guardrails to prevent misuse, protect legacies, and respect the feelings of surviving family members.

Just Because You Can, Doesn’t Mean You Should

The debate over AI-generated likenesses isn’t just about copyright or legality—it’s about human decency.

Deepfaking someone who can no longer consent, even if it’s technically lawful, risks reducing a real person’s life to a digital caricature.

So yes, you can’t libel the dead, but that doesn’t mean you should deepfake them. Sometimes, the most ethical choice is to let memory rest in peace.
