A Writer Is Suing Grammarly For Turning Her And Other Authors Into ‘AI Editors’ Without Consent

Grammarly's "Expert Review" AI feature used real writers' names without consent.
By Matilda

Grammarly is facing a class action lawsuit after launching an AI feature that impersonates real journalists, authors, and scientists — without ever asking them first. Journalist Julia Angwin filed the suit against Superhuman, Grammarly's parent company, alleging violations of privacy and publicity rights. The case is already drawing wide attention, and it could reshape how AI companies use real people's identities in their products.

Credit: Moor Studio / Getty Images

What Is Grammarly's "Expert Review" Feature — and Why Is It Sparking Outrage?

Last week, Grammarly quietly rolled out a feature called "Expert Review," available only to subscribers paying $144 per year. The tool promises users editorial feedback written in the style of celebrated thinkers — novelist Stephen King, the late scientist Carl Sagan, and tech journalist Kara Swisher, among hundreds of others.

The catch? Not a single one of them agreed to it.

Grammarly never sought consent from the experts whose names, reputations, and professional identities it co-opted. Instead, it trained AI to simulate their voices and sold that simulation as a premium product — banking on the credibility these individuals have spent careers building.

The backlash was swift. Writers, journalists, and ethicists called it a brazen misuse of personal identity. And within days, a lawsuit had been filed.

Julia Angwin's Class Action Lawsuit: The Case at the Center of It All

Angwin, an award-winning investigative journalist with a long history of exposing Big Tech's privacy violations, found herself on the receiving end of exactly the kind of corporate overreach she has spent years documenting.

"I have worked for decades honing my skills as a writer and editor," she said in a statement, "and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise."

Her class action lawsuit opens the door for every other affected writer to join the case. That number could be significant — Grammarly reportedly included hundreds of experts in the feature without their knowledge.

The legal claims center on violations of privacy rights and publicity rights, the latter of which protects individuals from having their name, voice, or likeness used commercially without permission. Courts have increasingly treated these rights seriously in the age of AI, making this case one to watch closely.

The Irony Is Hard to Miss

Perhaps no detail in this story is more striking than who ended up on Grammarly's list.

Among those impersonated: Timnit Gebru, one of the world's most prominent AI ethicists and a fierce critic of the harms that poorly regulated AI systems can cause. The fact that Grammarly used her identity — without consent — to sell an AI product is, at minimum, a staggering failure of self-awareness.

Angwin herself has dedicated a significant portion of her career to holding tech companies accountable for exactly these kinds of privacy violations. To find her own name enrolled, without her knowledge, in an AI product she never endorsed is the kind of real-world irony that no fiction writer could get away with.

Critics argue this is what happens when AI development races ahead of ethical guardrails. Companies focus on what the technology can do, and skip the harder question of what it should do.

Does the Feature Even Work? The Results Are Embarrassing

Beyond the ethical breach, there is a practical question worth asking: does Grammarly's "Expert Review" actually deliver valuable feedback?

The answer, based on early tests, is a resounding no.

Casey Newton, founder of the widely read tech newsletter Platformer — and another person Grammarly impersonated — ran one of his own articles through the tool. He asked for feedback in the style of Kara Swisher, a veteran tech journalist whose sharp, no-nonsense analysis has defined technology coverage for decades.

What he got back was this: "Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?"

Generic. Toothless. The kind of comment that could apply to almost any technology article ever written. It bears no resemblance to Swisher's actual editorial instincts or voice.

This raises an obvious question: if the impersonation is this hollow, why did Grammarly even bother attaching real names to it? The answer, critics say, is credibility laundering — borrowing the reputational weight of respected names to dress up a mediocre product.

What This Means for AI Companies Using Real People's Names

The Grammarly lawsuit arrives at a pivotal moment for the AI industry. Courts across several jurisdictions are beginning to grapple with where AI companies' rights to train and deploy models end, and where individuals' rights to their own identity begin.

Publicity rights law — the legal doctrine at the heart of Angwin's case — has historically been applied to commercial uses of a person's likeness in advertising. But the rise of generative AI has created new pressure to expand and clarify those protections. If an AI product simulates your editorial voice and sells that simulation to paying subscribers, does that constitute commercial exploitation of your identity? Angwin's lawyers argue it clearly does.

A win in this case could set a meaningful precedent. It might require AI companies to obtain explicit consent before including real individuals in features that simulate their expertise, style, or personality. That would be a significant structural shift for an industry that has largely operated on the assumption that everything is fair game until a court says otherwise.

Why Writers and Creators Are Paying Close Attention

For journalists, authors, and creative professionals, this case feels deeply personal. A person's voice — their writing style, their editorial judgment, their hard-won perspective — is not just an asset. It is their professional identity.

The concern is not merely theoretical. If AI companies can package a simulation of your expertise and sell it without asking, the implications ripple outward. Readers may form impressions of your views based on AI-generated content you never produced. Employers may evaluate candidates based on AI feedback attributed to you. Your reputation becomes something you no longer fully control.

Writers' organizations have increasingly pushed for stronger legal protections in this area. This lawsuit may give those efforts new momentum — and a high-profile test case to point to.

What Happens Next

Grammarly has not yet issued a detailed public response addressing the consent issue directly. The lawsuit is in its early stages, and it may be months or years before any ruling comes.

But the reputational damage is already accumulating. A feature designed to signal premium quality has instead become a symbol of the AI industry's ongoing struggle with consent, accuracy, and basic respect for the people whose work and identities it draws upon.

For anyone watching the broader trajectory of AI regulation in 2026, this is exactly the kind of case that tends to move the needle — not because it is the first of its kind, but because the people involved are precisely those who know how to make the public pay attention.

The writers whose names were taken without permission have spent careers telling important stories. Now, one of them is telling this one.
