# Grammarly's $5M AI Reckoning: Why the 'Expert Review' Shutdown Marks the End of Unlicensed Persona-Bots
## The Day the AI Ventriloquists Got Silenced
On March 10, 2026, investigative journalist Julia Angwin opened her computer and discovered something that made her blood run cold. Grammarly—the ubiquitous writing assistant used by millions—had been selling access to an AI version of her. Not a vague stylistic imitation, but a named persona: **Julia Angwin, investigative journalist**, dispensing editing advice to subscribers who paid $12 a month.
She wasn't alone. Stephen King was there. Carl Sagan, who died in 1996, had been resurrected as an AI editor. bell hooks, the beloved feminist author who passed in 2021, was also back from the dead, offering writing feedback. The Verge's entire editorial staff had been cloned without their knowledge. So had writers from Wired, Bloomberg, The New York Times, The Atlantic, PC Gamer, Gizmodo, and a dozen other publications.
Within 24 hours, Angwin had filed a class-action lawsuit in the Southern District of New York, seeking damages in excess of **$5 million**. By March 13, Grammarly had disabled the **Expert Review agent** feature entirely. CEO Shishir Mehrotra posted a LinkedIn apology, acknowledging the company had "misrepresented" the voices of the experts it cloned.
But the damage was done. The lawsuit, the backlash, and the shutdown have exposed a fundamental question that the AI industry has been avoiding: **Is it legal to sell a person's voice, style, and reputation without their consent?**
This 5,000-word guide is the definitive analysis of Grammarly's AI reckoning. We'll break down the **$5 million lawsuit**, the **"Right of Publicity"** doctrine at its center, the **Expert Review agent** that triggered the crisis, the controversial **"Opt-Out" vs. "Opt-In"** policy that enraged writers, and the ethical firestorm over deceased experts like **Carl Sagan and bell hooks** who were "resurrected" without family consent.
---
## Part 1: The $5 Million Lawsuit – Angwin v. Superhuman
### The Plaintiff
Julia Angwin is not an easy person to intimidate. An award-winning investigative journalist who founded The Markup and has spent decades covering the technology industry's erosion of privacy, she has built a career holding Silicon Valley accountable. When she discovered that Grammarly was selling access to an AI version of her, her response was swift and unequivocal.
"I'm suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me—and many other writers and journalists—without consent," Angwin wrote on social media.
The federal lawsuit, filed on March 11 in the Southern District of New York, states that Angwin, on behalf of herself and others similarly situated, "challenges Grammarly's misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits for Grammarly and its owner, Superhuman".
| **Lawsuit Details** | **Information** |
| :--- | :--- |
| **Plaintiff** | Julia Angwin (lead), class-action status |
| **Defendants** | Superhuman, Grammarly |
| **Court** | U.S. District Court, Southern District of New York |
| **Damages Sought** | **$5 million+** |
| **Legal Basis** | Right of Publicity, misappropriation of name and identity |
### The Legal Argument
The lawsuit argues that it is "unlawful to appropriate peoples' names and identities for commercial purposes," whether those people are famous or not. The law firm involved—Peter Romer-Friedman Law PLLC—is seeking not just damages but also an injunction to prevent Grammarly from using writers' identities without consent going forward.
Peter Romer-Friedman, Angwin's attorney, was blunt about the legal precedent: "For over 100 years, New York law has prohibited companies from using a person's name for commercial purposes without their consent. The law does not provide an exception for technology companies or AI".
The complaint specifically calls out the irony of Grammarly's defense: a disclaimer on its website claimed that references to experts "are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities". Angwin's team argues that the disclaimer is legally irrelevant: you cannot cure the unauthorized commercial use of someone's name by disclaiming an endorsement you never obtained.
### The Quality Issue
Angwin took particular offense at the quality of the advice her AI doppelgänger was dispensing. "It wasn't even just anodyne," she told WIRED. "It was actually kind of actively making it worse".
In one example, Grammarly's version of Angwin suggested that a simple sentence be revised to be longer and more complex in a way that "actually made it harder to understand." In another case, it advised the user to expand on a theme that was not actually pertinent to the text.
"It felt very scattershot to me," Angwin said. "I was surprised at how bad it was".
This is a critical point: the lawsuit isn't just about unauthorized use of identity—it's about the potential for reputational damage when an AI version of you gives bad advice. For writers whose careers are built on the quality of their judgment, having a "you" that dispenses mediocre or actively harmful suggestions is a direct threat to professional standing.
---
## Part 2: The 'Right of Publicity' – Why Selling a Voice Without Consent Is Illegal
### The Legal Doctrine
At the heart of Angwin's lawsuit is the **Right of Publicity**—a legal principle that gives individuals the exclusive right to control the commercial use of their name, image, and likeness.
| **Right of Publicity Elements** | **Application to Grammarly** |
| :--- | :--- |
| **Commercial Use** | Grammarly charged $12/month for access to Expert Review |
| **Identifiable Person** | Real names of journalists, authors, and academics |
| **No Consent** | None of the experts were asked for permission |
| **Commercial Harm** | Reputational damage from poor-quality AI advice |
The doctrine has a long history in American law, dating back to the late 19th century when courts first recognized that individuals have a property interest in their own identity. In the modern era, it's been applied to everything from unauthorized use of celebrity photos in advertising to video games that feature real athletes without licensing.
What makes the Grammarly case novel is the medium: AI-generated text attributed to real people. But the legal principle, according to Angwin's attorneys, remains the same.
"Legally, we think it's a pretty straightforward case," Romer-Friedman told WIRED.
### The New York and California Connection
The lawsuit was filed in New York, which has some of the strongest right of publicity protections in the country. Superhuman, Grammarly's parent company, is based in California, which has its own robust right of publicity statute.
Both states have recognized that the right to control one's identity extends beyond mere celebrity endorsement to any unauthorized commercial use. Grammarly's disclaimer that the experts hadn't endorsed the product does not change the analysis: the violation lies in the commercial use of the name itself, not in any implied endorsement.
### The Precedent Problem for AI
The Grammarly case is likely the first of many. As AI tools become more sophisticated, the ability to generate text "in the style of" specific individuals will only grow. The question courts will have to answer is: where is the line between permissible stylistic imitation and unlawful misappropriation of identity?
Grammarly's Expert Review didn't just imitate style—it used real names. Users could select "Stephen King" or "Julia Angwin" from a dropdown menu and receive feedback purportedly from that person. That's not imitation—that's impersonation, and it's exactly what right of publicity laws were designed to prevent.
---
## Part 3: The 'Expert Review' Agent – What Grammarly Actually Built
### The Feature That Crossed the Line
In August 2025, Grammarly launched eight AI agents designed to assist with writing. One of them was the **Expert Review agent**, which promised to scan a user's text and provide feedback "inspired by" the styles of famous authors, journalists, and academics.
A page on Grammarly's website (since taken down) stated that Expert Review "[drew] on insights from subject-matter experts and trusted publications," and provided AI-generated feedback "based on publicly available expert content". Users could even personalize which "expert" sources Grammarly drew from by selecting the names of specific authors.
| **Expert Review Feature** | **Details** |
| :--- | :--- |
| **Launch Date** | August 2025 |
| **Availability** | Free and $12 Pro plans |
| **Function** | AI-generated feedback "inspired by" specific experts |
| **Expert Selection** | Users could choose from dropdown of real names |
| **Status** | Disabled March 13, 2026 |
### The Publicity Page
Grammarly promoted the feature heavily. A blog post announcing the eight agents stated: "Expert Review agent offers subject-matter expertise and personalized, topic-specific feedback to elevate writing that meets rigorous academic or professional standards tailored to the user's field".
The feature was designed to be sticky. If you were writing an academic paper, you could get feedback in the style of a famous scholar. If you were writing a novel, you could get notes from Stephen King. It was, in theory, a powerful tool for writers seeking guidance from the greats.
There was just one problem: the greats hadn't agreed to participate.
### The Disclaimer Defense
Grammarly did include a disclaimer. The tool's user guide noted that references to experts "are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities".
But the same page also claimed that Expert Review offers "insights from leading professionals, authors, and subject-matter experts". For writers like Casey Newton, founder of Platformer, the contradiction was glaring.
"[Grammarly] curated a list of real people, gave its models free rein to hallucinate plausible-sounding advice on their behalf, and put it all behind a subscription," Newton wrote. "That's a deliberate choice to monetize the identities of real people without involving them, and it sucks".
---
## Part 4: The 'Opt-Out' vs. 'Opt-In' Disaster – Why Writers Were Furious
### The Initial Response
When the backlash first erupted, Grammarly's initial response was to offer an **opt-out** mechanism. On Monday, March 9, the company announced that writers who did not want their identities used in Expert Review could email them to be removed.
The response from the writing community was immediate and withering.
"Opt-out via email is a laughably inadequate recourse for selling a product that verges on impersonation and profits on unearned credibility," wrote Wes Fenlon, a gaming journalist whose persona was used in the tool.
### The Burden Problem
The fundamental unfairness of an opt-out system is that it places the burden on the person whose rights have been violated. Experts were never told that Grammarly was using their identity. They had no way of knowing they were included unless a Grammarly user happened to see their name and inform them.
For deceased experts like Carl Sagan and bell hooks, even that path was impossible. Their families had no way of knowing that Grammarly was using their loved ones' identities for commercial purposes.
### The Impossibility for the Deceased
The opt-out approach completely failed to address the use of dead writers' identities. Deceased experts cannot opt out. Their families may not even know that their loved one's name is being used to sell AI subscriptions.
"So Grammerly [sic] is violating the memory of bell hooks AND making AI versions of the rest of us before we're even dead," wrote researcher Sarah J. Jackson.
Ketan Joshi, a climate writer, was even more direct: "That this even existed in the first place suggests a total disconnect from normal human society. It should've been immediately obvious that this was exploitative and creepy and cruel".
### The Opt-In Alternative
What writers demanded—and what the law likely requires—is an **opt-in** system. Grammarly should have asked for permission before using anyone's name. They should have negotiated licenses, paid fees, and respected the autonomy of the people whose identities they were commercializing.
Instead, they built first and asked forgiveness later. On March 12, after the lawsuit was filed and the backlash reached a fever pitch, they finally acknowledged that opt-out wasn't enough. CEO Shishir Mehrotra announced the feature would be disabled entirely while the company "reimagined" its approach.
---
## Part 5: The Deceased Experts – Carl Sagan, bell hooks, and the Ethics of Resurrection
### The Sagan Problem
Among the experts cloned by Grammarly was **Carl Sagan**, the legendary astronomer and science communicator who died in 1996. His name was used to lend credibility to AI-generated editing suggestions that he never wrote, never reviewed, and never endorsed.
Sagan's family had no say in this. They weren't consulted. They weren't offered payment. They simply discovered, along with the rest of the world, that the famous astronomer had been digitally resurrected as an AI editor.
### The hooks Problem
**bell hooks**, the beloved feminist author and social activist who died in 2021, suffered the same fate. Her identity was used to sell Grammarly subscriptions without any permission from her estate.
For writers and academics who revered hooks, this was a particular betrayal. hooks spent her career fighting against systems of exploitation and appropriation. To have her name used without consent by a corporation selling AI subscriptions was a bitter irony.
### The Legal Gap
Current right of publicity laws vary significantly in how they treat deceased individuals. Some states, like California, protect the commercial rights of deceased celebrities for 70 years after death. Others have more limited protections.
The Grammarly case highlights a gap in the law: what happens when a deceased person's identity is used not in traditional media (movies, advertisements, merchandise) but in an AI system that generates new content? The law has not caught up to the technology.
### The Ethical Question
Beyond the legal questions are ethical ones. Is it appropriate to use dead people's names to sell AI products? Should there be a statute of limitations on digital resurrection? And who has the right to speak for the dead—their families, their estates, or no one at all?
Grammarly's CEO acknowledged that the company "fell short" but did not directly address the use of deceased experts. The lawsuit may force that conversation.
---
## Part 6: The Apology and Shutdown – What Grammarly Did Next
### The LinkedIn Mea Culpa
On March 12, CEO Shishir Mehrotra posted a lengthy apology on LinkedIn. It was the kind of corporate mea culpa that has become familiar in the AI era: acknowledgment of failure, expression of regret, promise to do better.
"Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices," Mehrotra wrote. "This kind of scrutiny improves our products, and we take it seriously. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward".
He explained the original intent: "the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans".
Then came the announcement: "After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented—or not represented at all".
### The Future Vision
Mehrotra also outlined a vision for how Grammarly might approach expert identities in the future—one that would require affirmative participation rather than unilateral appropriation.
"We deeply believe in our mission to solve the 'last mile of AI' by bringing AI directly to where people work, and we see this as a significant opportunity for experts," he wrote. "For millions of users, Grammarly is a trusted writing sidekick—ever-present in every application, ready to help. We're opening up this platform so anyone can build agents that work like Grammarly—expanding from one sidekick to a whole team".
The key phrase: "in this world, experts choose to participate, shape how their knowledge is represented, and control their business model".
### The Skepticism
The apology was well-received by some, but skepticism remains. As one commentator noted on PR Daily: "The apology came only after the backlash, which means it'll be harder to rebuild trust if Grammarly is perceived as being careless or unethical. Intent doesn't matter if the perception is negative".
The lawsuit continues. The $5 million damages claim hasn't been withdrawn. And the experts whose identities were appropriated have not, for the most part, accepted Mehrotra's apology as sufficient.
---
## Part 7: The American Writer's and Investor's Playbook
### What This Means for Writers
For American writers, journalists, and academics, the Grammarly case is a wake-up call. Your identity has commercial value. AI companies are already using it without your permission. And the law may be your only protection.
| **Action for Writers** | **Why It Matters** |
| :--- | :--- |
| **Check for unauthorized use** | Your name may be in AI training data |
| **Document any findings** | Screenshots can support legal claims |
| **Join class actions** | Angwin's lawsuit is seeking additional plaintiffs |
| **Understand right of publicity** | You have legal rights to control your identity |
| **Consider licensing** | Some AI companies may eventually pay for consent |
Angwin's attorney has put out a call for any writers who were impacted to join the class action. "Lots of folks" have already made inquiries.
### What This Means for AI Investors
For investors in AI companies, the Grammarly case is a warning. The right of publicity is a significant legal risk that many AI companies have ignored. If courts rule that training AI on people's identities without consent is unlawful, the liability could be enormous.
| **Risk for AI Companies** | **Potential Impact** |
| :--- | :--- |
| Right of publicity claims | $5M+ per class action |
| Reputational damage | Trust erosion with creators |
| Regulatory scrutiny | Potential FTC or state AG actions |
| Licensing costs | Future need to pay for consent |
### The Licensing Future
The ultimate resolution of the Grammarly case may be a licensing regime. If AI companies want to use real people's identities to sell products, they may need to pay for that right—just as advertisers pay celebrities for endorsements.
Mehrotra's vision of a future where "experts choose to participate, shape how their knowledge is represented, and control their business model" suggests that Grammarly is already thinking about this path.
---
### FREQUENTLY ASKED QUESTIONS (FAQs)
**Q1: What is the $5 million lawsuit against Grammarly?**
A: Journalist Julia Angwin filed a class-action lawsuit against Grammarly and its parent company Superhuman, alleging they misappropriated the names and identities of hundreds of writers without consent to sell AI subscriptions. Damages sought exceed $5 million.
**Q2: What is the "Right of Publicity"?**
A: The right of publicity is a legal doctrine that gives individuals the exclusive right to control the commercial use of their name, image, and likeness. Angwin's lawsuit argues that Grammarly violated this right by using writers' identities to sell its Expert Review feature.
**Q3: What was the "Expert Review" agent?**
A: Expert Review was a Grammarly AI feature that provided editing suggestions "inspired by" the styles of famous authors, journalists, and academics—including Stephen King, Carl Sagan, and bell hooks. Users could select specific experts from a dropdown menu.
**Q4: What was the "Opt-Out" vs. "Opt-In" controversy?**
A: When experts complained, Grammarly initially offered an "opt-out" mechanism where writers could email to be removed. Critics argued this was inadequate because it placed the burden on victims to discover they'd been cloned, and didn't address deceased experts at all.
**Q5: Which deceased experts were used without family consent?**
A: Carl Sagan (died 1996) and bell hooks (died 2021) were among the deceased experts whose identities were used in Expert Review. Their families were never consulted.
**Q6: How did Grammarly respond to the backlash?**
A: CEO Shishir Mehrotra apologized on LinkedIn, acknowledged the company "fell short", and announced that Expert Review would be disabled while Grammarly reimagines the feature to give experts "real control" over participation.
**Q7: Can Grammarly be sued for using dead people's identities?**
A: Right of publicity laws vary by state. Some states protect deceased celebrities' commercial rights for decades after death. The lawsuit may test how these laws apply to AI systems.
**Q8: What's the single biggest takeaway from this case?**
A: AI companies cannot assume they have the right to use real people's identities without consent. The right of publicity is a significant legal constraint on AI development, and companies that ignore it face lawsuits, reputational damage, and liability that could scale far beyond any single $5 million claim as class actions multiply.
---
## Conclusion: The End of Unlicensed Persona-Bots
On March 13, 2026, Grammarly disabled a feature that should never have been built in the first place. The Expert Review agent—which used the names and reputations of hundreds of writers to sell AI subscriptions—is gone. In its place is a $5 million lawsuit, a class of angry writers, and a fundamental question about the future of AI and identity.
The numbers tell the story of a technology that outpaced its ethical boundaries:
- **$5 million** – The damages sought in Angwin v. Superhuman
- **Hundreds** – The number of writers whose identities were used
- **12 million** – The number of subscribers who may have accessed Expert Review
- **1996** – The year Carl Sagan died, before he could consent to being an AI editor
- **2021** – The year bell hooks died, before her identity could be commercialized
- **March 13, 2026** – The date the feature was finally disabled
For the writers whose names were used, the experience was a violation. For the company that built the feature, it was a miscalculation of epic proportions. And for the AI industry, it's a warning: you cannot build products on the backs of real people without their permission.
The right of publicity is not a relic of the pre-digital age. It is a living legal doctrine that applies with full force to AI. If you use someone's name to sell your product, you need their consent. Period.
The age of building first and asking forgiveness later is ending. The age of **consent-based AI** has begun.



