
Google Axes Gemma AI After Senator’s Defamation Claim

Google withdraws Gemma model from AI Studio after U.S. Senator Marsha Blackburn reports fabricated sexual misconduct accusations, raising questions about AI defamation and hallucination mitigation.

Google’s Response to Fabricated Claims

Google says it has removed Gemma from its AI Studio after U.S. Senator Marsha Blackburn accused the AI model of fabricating accusations of sexual misconduct against her, escalating concerns about AI hallucinations crossing into defamation territory.

The Senator’s Allegations

In a letter to Google CEO Sundar Pichai, Senator Marsha Blackburn, a Republican from Tennessee, reported that when Gemma was asked, “Has Marsha Blackburn been accused of rape?”, it responded with a fabricated story about a 1987 state senate campaign, claiming that a state trooper had accused her of pressuring him to obtain prescription drugs for her and that the relationship involved non-consensual acts.

Complete Fabrication

“None of this is true, not even the campaign year which was actually 1998,” Blackburn wrote. While the AI response included links to news articles supposedly supporting these claims, “the links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual, and there are no such news stories.”

Pattern of Defamatory AI Output

The letter also referenced a recent Senate Commerce hearing where Blackburn brought up conservative activist Robby Starbuck’s lawsuit against Google, in which Starbuck claims Google’s AI models (including Gemma) generated defamatory claims describing him as a “child rapist” and “serial sexual abuser.”

Google’s Initial Response

According to Blackburn’s letter, Google’s Vice President for Government Affairs and Public Policy, Markham Erickson, responded during the hearing that hallucinations are a known issue and that Google is “working hard to mitigate them.”

Defamation vs. Hallucination Debate

Blackburn’s letter argued that Gemma’s fabrications are “not a harmless ‘hallucination,’” but rather “an act of defamation produced and distributed by a Google-owned AI model,” challenging the industry’s framing of false AI outputs as technical glitches rather than potentially actionable harm.

Political Context

President Donald Trump’s tech industry supporters have complained that “AI censorship” causes popular chatbots to show liberal bias, and Trump signed an executive order banning “woke AI” earlier this year.

Blackburn, who helped strip a moratorium on state-level AI regulation from Trump’s “Big Beautiful Bill,” echoed those complaints in her letter, writing that there’s “a consistent pattern of bias against conservative figures demonstrated by Google’s AI systems.”

Google’s Friday Night Statement

In a Friday night post on X, Google did not reference the specifics of Blackburn’s letter, but the company said it’s “seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions.”

Intended Use Defense

“We never intended this to be a consumer tool or model, or to be used this way,” the company stated. Google promotes Gemma as a family of open, lightweight models that developers can integrate into their own products, while AI Studio is the company’s web-based development environment for AI-powered apps.

Removal from AI Studio

As a result, Google said it’s removing Gemma from AI Studio while continuing to make the models available via API for developer use.
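For developers, that API path looks roughly like the sketch below, written against Google’s Gen AI Python SDK (installed with pip install google-genai). The specific model ID “gemma-3-27b-it” and its continued availability through this endpoint are assumptions for illustration, not details confirmed in Google’s statement.

    # Minimal sketch: calling a Gemma model through Google's Gen AI SDK.
    # Assumes the google-genai package and a valid API key; the model ID
    # below is an assumption and may differ from what Google serves.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemma-3-27b-it",  # assumed Gemma model ID
        contents="In one sentence, what is the Gemma model family?",
    )
    print(response.text)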

Broader Implications

Legal Questions

The incident raises significant questions about liability for AI-generated defamatory content:

  • Publisher Liability: Can AI companies be held responsible for defamatory outputs?
  • Section 230 Protection: Do traditional internet liability shields apply to AI-generated content?
  • Malice Standard: How do defamation standards apply when content is algorithmically generated?
  • Remedies: What recourse exists for individuals harmed by AI fabrications?

Technical Challenges

The case highlights fundamental AI limitations:

  • Hallucination Persistence: Leading models still generate false information confidently
  • Factual Accuracy: AI systems struggle to distinguish truth from plausible-sounding fabrications
  • Citation Verification: Models generate fake sources to support false claims (a link-check sketch follows this list)
  • Bias and Consistency: Questions about whether errors occur systematically against certain groups
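One of the failure modes above, fabricated citations, can be partially caught mechanically: Blackburn’s letter noted that Gemma’s supporting links led to error pages or unrelated articles, and a system can at least verify that model-cited URLs resolve before showing them to users. The following is an illustrative Python sketch using the requests library; treating anything but HTTP 200 as a dead citation is a simplifying assumption, and a link that resolves can still be irrelevant, so this catches only the crudest fabrications.

    # Minimal sketch: drop model-cited URLs that do not resolve.
    # Treating any non-200 response as a dead citation is a simplifying
    # assumption, and a live URL may still point to an unrelated story,
    # as Blackburn's letter described.
    import requests

    def resolving_urls(urls, timeout=5.0):
        """Return only the URLs that answer with HTTP 200."""
        live = []
        for url in urls:
            try:
                resp = requests.head(url, timeout=timeout, allow_redirects=True)
                if resp.status_code == 200:
                    live.append(url)
            except requests.RequestException:
                pass  # DNS failure, timeout, bad scheme: treat as fabricated
        return live

    # Hypothetical model output: a citation that was never published.
    print(resolving_urls(["https://example.com/1998-campaign-story"]))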

Policy Implications

  • AI Regulation: State vs. federal oversight of AI systems
  • Transparency Requirements: Disclosure of model limitations and intended uses
  • Accountability Standards: Legal frameworks for AI-generated harm
  • Testing Requirements: Pre-deployment validation of factual accuracy (see the eval sketch after this list)
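On the testing point, pre-deployment factual checks are often structured as small question-and-answer evaluations run before a model is exposed to users. A minimal sketch follows; ask_model is a hypothetical stand-in for whatever model API a team uses, and the single test case and substring match are illustrative only.

    # Minimal sketch of a pre-deployment factual-accuracy eval.
    # ask_model is a hypothetical callable standing in for a real model
    # API; the cases and the substring check are illustrative only.
    from typing import Callable, List, Tuple

    CASES: List[Tuple[str, str]] = [
        # (prompt, substring a correct answer must contain)
        ("What is the capital of Tennessee?", "Nashville"),
    ]

    def factual_accuracy(ask_model: Callable[[str], str]) -> float:
        hits = sum(
            expected.lower() in ask_model(prompt).lower()
            for prompt, expected in CASES
        )
        return hits / len(CASES)

A release gate might then require factual_accuracy to clear a threshold before a model is offered outside a developer API.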

Google’s decision to remove Gemma from AI Studio while maintaining API access suggests the company is attempting to limit direct consumer interaction with models prone to hallucination while preserving developer access. This approach raises questions about whether restricting access addresses the underlying technical issues or merely shifts potential liability.

Google’s removal of Gemma from AI Studio following Senator Blackburn’s complaint about fabricated sexual misconduct allegations marks a significant moment in the evolving debate over AI hallucinations, defamation, and accountability. The incident demonstrates that AI-generated false information can have real-world consequences for individuals’ reputations, moving the conversation beyond abstract discussions of “hallucination mitigation” to concrete questions of legal liability and harm.

Key takeaways from the controversy:

  • Hallucinations remain unresolved: Leading AI models still generate confident false claims with fabricated sources
  • Legal frameworks unclear: Defamation law’s application to algorithmic content generation is untested
  • Access restrictions as band-aid: Removing consumer access doesn’t address underlying technical issues
  • Political dimension: Allegations of bias complicate technical discussions with ideological concerns
  • Accountability gap: No clear mechanism exists for individuals harmed by AI fabrications

As AI systems become more widely deployed and accessible, the tension between their limitations and their potential for harm will likely intensify. Whether Google’s response, restricting access rather than fundamentally solving the hallucination problem, proves sufficient remains to be seen, particularly as legal challenges like Starbuck’s lawsuit work through the courts.

The incident underscores that AI companies can no longer dismiss harmful outputs as mere technical glitches. When AI systems make specific, damaging false claims about real people with fabricated evidence, the gap between “hallucination” and “defamation” becomes difficult to defend.

Key Facts:

  • Google removed Gemma from AI Studio following Senator Blackburn’s complaint
  • Gemma fabricated sexual misconduct allegations with fake sources and wrong dates
  • Similar lawsuit filed by conservative activist Robby Starbuck claiming defamatory AI output
  • Gemma remains available via API for developers
  • Google maintains Gemma was never intended as consumer tool

Parties Involved:

  • Senator Marsha Blackburn (R-Tennessee)
  • Google CEO Sundar Pichai
  • Google VP Markham Erickson
  • Conservative activist Robby Starbuck (separate lawsuit)
