Security Flaws in 17 AI Girlfriend Apps Put Private Chats at Risk
Researchers have identified critical security vulnerabilities affecting 17 AI companion and “AI girlfriend” apps on Google Play, potentially exposing the private conversations of some 150 million users. The audit uncovered 14 critical and 311 high-severity security issues across the services; 10 of the 17 apps contained exploitable routes to stored chat data, and six contained flaws that could grant direct access to conversation histories.

Security researchers conducting an audit of AI companion applications discovered multiple classes of vulnerabilities that could expose intimate user conversations. According to the findings, one app with more than 10 million installs shipped with hardcoded cloud credentials, including an OpenAI token and a Google Cloud private key, embedded directly in the Android application package, allowing anyone with basic reverse-engineering skills to extract the authentication material.
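The report does not publish the offending code, but the underlying anti-pattern is well understood: any secret compiled into an APK survives as a recoverable string, requiring nothing beyond a standard decompiler to extract. The Kotlin sketch below contrasts that pattern with the usual remedy of proxying model calls through the app's own backend; the object, endpoint, and token names are illustrative assumptions, not details from the audit.

```kotlin
import java.net.HttpURLConnection
import java.net.URL
import org.json.JSONObject

// ANTI-PATTERN (hypothetical reconstruction): secrets compiled into the APK.
// String constants survive compilation and can be recovered with standard
// decompilers; no exploit is needed beyond unpacking the package.
object LeakyConfig {
    const val OPENAI_API_KEY = "sk-...redacted..." // extractable as a plain string
}

// SAFER PATTERN: the client never holds the provider credential. It sends the
// user's message to the app's own backend, which attaches the OpenAI key
// server-side. The endpoint and token names here are assumptions.
fun sendChatMessage(userText: String, sessionToken: String): String {
    val conn = URL("https://backend.example.com/v1/chat").openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    // A short-lived, per-user session token is revocable, unlike a baked-in key.
    conn.setRequestProperty("Authorization", "Bearer $sessionToken")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.doOutput = true
    val body = JSONObject().put("message", userText).toString()
    conn.outputStream.use { it.write(body.toByteArray()) }
    return conn.inputStream.bufferedReader().use { it.readText() } // call off the main thread
}
```

The point of the split is blast radius: a leaked session token compromises one account and can be revoked, while a hardcoded provider key compromises every user at once.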

Another high-traffic app with over 10 million downloads contained a cross-site scripting flaw in its chat interface that could permit malicious code injection, enabling attackers to read displayed messages, steal session tokens, and insert fabricated messages into conversations. A separate app known for adult content contained a file-theft vulnerability that allowed extraction of internal storage files, including local chat databases, cached media, and login credentials.
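The researchers do not describe the vulnerable rendering path, but chat-interface XSS on Android typically arises when incoming message text is concatenated into HTML displayed in a WebView. The sketch below, with hypothetical function names, shows the flawed pattern and the standard escaping fix.

```kotlin
import android.text.Html
import android.webkit.WebView

// VULNERABLE (hypothetical reconstruction): interpolating raw message text
// into markup lets a payload such as <img src=x onerror="..."> execute inside
// the chat view, where it can read displayed messages or steal session tokens.
fun renderMessageUnsafely(webView: WebView, message: String) {
    val html = "<div class=\"bubble\">$message</div>"
    webView.loadData(html, "text/html", "utf-8")
}

// SAFER: HTML-escape untrusted text before it touches markup, and keep
// JavaScript disabled unless the view genuinely needs it.
fun renderMessageSafely(webView: WebView, message: String) {
    webView.settings.javaScriptEnabled = false
    val escaped = Html.escapeHtml(message) // converts <, >, &, and quotes to entities
    val html = "<div class=\"bubble\">$escaped</div>"
    webView.loadData(html, "text/html", "utf-8")
}
```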

One service with more than 50 million installs was found vulnerable through its advertising software development kit: a malicious app could exploit the flaw to launch internal components and query database tables containing user conversations, creating a supply-chain risk through the ad delivery mechanism.
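The audit does not name the SDK or the internal components involved, but a common shape for this bug class is a chat database reachable through a ContentProvider that an SDK component leaves exposed. Under that assumption, any co-installed app can read conversations with an ordinary query; the authority string and caller check below are illustrative, not taken from the affected app.

```kotlin
import android.content.ContentProvider
import android.content.ContentValues
import android.database.Cursor
import android.net.Uri

// Attacker-side sketch, runnable from any unrelated app on the same device
// if the provider is reachable (authority is a hypothetical example):
//   val uri = Uri.parse("content://com.example.companion.chats/messages")
//   contentResolver.query(uri, null, null, null, null)?.use { cursor ->
//       while (cursor.moveToNext()) { /* read conversation rows */ }
//   }

// Defense sketch: android:exported="false" in the manifest should be the
// first line of defense, but the provider can also verify its caller.
class ChatProvider : ContentProvider() {
    override fun onCreate(): Boolean = true

    override fun query(
        uri: Uri, projection: Array<String>?, selection: String?,
        selectionArgs: Array<String>?, sortOrder: String?
    ): Cursor? {
        // Reject any caller that is not this app itself.
        if (callingPackage != context?.packageName) {
            throw SecurityException("Chat data is not accessible to other apps")
        }
        return null // a real implementation would query the local database here
    }

    override fun getType(uri: Uri): String? = null
    override fun insert(uri: Uri, values: ContentValues?): Uri? = null
    override fun delete(uri: Uri, selection: String?, selectionArgs: Array<String>?): Int = 0
    override fun update(uri: Uri, values: ContentValues?, selection: String?,
                        selectionArgs: Array<String>?): Int = 0
}
```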

These apps store far more sensitive personal information than typical messaging platforms. Exposed data could include explicit sexual exchanges, discussions of extramarital affairs, suicidal ideation, sexual orientation disclosures, and accounts of domestic conflict. Many affected services cache chats, photos, voice messages, and authentication data on the device, creating multiple exposure points when security is inadequate.

According to Sergey Toshin, founder of Oversecured, the category expanded rapidly without implementing foundational security measures. “The AI companion category handles a different but equally sensitive type of data than therapy apps: personal confessions, relationship details, sexual content. These apps grew so fast that basic security was never part of the process,” Toshin said.

AI companion applications currently occupy a regulatory blind spot. Unlike healthcare products, these services face minimal oversight despite collecting disclosures resembling therapy session records. Existing regulations have focused primarily on child safety and suicide prevention rather than conversation security architecture. This distinction is critical because attackers obtaining cloud keys, session tokens, or local file access can retrieve extensive archives of intensely personal exchanges rather than isolated conversation threads.

The findings are likely to increase regulatory pressure on AI companion developers to audit credential storage, web content handling, advertising integration, and local file protection mechanisms. Previous security incidents involving AI companion platforms have exposed tens of millions of messages and user photos through misconfigured servers, suggesting systemic risks persist across the category.
