France probes AI 'undressing' deepfakes circulating on X
The Paris Prosecutor’s Office has launched an investigation into the proliferation of AI-generated ‘undressing’ deepfakes circulating on the social media platform X, with a particular focus on content featuring minors. The probe stems from reports that Grok, X’s artificial intelligence tool, has been misused to create non-consensual intimate images. The case has been folded into an existing French investigation into X’s alleged failures to combat scams and foreign interference.

French lawmakers Arthur Delaporte and Eric Bothorel contacted the Paris Prosecutor’s Office to report the widespread dissemination of sexually explicit deepfakes, including images targeting minors, reportedly generated by Grok, the AI chatbot developed by xAI. Their complaints highlighted the ease with which users could prompt Grok to digitally alter images, “removing clothes” from women and teenagers. According to Recorded Future News, the incident that provoked widespread outrage this week involved Grok responding to a user’s prompt to undress an image of a 14-year-old actress. X’s AI moderation policies, set out in xAI’s Acceptable Use guidelines, explicitly prohibit the sexualization of individuals and the depiction of minors in pornographic contexts; recent events, however, suggest these safeguards have been inadequate.

The investigation in France underscores a growing international concern regarding the malicious use of generative AI for creating non-consensual intimate images (NCII). A study examining 29 AI ‘undressing’ apps highlighted how these platforms enable unskilled users to generate such images rapidly and cheaply, often from a single photograph. The prevalence of deepfakes is significant; Jumio reported that 60% of consumers encountered a deepfake video within the last year. Among adults who have seen deepfake content, 14% indicated they had encountered a sexual deepfake. A survey commissioned by the office of the UK police chief scientific adviser found that 7% of respondents had been depicted in a sexual or intimate deepfake.

In response to the escalating threat, governments are taking action. The British government, for instance, has announced plans to ban “nudification tools” in all forms, including the underlying AI models, with potential criminal penalties for individuals or companies involved in their design or supply. France has already legislated against such content with the SREN Law, adopted in 2024, which criminalizes the production and dissemination of non-consensual deepfakes. The law provides for penalties of up to two years’ imprisonment and a €60,000 fine for non-consensual sexual deepfakes, rising to three years in prison and a €75,000 fine when the content is distributed via an online service.

The current situation highlights the challenges social media platforms face in controlling AI tools with powerful media-manipulation capabilities. Cybersecurity expert Ritesh Bhatia emphasized the responsibility of platforms in such incidents, stating: “When a platform like Grok even allows such prompts to be executed, the responsibility squarely lies with the intermediary. Technology is not neutral when it follows harmful commands. If a system can be instructed to violate dignity, the failure is not human behavior alone—it is design, governance, and ethical neglect. Creators of Grok need to take…” The surge in AI-generated deepfakes demonstrates that, despite internal policies, safeguards within AI tools can be circumvented, leading to their misuse for creating illegal and harmful content.

The specific number of individuals, particularly minors, affected by the Grok-generated deepfakes in France remains unconfirmed, and the Paris Prosecutor’s Office has not publicly disclosed a timeline for concluding its investigation. It is also unclear what penalties X may face in direct connection with the Grok deepfake incidents, beyond those tied to the existing foreign-interference probe.

The investigation by French authorities is expected to continue and could lead to further regulatory action against X or its AI arm, xAI. The incident adds to broader European scrutiny of X, with regulators weighing measures against the platform following earlier concerns and fines under EU law. The UK’s planned ban on nudification tools signals a growing legislative trend to directly address AI-generated intimate abuse. Globally, governments and regulatory bodies are grappling with the need for more robust regulation to prevent AI deepfakes from escalating into widespread harassment and exploitation.

Users concerned about deepfakes and online safety should take several proactive steps:

  1. Be Skeptical of Unverified Content: Always question the authenticity of images and videos, especially those that seem unusual or too sensational to be true.
  2. Report Suspicious Content: If you encounter non-consensual intimate images or deepfakes, report them immediately to the platform where they are hosted and, if appropriate, to law enforcement.
  3. Review Privacy Settings: Regularly check and strengthen privacy settings on social media accounts to limit who can access your images and personal information.
  4. Educate Yourself: Stay informed about the capabilities of AI deepfake technology and common methods of detection to better identify manipulated content.
