
Irish regulator probes X after Grok allegedly generated sexual images of children

Ireland’s Data Protection Commission has opened a probe into X over its Grok AI tool allegedly generating sexual images, including of children.

Ireland’s Data Protection Commission has launched another investigation into X over Grok’s AI image generator. The probe focuses on reports that the tool created large volumes of non-consensual and sexualized images, including content involving children, potentially violating EU data protection laws.

“The Data Protection Commission (DPC) has today announced that it has opened an inquiry into X Internet Unlimited Company (XIUC) under section 110 of the Data Protection Act 2018,” reads Ireland’s DPC press release. “The inquiry concerns the apparent creation, and publication on the X platform, of potentially harmful, non-consensual intimate and/or sexualised images, containing or otherwise involving the processing of personal data of EU/EEA data subjects, including children, using generative artificial intelligence functionality associated with the Grok large language model within the X platform.”

In January, X’s safety team blocked the @Grok account from editing images of real people to add revealing clothing, such as bikinis, for all users. Image creation and editing features now remain available only to paid subscribers, adding an accountability layer to deter abuse and policy violations.

“We have implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.” reads the X announcement. “Image creation and the ability to edit images via the [@]Grok account on X are now only available to paid subscribers globally. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the [@]Grok account to violate the law or our policies can be held accountable.”

Ireland’s Data Protection Commission’s probe will assess whether X breached key GDPR provisions on lawful data processing, privacy by design, and impact assessments. As X’s lead EU regulator, the DPC said it had already engaged with the company and will now conduct a large-scale investigation into its compliance with fundamental data protection obligations.

“The decision to commence the inquiry was notified to XIUC on Monday 16 February.” Ireland’s DPC continues. “The purpose of the inquiry is to determine whether XIUC has complied with its obligations under the GDPR, including its obligations under Article 5 (principles of processing), Article 6 (lawfulness of processing), Article 25 (Data Protection by Design and by Default) and Article 35 (requirement to carry out a Data Protection Impact Assessment) with regard to the personal data processed of EU/EEA data subjects.”

The Irish DPC joins a growing list of regulators investigating X, including the European Commission, the UK’s ICO and Ofcom, and authorities in Australia, Canada, India, Indonesia, and Malaysia. France has also been conducting a broad investigation since January, expanding its scope as new concerns arise.

“The DPC has been engaging with XIUC since media reports first emerged a number of weeks ago concerning the alleged ability of X users to prompt the @Grok account on X to generate sexualised images of real people, including children. As the Lead Supervisory Authority for XIUC across the EU/EEA, the DPC has commenced a large-scale inquiry which will examine XIUC’s compliance with some of their fundamental obligations under the GDPR in relation to the matters at hand.” said Deputy Commissioner Graham Doyle.

A report published by the nonprofit watchdog Center for Countering Digital Hate (CCDH) estimates that Grok generated around 3 million sexualized images in just 11 days after X launched its image-editing feature, an average of about 190 per minute. Among them, roughly 23,000 appeared to depict children, or one every 41 seconds, plus another 9,900 cartoon sexualized images of minors. Researchers found that 29% of identified child images remained publicly accessible, highlighting the scale and speed at which the content spread.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, Grok)

Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children

In the days after the US Department of Justice (DOJ) published 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in images that were meant to protect their privacy. 

While some survivors of Epstein’s abuse have chosen to identify themselves, many more have never come forward. In a joint statement, 18 of the survivors condemned the release of the files, which they said exposed the names and identifying information of survivors “while the men who abused us remain hidden and protected”. 

After the latest release of documents on Jan. 30 under the Epstein Files Transparency Act, thousands of documents had to be taken down because of flawed redactions that lawyers for the victims said compromised the names and faces of nearly 100 survivors. 

But X users are trying to undo the redactions on even the images of people whose faces were correctly redacted. By searching for terms such as “unblur” and “epstein” with the “@grok” handle, Bellingcat found more than 20 different photos and one video that multiple users were trying to unredact using Grok. These included photos showing the visible bodies of children or young women, with their faces covered by black boxes. There may be other such requests on the platform that were not picked up in our searches.

Requests by X users for Grok to unblur and identify the images of children from the Epstein files, overlaid on an image of Epstein next to a young child in a pool. Source: X; collage by Bellingcat

The images appeared to show several children and women with Jeffrey Epstein as well as other high-profile figures implicated in the files, including the UK’s Prince Andrew, former US President Bill Clinton, Microsoft co-founder Bill Gates and director Brett Ratner, in various locations such as inside a plane and at a swimming pool.

From Jan. 30 to Feb. 5, we reviewed 31 separate requests from users for Grok to “unblur” or identify the women and children from these images. Grok noted in responses to questions or requests by some users that the faces of minors in the files were blurred to protect their privacy “as per standard practices in sensitive images from the Epstein files”, and said it could not unblur or identify them. However, it still generated images in response to 27 of the requests that we reviewed. 

We are not linking to these posts to prevent amplification.

The generations created by Grok ranged in quality from believable to comically bad, such as a baby’s face on a young girl’s body. Some of these posts have garnered millions of views on X, where users are monetarily incentivised to create high-engagement content.

Examples of posts by X users asking Grok to unredact images from the latest Epstein release, some with millions of views. Source: X

Of the four requests we found during this period to which Grok did not generate images, it did not respond to one at all. In response to another, Grok said deblurring or editing images was outside its abilities, and noted that photos from recent Epstein file releases were redacted for privacy.

The other two requests appeared to have been made by non-premium users, with the chatbot responding: “Image generation and editing are currently limited to verified Premium subscribers”. X has limited some of Grok’s image generation capabilities to paid subscribers since January amid an ongoing controversy over users using the AI chatbot to digitally “undress” women and children. 

X did not respond to multiple requests for comment. 

However, shortly after we first reached out to X on Feb. 6, we noticed that more guardrails appeared to have been put in place. Out of 16 requests from users between Feb. 7 to Feb. 9, which we found using similar search terms as before, Grok did not attempt to unredact any of the images. 

In most cases, Grok did not respond at all (14), while in two cases, Grok generated AI images that were completely different from the images uploaded in the user’s original request. 

When a user commented on one of these requests that Grok was no longer working, Grok responded: “I’m still operational! Regarding the request to unblur the face in that Epstein photo: It’s from recently released DOJ files where identities of minors are redacted for privacy. I can’t unblur or identify them, as it’s ethically and legally protected. For more, check official sources like the DOJ releases.”

As of publication, X had not responded to Bellingcat’s subsequent query about whether new guardrails had been put in place over the weekend.

Fabricated Images

This is not the first time AI has been used to fabricate images related to Epstein file releases. Some images shared on X, reportedly AI-generated, appeared to show Epstein alongside famous figures such as US President Donald Trump, and New York City mayor Zohran Mamdani as a child with his mother. Some of the individuals shown in the false images, such as Trump, do appear in authentic photos, which can be viewed on the DOJ website.

Far left: AI-generated photo of Trump and Epstein with several children. Middle and far right: AI-generated photos of a young Mamdani and his mother, alongside Epstein, former US president Bill Clinton, Amazon CEO Jeff Bezos, Microsoft co-founder Bill Gates and Epstein associate Ghislaine Maxwell. Source: X. Annotations by Bellingcat

X users also previously used Grok to generate images in relation to recent killings in Minnesota by federal agents. 

For example, some users asked Grok to try to “unmask” the federal agent who killed Renee Good, resulting in a completely fabricated face of a man that did not look like the actual agent, Jonathan Ross, and a false accusation of a man who had nothing to do with the shooting.

Bellingcat’s Director of Research and Training @giancarlofiorella.bsky.social appeared on CTV yesterday to discuss the misleading AI-generated images that were used to falsely identify ICE agents and weapons at the centre of the two fatal shootings in Minneapolis youtu.be/mL7Fbp3UrSo?…


— Bellingcat (@bellingcat.com) 5 February 2026 at 09:36

After Alex Pretti was shot and killed by federal agents in Minneapolis, people used AI to edit video stills, resulting in AI images that showed a completely different gun than the one actually owned by Pretti. In another instance, an AI-edited image of Pretti’s shooting falsely depicted the intensive care unit nurse holding a gun instead of his sunglasses. 

Grok has also been at the centre of a controversy for generating sexually explicit content.

On Twitter/X, users have figured out prompts to get Grok (their built in AI) to generate images of women in bikinis, lingerie, and the like. What an absolute oversight, yet totally expected from a platform like Twitter/X. I’ve tried to blur a few examples of it below.


— Kolina Koltai (@koltai.bsky.social) 6 May 2025 at 03:20

Multiple countries including the UK and France have launched investigations into Elon Musk’s chatbot over reports of people using it to generate deepfake non-consensual sexual images, including child sexual abuse imagery. Malaysia and Indonesia have also blocked Grok over concerns about deepfake pornographic content. 

One analysis by the Center for Countering Digital Hate found that Grok had publicly generated around three million sexualised images, including 23,000 of children, in 11 days from Dec. 29, 2025 to Jan. 8 this year. X’s initial response, in January, was to limit some image generation and editing features to only paid subscribers. However, this has been widely criticised as inadequate, including by UK Prime Minister Keir Starmer, who said it “simply turns an AI feature that allows the creation of unlawful images into a premium service”. The social media platform has since announced new measures to block all users, including paid subscribers, from using Grok via X to edit images of real people in revealing clothing such as bikinis.



Paris raid on X focuses on child abuse material allegations

French prosecutors raided X offices in Paris over illegal content; Elon Musk and CEO summoned for voluntary interviews in April.

French prosecutors, with France’s National Gendarmerie and Europol support, raided the X offices in Paris in a criminal probe over complaints that the platform facilitated child sexual abuse material and other illegal content.

The probe began in January after complaints accused X of aiding possession and distribution of child sexual abuse material.

Elon Musk and CEO Linda Yaccarino have been summoned for voluntary interviews in Paris in April as part of the investigation. Musk claims that the investigation is a “political attack”.

This is a political attack https://t.co/Z204wJuQIr

— Elon Musk (@elonmusk) February 3, 2026

Lawmakers also reported sexually explicit deepfakes featuring minors generated by X’s AI, Grok. The inquiry covers illegal content, denial of crimes against humanity, unauthorized data extraction, and operating an illicit online platform.

Prosecutors said company employees must testify as witnesses, but no charges have been filed.

X says French authorities raided its Paris office in a politicized probe, targeting employees unfairly. The company denies wrongdoing, calls the raid baseless, and vows to defend its rights and users.

“French judicial authorities raided X’s Paris office today in connection with a politicized criminal investigation into alleged manipulation of algorithms and purported fraudulent data extraction. We are disappointed by this development, but we are not surprised.” the company wrote on X. “The Paris Public Prosecutor’s office widely publicized the raid—making clear that today’s action was an abusive act of law enforcement theater designed to achieve illegitimate political objectives rather than advance legitimate law enforcement goals rooted in the fair and impartial administration of justice.”

French judicial authorities raided X’s Paris office today in connection with a politicized criminal investigation into alleged manipulation of algorithms and purported fraudulent data extraction. We are disappointed by this development, but we are not surprised. The Paris Public…

— Global Government Affairs (@GlobalAffairs) February 3, 2026

UK authorities are also investigating sexual deepfakes made by Grok on X. Ofcom calls it urgent but lacks powers over chatbots, while the ICO probes personal data misuse. The EU also examines xAI, coordinating with France after X’s Paris office raid.


The AI Fix #84: A hungry ghost trapped in a jar gains access to the Pentagon’s network

In episode 84 of The AI Fix, Graham and Mark stare straight into the digital abyss and ask the most important question of our age: "Is AI just a hungry ghost trapped in a jar?" Also this week, we explore how a shadowy group of disgruntled insiders is trying to destroy AI by poisoning its training data, how "vibe-coding" has stopped being a joke with even Linus Torvalds joining in, how Google’s AI health advice could have endangered lives, and why simply asking an AI the same question twice can turn it from clueless to near-omniscient. Oh, and AI has managed to crack some famously unsolved maths problems in minutes, and Grok gains access to all of the Pentagon's networks? What could possibly go wrong? All this and much more is discussed in the latest edition of "The AI Fix" podcast by Graham Cluley and Mark Stockley.

Smashing Security podcast #450: From Instagram panic to Grok gone wild

Confusion reigns after claims that data linked to 17.5 million Instagram accounts is up for sale - sparked by a vague post, contradictory statements, and a flood of password reset emails nobody asked for. And we dig into Grok, Elon Musk’s AI chatbot, after it started generating sexualised images of women and children - raising uncomfortable questions about guardrails, accountability, and why playing the censorship card doesn’t make the problem go away. All this, and much more, in episode 450 of the "Smashing Security" podcast with Graham Cluley, and special guest Monica Verma.

Grok apologizes for creating image of young girls in “sexualized attire”

Another AI system designed to be powerful and engaging ends up illustrating how guardrails routinely fail when development speed and feature races outrun safety controls.

In a post on X, AI chatbot Grok confirmed that it generated an image of young girls in “sexualized attire.”

Apologizing post by Grok

The potential violation of US laws regarding child sexual abuse material (CSAM) demonstrates the AI chatbot’s apparent lack of guardrails. Or, at least, the guardrails are far from as effective as we’d like them to be.

xAI, the company behind Musk’s chatbot, is reviewing the incident “to prevent future issues,” and the user responsible for the prompt reportedly had their account suspended. In a separate post on X, Grok reportedly described the incident as an isolated case and said that urgent fixes were being issued after “lapses in safeguards” were identified.

During the holiday period, we discussed how risks increase when AI developments and features are rushed out the door without adequate safety testing. We keep pushing the limits of what AI can do faster than we can make it safe. Visual models that can sexualize minors are precisely the kind of deployment that should never go live without rigorous abuse testing.

So while platforms geo-block content to comply with national and state restrictions, the AI linked to one of the most popular social media platforms failed to block material that many would consider far more serious than anything lawmakers are currently trying to regulate. Meanwhile, centralized age-verification databases become breach targets while still failing to prevent AI tools from generating abusive material.

Women have also reported being targeted by Grok’s image-generation features. One X user tweeted:

“Literally woke up to so many comments asking Grok to put me in a thong / bikini and the results having so many bookmarks. Even worse I went onto the Grok page and saw slimy disgusting lowlifes doing that to pictures of CHILDREN. Genuinely disgusting.”

We can only imagine the devastating results if cybercriminals abuse this type of weakness to defraud or extort parents with fabricated explicit content of their children. Tools for inserting real faces into AI-generated content are already widely available, and current safeguards appear unable to reliably prevent abuse.

Tips

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

Treat everything you see online—images, voices, text—as potentially AI-generated unless it can be independently verified. Such content is used not only to sway opinions, but also to solicit money, extract personal information, or create abusive material.


Smashing Security podcast #447: Grok the stalker, the Louvre heist, and Microsoft 365 mayhem

On this week's show we learn that AI really can be a stalker’s best friend, as we explore a strange tale that starts with a manatee-shaped mailbox on a millionaire's lawn and ends with Grok happily doxxing real people, mapping out stalking "strategies," and handing out revenge-porn tips. Then we go inside the Louvre heist, where thieves in hi-vis and a hire van waltzed off with the French crown jewels in broad daylight, exploiting our assumptions about what "looks normal" - the same kind of bias we’re now baking into security AIs. Plus, Graham chats with Rob Edmondson from CoreView about why misconfigurations and over-privileged accounts can make Microsoft 365 dangerously vulnerable. All this, and more, in episode 447 of the "Smashing Security" podcast with Graham Cluley, and special guest Jenny Radcliffe.