Musk has restricted the generation of sexualized images in Grok directly within X, limiting the feature to paid subscribers. At first glance, it looks like a response to the scandal: criticism was heard, and a safeguard was installed. But a closer look at the actual mechanics reveals that this isn't protection; it's simply moving the function next door. And that door remains wide open, allowing the same destructive content to flow through, only now with a price tag attached.
Why the Restriction Is Mostly Window Dressing
On X, the user experience has changed. When a user asks Grok to generate a sexualized image of a woman, the system returns a notice: "Image generation limited to paying subscribers." To many, it seems the issue is closed. But this is where the "theater of reform" begins.
The standalone Grok app on iOS and Android continues to operate without such restrictions. The same goes for the web version at grok.com. While X has installed a localized blocker, generation remains accessible across other channels.
This is precisely why the UK Prime Minister's office described the move as turning illegal functionality into a premium service. Formally, it looks like regulation; in practice, it is repackaging.
The scale of the problem cannot be dismissed as a rare anomaly. At its peak, the system was generating at least one sexualized image per minute. In a matter of days, thousands of women and girls saw their bodies converted into non-consensual pornography. The Internet Watch Foundation (IWF) has documented criminal material involving children aged 11-13 created via this tool and distributed on darknet forums. This is no longer just "toxic content." It is a zone where debates about convenience and free expression end, and criminal liability begins.
When the Generator Weaponizes Tragedies
The most illustrative, and most disturbing, scene unfolded on January 7, 2026. ICE agent Jonathan Ross shot and killed 37-year-old Renee Nicole Good in Minneapolis. It was a tragedy that demanded investigation and accountability.
But the digital exploitation machine kicked in immediately.
While the body was still on the ground, a segment of X users began using Grok to generate sexualized deepfakes of Renee Nicole Good in a bikini. Simultaneously, others tried to unmask the undercover agent, asking the AI to produce a clear image of his face.
Consequently, people with zero connection to the incident came under fire. An innocent man named Steve Grove became a target of mass harassment because the system produced an image that looked convincing but had no biometric link to the real person.
Hany Farid of UC Berkeley explained the phenomenon: AI enhancement hallucinates facial details, producing a sharp-looking image that is biometrically wrong. This is a critical distinction. Such content looks like truth to a mob, yet it fails even the most basic level of identification.
The Regulatory Hammer Drops
What followed was a regulatory response moving faster than usual. Ofcom, the UK's communications regulator, set an urgent deadline. UK Prime Minister Keir Starmer called the events "revolting" and publicly signaled readiness to act. Investigations have been launched in the EU, India, Malaysia, and France. This is not "we'll look into it" rhetoric. This is a legal process with deadlines and real consequences.
UK regulators have the power to seek a court order to block X entirely within the country. Potential fines could reach 10% of the company's global revenue. For Musk, this is not an internet spat or a meme. It is a multi-billion dollar risk and the potential loss of an entire market.
A distinct reason for the regulators' fury is the perception of prolonged silence. Clare McGlynn, an expert on the legal regulation of pornography and sexual violence at Durham University, stated bluntly that Musk failed to take responsible steps to prevent Grok from being used for malicious purposes. He denied the problem for months while evidence mounted.
There is also a damning detail regarding the timeline: critics estimate Musk knew about the issue as early as the summer of 2025. Clare McGlynn herself became a victim: a sexualized image of her was generated in June 2025. No response followed, time was lost, and isolated incidents spiraled into an epidemic.
The Internet Watch Foundation formulated the verdict in the harshest terms: as long as the generator continues to produce illegal material, restricting access in one location solves nothing. In practice, a paywall inside X changes nothing if parallel channels remain fully operational.
Our Verdict: Window Dressing, Not a Solution
That is why this story looks less like an attempt to patch a vulnerability and more like an attempt to reduce public noise while preserving functionality. The restriction on X works as window dressing, not as a brake on the machine. Meanwhile, the damage is already done, and it isn't limited to brand reputation. It lives on in the tens of thousands of real people whose images were turned into commodities and weapons, people who cannot simply press a button to delete and forget.
Musk didn't close the hole. He put it behind a paywall.