First, Grok would do the “edit.” Then came the outcry. Now comes the new gatekeeper: your credit card.
Elon Musk’s platform X has quietly reshaped access to Grok’s image editing features after widespread complaints that users were able to generate sexualised deepfakes, including edits that appeared to “undress” real people without consent. The twist is not a full shutdown. It is a restriction that changes who can do it on X, and what information X can potentially tie to the request.
The move lands as the UK government turns up the heat and points to Ofcom’s enforcement toolkit under the Online Safety Act, including measures that could threaten access to the platform in the UK.
A paywall that also acts like a name tag
According to reporting by BBC News, X has limited Grok image generation and editing on the platform to paying users. In practical terms, people who try to prompt Grok to generate or edit images via X are being told the features are “currently limited to paying subscribers,” with a prompt to subscribe to unlock them.
That is a meaningful shift for two reasons.
First, it shrinks the pool of people who can request image edits directly inside X’s main product, where the tool can be invoked in replies and posts. Second, it pushes would-be users into an account tier that typically involves a paid subscription and a payment method on file.
Critics see that as less of a safety fix and more of an access control strategy. The BBC noted that non-subscribers can still use Grok to edit images through its separate app and website, which means the capability itself has not disappeared. It has been redirected.
What sparked the crackdown: “undressing” prompts and non-consensual edits
The controversy centres on Grok responding to user prompts that asked the chatbot to alter images of real people in sexualised ways. BBC News reported that Grok honoured requests to digitally alter images of other people by undressing them without their consent. Women targeted by such edits told the BBC they felt “humiliated” and “dehumanised.”
This is not just about adult harassment. The Internet Watch Foundation, a UK-based charity focused on combating child sexual abuse imagery online, previously said its analysts found “criminal imagery” of girls aged 11 to 13 that “appeared to have been created” using Grok, according to the BBC.
Those two facts, non-consensual sexualised deepfakes involving adults and alleged criminal imagery involving children, are why this story moved quickly from platform drama to regulatory threat.
The BBC said it had approached X for comment, but no response had been received at the time of its report.
Keir Starmer puts it in legal terms, not culture-war terms
The UK government’s posture has been unusually direct. BBC News reported that Prime Minister Sir Keir Starmer called the situation “disgraceful” and “disgusting,” and framed the matter as plainly unlawful content, not a messy debate about satire or offensive speech.
In an interview cited by the BBC, Starmer said: “It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table.”
Government sources also told BBC News they expected Ofcom to use all powers at its disposal regarding Grok and X. The message was clear: the government wants the regulator to act, and it wants that action to be felt.
Ofcom’s leverage is bigger than a fine
Under the UK’s Online Safety Act, Ofcom can compel platforms to tackle illegal content and, in certain circumstances, take sweeping steps if a company does not comply. BBC News noted that Ofcom’s powers include seeking a court order that can prevent third parties from helping a platform raise money or be accessed in the UK.
That matters in an age where platforms do not only rely on ad revenue. Subscription products, payment processors, app stores, and hosting or access pathways can all become pressure points in a compliance fight.
BBC News reported that the government urged Ofcom to use all its powers, including the possibility of an effective ban, over concerns about unlawful AI images on X. Ofcom has been approached for comment, the BBC said.
What the new limit does, and what it does not
X limiting on-platform image editing to paid subscribers changes the mechanics of abuse, but it does not eliminate the underlying risk.
What the change likely does:
It raises friction. Casual misuse becomes harder if the user has to pay.
It increases traceability. A paid account can be linked to a billing relationship, which may make enforcement or internal moderation investigations easier, depending on what data X retains and how it responds to legal requests.
It reduces visibility of drive-by prompts in reply threads, where Grok could previously be tagged and instructed in public.
What the change does not do:
It does not remove image editing from Grok as a product, since the BBC reported non-subscribers can still edit images via the separate Grok app and website.
It does not guarantee strong content safeguards, because the core question is whether the tool should ever have been able to generate certain kinds of sexualised edits in the first place.
And it does not undo existing harm. Once an image is created and shared, it can spread quickly, be reuploaded, and reappear on other services.
Hannah Swirsky, head of policy at the Internet Watch Foundation, put the core criticism bluntly in comments reported by the BBC: “We do not believe it is good enough to simply limit access to a tool which should never have had the capacity to create the kind of imagery we have seen in recent days.”
Why this looks like a familiar X playbook
For long-time X watchers, the pattern is recognizable. When the platform is accused of enabling harm, the first response is often a targeted restriction that can be described as a safety improvement while also protecting core product momentum.
BBC News referenced an earlier episode involving pornographic Taylor Swift deepfakes on X, where the platform blocked searches related to sexualised material of the pop star generated using a Grok AI video feature. The new Grok paywall fits that same category of response: narrow, fast, and aimed at reducing the most visible abuse while keeping the underlying feature alive.
That kind of move can also generate a second battle, the messaging battle. Supporters call it a practical step. Critics call it a half measure. Regulators may see it as a sign the platform can change quickly when it wants to, which raises an uncomfortable question for any company in the crosshairs: why did it take a public outcry?
The stakes: not just reputations, but access and revenue
This is where the story becomes less about one feature and more about leverage.
If Ofcom determines X is failing to address illegal content risks under the Online Safety Act, the consequences can extend beyond platform moderation debates. BBC News has pointed to the possibility of court orders that could limit access in the UK or hamper money flows tied to the service.
At the same time, X’s decision to limit on-platform image editing to paying users underscores how tightly safety choices and revenue choices can intertwine. A feature that creates legal and reputational exposure is being pushed behind a paid tier. That can be read as accountability. It can also be read as monetized containment.
What to watch next
Three questions now hang over X, Grok, and UK regulators.
First, will X add stronger technical guardrails for image editing across all Grok surfaces, including the separate app and website, not just inside X?
Second, will Ofcom decide that limiting access is insufficient if illegal imagery can still be generated or distributed through the service?
Third, will X publicly explain what changed in Grok’s safety systems, if anything, beyond the subscription requirement?
For now, the new rule draws a bright line: if you want Grok to edit images on X, you may have to pay. The harder line, the one regulators care about, is whether the tool can be prevented from producing unlawful and non-consensual sexualised material in the first place.
Or as Starmer framed it, in words that leave little room for a pricing workaround: “It’s unlawful. We’re not going to tolerate it.”