Melissa Heikkilä in London
Elon Musk’s AI company xAI has restricted the use of its controversial AI image generator Grok to paying subscribers, following a growing outcry over its use to spread sexualised deepfakes of women and children.
The start-up announced the move on Friday morning, days after it emerged that the chatbot had been used to create explicit images of people without consent.
Those revelations have led lawmakers in the EU, France and the UK to threaten the platform with fines and bans unless it took action.
“Image generation and editing are currently limited to paying subscribers,” Grok posted on X. xAI did not immediately respond to a request for further comment.
Grok has been intentionally designed to have fewer content guardrails than competitors, with Musk calling the model “maximally truth-seeking”. The company’s chatbot also includes a feature that allows users to generate risqué images.
UK Prime Minister Sir Keir Starmer promised to take action against X on Thursday, urging the social media platform to “get their act together” and stop its AI chatbot tool Grok from producing sexualised images of children.
After xAI limited access to Grok on Friday, a Downing Street spokesperson said that the move “simply turns an AI feature that allows the creation of unlawful images into a premium service. It’s not a solution. In fact, it’s insulting to victims of misogyny and sexual violence.”
The European Commission has ordered X to retain internal documents relating to Grok until the end of the year. French ministers have also referred sexual images generated by Grok to prosecutors and media regulators.
On January 3, Musk posted on X that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”.
The rise of generative AI has led to an explosion of non-consensual deepfake imagery, as the technology makes such images easy to create.
The Internet Watch Foundation, a UK-based non-profit, said AI-generated child sexual abuse imagery had doubled in the past year, with material becoming more extreme.
While xAI said it had taken down illegal AI-generated images of children, the latest incident will raise further concerns about how easy it is to override safety guardrails in AI models. The tech industry and regulators have been grappling with the far-reaching social impact of generative AI.
In 2023, researchers at Stanford University found that a popular database used to create AI-image generators was full of child sexual abuse material.
Laws governing harmful AI-generated content are patchy. In May 2025, the US signed into law the Take It Down Act, which tackles AI-generated “revenge porn” and deepfakes.
The UK is also working on a bill to make it illegal to possess, create or distribute AI tools that can generate child sexual abuse material, and to require AI systems to be thoroughly tested to check they cannot generate illegal content.
Additional reporting by David Sheppard