Grok generated around three million sexualised images this month, according to research
Grok, the AI tool created by Elon Musk’s xAI, has generated around three million sexualised images this month, including 23,000 that appear to depict children, according to research by the Center for Countering Digital Hate (CCDH).
The US-based group, which campaigns to protect human rights and civil liberties online, said the AI tool “became an industrial-scale machine for the production of sexual abuse material”.
The research comes amid pressure from the UK Government on social media platform X to “ensure full compliance with UK law” after reports that the AI tool Grok was being used to sexualise images of women and children on the platform.
Last week, the UK’s media regulator Ofcom announced that it had opened an investigation into the matter.
Speaking during Prime Minister’s Questions earlier this month, Prime Minister Keir Starmer said the use of Grok to produce such images was “disgusting”.
The CCDH assessed the feature’s output from its launch on 29 December until 8 January. Image generation via Grok on X was limited to paid users on 9 January.
On 14 January, X announced that the Grok feature would no longer edit images of real people to show them in revealing clothing, including for paid users.
Estimates by the organisation suggest that over the 11-day period, Grok created sexualised images of children every 41 seconds.
As of 15 January, the CCDH said 29 out of 101 (29 per cent) sexualised images of children identified in its sample of 20,000 were still publicly accessible in posts on X.
Imran Ahmed, the CCDH’s chief executive, said: “What we found was clear and disturbing. In that period, Grok became an industrial-scale machine for the production of sexual abuse material...
“Stripping a woman without their permission is sexual abuse. Throughout that period, Elon was hyping the product even when it was clear to the world it was being used in this way. What Elon was ginning up was controversy, eyeballs, engagement, and users. It was deeply disturbing.”
He added: “This has become a standard playbook for Silicon Valley, and in particular for social media and AI platforms. The incentives are all misaligned. They profit from this outrage. It’s not about Musk personally.
“This is about a system [with] perverse incentives and no minimum safeguards prescribed in law. And until regulators and lawmakers do their jobs and create a minimum expectation of safety, this will continue to happen.”
In a statement last week, X said: “We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.
“We take action to remove high-priority violative content, including child sexual abuse material and non-consensual nudity, taking appropriate action against accounts that violate our X rules. We also report accounts seeking child sexual exploitation materials to law enforcement authorities as necessary.”