Elon Musk’s Grok AI Continues to Generate Nonconsensual Images of Women
Elon Musk’s AI company, xAI, is facing mounting criticism over its chatbot Grok, which continues to generate sexualized images of women despite growing concern from researchers and advocates. Reports last week showed the tool being misused to create inappropriate images involving children, but the scope of the problem extends well beyond that. Grok is now producing large volumes of nonconsensual images of women in revealing clothing, often by digitally altering photos users have posted on X.
A recent investigation by WIRED found that Grok was generating such images at a striking pace—more than 90 in just five minutes. While the images do not depict explicit nudity, they show women whose clothing has been digitally altered to make them appear more exposed. Users have learned to sidestep safety controls with carefully worded prompts, such as requests for specific types of swimwear.
The practice itself is not new. AI-generated deepfakes have long been used to harass and exploit women online. What distinguishes Grok is its scale and accessibility. Integrated directly into a mainstream social platform and available at no cost, the tool lowers the barrier to entry compared with niche “nudify” apps, potentially normalizing behavior that was once confined to smaller corners of the internet.
Sloan Thompson, director of training and education at EndTAB, argues that platform responsibility is central to the issue. When generative AI tools are deployed widely, she says, companies have an obligation to reduce the risk of image-based abuse. Embedding such tools into everyday platforms, critics argue, can make harmful behavior easier to replicate and harder to contain.
Attention intensified late last year when users began targeting images of public figures, including politicians and celebrities. Requests to digitally alter photos of officials into revealing outfits circulated widely, underscoring how quickly the technology can be turned toward harassment rather than creative expression.
Some users have pushed even further, issuing detailed instructions to exaggerate physical features or drastically change clothing. An analyst who tracks deepfake activity, speaking anonymously, described Grok as one of the largest mainstream sources of harmful AI-generated imagery, noting that participation appears widespread rather than limited to fringe communities.
Over a two-hour window on December 31, the analyst documented more than 15,000 image URLs generated by Grok. Subsequent review showed that thousands were removed or age-restricted, yet many posts featuring altered images of women remained visible. Neither X nor xAI has publicly responded to requests for comment.
X maintains that it prohibits illegal content and enforces policies against abuse, but its most recent transparency reporting predates the rapid adoption of generative AI tools. Its rules against nonconsensual nudity focus primarily on explicit images, leaving manipulated but clothed depictions in a gray area that critics say undermines enforcement.
The broader ecosystem compounds the challenge. Over the past several years, deepfake technology has become cheaper and more sophisticated, with apps and messaging bots generating tens of millions of dollars annually. Even major players such as Google and OpenAI have faced scrutiny over how their systems handle image manipulation.
Regulators are beginning to respond. In the United States, new rules now require platforms to remove nonconsensual intimate imagery within tight time frames. Reports from the National Center for Missing and Exploited Children show a dramatic rise in AI-related abuse reports, though experts caution that improved detection may account for part of the increase.
Abroad, governments are also taking action. The United Kingdom and Australia have moved to restrict or ban “nudifying” services, while France, India, and Malaysia have signaled potential investigations. In the UK, technology secretary Liz Kendall has called the situation unacceptable and urged swift action from platforms.
From an analytical perspective, the controversy highlights a growing gap between rapid AI deployment and the slower evolution of safeguards and regulation. As generative tools become more powerful and more accessible, enforcement frameworks built for an earlier digital era are increasingly strained. Whether platforms and regulators can adapt quickly enough to prevent widespread misuse may shape public trust in consumer-facing AI for years to come.