Weeks after Elon Musk’s X was flooded with AI-generated images depicting people, including children, in sexualized ways without their consent, California is investigating how it happened. The state’s Attorney General Rob Bonta announced Wednesday that he is opening a probe into the situation to determine whether X and xAI, Musk’s AI company and the maker of Grok, the chatbot used to generate the pornographic images, broke the law.
“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet,” Bonta said in a statement. He also urged xAI to take “immediate action” to ensure that kind of content can’t be created and spread.
Bonta appears to have considerable public support for the investigation. A recent YouGov poll found that a whopping 97% of respondents said AI tools should not be allowed to generate sexually explicit content of children, and 96% said those tools should not be able to “undress” minors in photos.
The investigation will focus on the trend that cropped up on X over the winter holiday, in which users prompted Grok on the platform to modify images of people to show them in various states of undress. The trend grew big enough that, according to AI content analysis firm Copyleaks, Grok was producing a nonconsensually sexualized image every minute. Some of those images included children, whom users prompted Grok to undress and depict in underwear or bikinis. Often, users asked Grok to add “donut glaze” to the faces of the subjects of the images.
Musk, the CEO of both X, the company where the images were being shared, and xAI, the company that makes the AI model used to generate them, has opted to obfuscate or claim ignorance of the situation. In a post made prior to California’s investigation being announced, Musk said, “I not aware of any naked underage images generated by Grok. Literally zero.”
The narrowness of that statement does a lot of heavy lifting: he says he is unaware of any “naked underage images.” That doesn’t refute the existence of naked images, of images of undressed underage people, or of people being depicted in sexualized situations. Nor does it address the fact that many of those images were nonconsensual, generated without the permission of the person depicted. In numerous cases, the imagery has been used directly to harass accounts on X.
To the extent that Musk was willing to admit such a problem is even possible, he said it is the fault of the users, not the AI model or the platform spreading the content. “Obviously, Grok doesn’t spontaneously generate images; it does so only based on user requests. When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” he said. “There may be cases when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”
That is about in line with the sparse response X has offered to the situation. In a post from X Safety, the company said, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” but took no responsibility for enabling it. For what it’s worth, Musk has also mockingly reposted content created as part of the trend, including AI-generated images of a toaster and a rocket in a bikini.
California is the first state in the nation to launch an investigation into the situation. Authorities in other countries, including France, Ireland, the United Kingdom, and India, have all begun looking into the nonconsensual sexual images generated by Grok and may bring charges against X and xAI. The Take It Down Act, which was passed into law last year, doesn’t require platforms like X to create notice-and-removal systems for nonconsensual images until May 19, 2026.
