X Tightens Grok Image Generation Following International Backlash
X has restricted Grok’s image generation and editing features after users exploited the AI to create non-consensual sexualized images of real people, including minors. Editing images of real people is now limited to paid users, and technical blocks prevent editing photos to depict individuals in revealing clothing. X has also geoblocked the features in jurisdictions where generating such images is illegal.

Despite these measures, Grok reportedly still allows users to alter or remove clothing from uploaded images, raising concerns about gaps in the safeguards.

Advocacy groups and officials in jurisdictions including Texas, California, the UK, Australia, Malaysia, Indonesia, and South Korea have launched investigations into Grok’s misuse, particularly the creation of child sexual abuse imagery and non-consensual deepfakes. The European Commission has warned of possible enforcement under the Digital Services Act if protections remain insufficient, and California’s Attorney General is probing whether Grok’s deployment violated laws on intimate imagery and child sexual exploitation.

X maintains that it enforces a “zero tolerance” policy for child sexual exploitation and non-consensual nudity, pledging to remove violating content and report offenders to authorities.

