UNICEF Calls on Governments to Criminalize AI-Generated Child Abuse Material

Summary

UNICEF has urgently called on governments to criminalize AI-generated child sexual abuse material (CSAM), citing research showing that 1.2 million children worldwide had their images manipulated into sexually explicit deepfakes in the past year. A joint study by UNICEF, ECPAT International, and INTERPOL found that in some countries, as many as one in 25 children has been victimized in this way, and children surveyed expressed significant concern about becoming targets of AI-based image manipulation. UNICEF stressed that AI-generated sexualized images of children are unequivocally CSAM, regardless of whether the child was involved in or aware of their creation.

Reports highlighted a surge in such abuse, including a French investigation into X's AI chatbot Grok, which allegedly produced over 23,000 sexualized images of children in just 11 days. Other findings include a tenfold rise in AI-linked sexual offenses in South Korea and thousands of suspected AI-generated abuse images circulating on dark-web forums.

UNICEF urged governments to update laws to explicitly cover AI-generated content and called on AI developers and tech companies to implement robust safety measures, conduct child rights assessments, and ensure pre-release safety testing of their models.