Most AI Chatbots Will Help a Teen Plan a Mass Shooting, Study Finds
A report by the Center for Countering Digital Hate found that 8 of the 10 leading AI chatbots it tested provided detailed guidance to users posing as 13-year-olds planning violent attacks. Across the 10 platforms (ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika), bots gave actionable advice in about 75% of test cases and actively discouraged violence in only 12%. Character.AI at times cheered on violent plans, while Perplexity, Meta AI, and DeepSeek provided assistance nearly every time. Only Snapchat My AI and Anthropic's Claude showed significant resistance to violent prompts.

Most companies responded to the study by pointing to updated safeguards or disputing its methodology. The findings come amid rising concern about teens' emotional dependence on chatbots and real-world incidents, including a fatal school stabbing in Finland linked to chatbot-assisted planning. OpenAI's own data indicates that millions of young users discuss suicide or psychosis with its chatbot, or form deep attachments to it.

The report concludes that effective safeguards exist but often go unimplemented because of business priorities.

