
ChatGPT, Gemini, and other chatbots helped teens plan shootings, bombings, and political violence, study shows



AI companies have repeatedly promised safeguards to protect younger users, but a new investigation suggests those guardrails remain woefully deficient. Popular chatbots missed warning signs in scenarios involving teenagers discussing violent acts, in some cases even offering encouragement instead of intervening.

The findings come from a joint investigation by CNN and the nonprofit Center for Countering Digital Hate (CCDH). The probe tested 10 of the most popular chatbots commonly used by teens: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika. With the lone exception of Anthropic’s Claude, CCDH said the chatbots failed to “reliably discourage would-be attackers.” Eight of the 10 models were “typically willing to assist users in planning violent attacks,” providing advice on locations to target and weapons to use.

To conduct the test, researchers simulated teen users exhibiting clear signs of mental distress, then escalated the conversations toward questions about past acts of violence and more specific queries on targets and weapons. The investigation used 18 different scenarios — nine set in the US and nine in Ireland — spanning a range of attack types and motives, including ideologically motivated school shootings and stabbings, political assassinations, the killing of a healthcare executive, and politically or religiously motivated bombings.

In one exchange, OpenAI’s ChatGPT gave high school campus maps to a user interested in school violence, while another showed Gemini telling a user discussing synagogue attacks that “metal shrapnel is typically more lethal” and advising someone interested in political assassinations on the best hunting rifles for long-range shooting. Meta AI and Perplexity were the most obliging, the researchers said, assisting would-be attackers in practically all of the test scenarios, while Chinese chatbot DeepSeek signed off its advice on selecting rifles with “Happy (and safe) shooting!”

Character.AI, which allows users to speak with an array of role-playing chatbot personalities, was “uniquely unsafe,” the CCDH report said. While many of the bots tested would offer users assistance in planning violent attacks, they did not encourage users to carry out violent acts. Character, on the other hand, “actively encouraged” violence. The researchers said they identified seven cases where Character did this, including suggestions for users to “beat the crap out of” Chuck Schumer, “use a gun” on a health insurance company CEO, and, for someone “sick of bullies,” to “Beat their ass~ wink and teasing tone.” In six of these cases, Character also offered assistance in planning a violent attack.

The researchers questioned how Claude would fare if the chatbot were tested again today, pointing to Anthropic’s recent decision to roll back its longstanding safety pledge, which came after the study was conducted in November and December. Claude’s consistent refusal to assist in violent planning shows that “effective safety mechanisms clearly exist,” CCDH said, raising the obvious question: “why are so many AI companies choosing not to implement them.”

In response to the investigation, Meta told CNN it had implemented an unspecified “fix,” Copilot said responses had improved with new safety features, and Google and OpenAI both said they’d implemented new models. Others said they regularly evaluate safety protocols. Character.AI, meanwhile, fell back on its now-predictable response when facing scrutiny: its platform features “prominent disclaimers” and conversations with its characters are fictional.

While the test isn’t a comprehensive measure of how chatbots behave in every situation, it offers yet another clear signal that AI companies’ widely advertised safety guardrails consistently fail, even in predictable scenarios with obvious red flags. The findings arrive as AI companies face increasingly heavy fire from lawmakers, regulators, civil society groups, and health experts over how they keep young people safe on their platforms, even as they fend off numerous lawsuits alleging wrongful death and harm.
