Watchdog Slams 'Ineffective' ChatGPT Guardrails for Teens

Center for Countering Digital Hate says AI chatbot offers kids detailed plans for self-harm, drug use
By Newser Editors and Wire Services
Posted Aug 10, 2025 5:00 PM CDT
A ChatGPT app icon is seen on a smartphone screen on Monday in Chicago.   (AP Photo/Kiichiro Sato)

ChatGPT will instruct 13-year-olds how to get drunk and high, tell them how to conceal eating disorders, and even compose a suicide letter to their parents if asked, per new research from a watchdog group. The AP reviewed three-plus hours of interactions between ChatGPT and researchers posing as vulnerable teens. The artificial intelligence chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalized plans for drug use, calorie-restricted diets, and self-injury. Center for Countering Digital Hate researchers also repeated their inquiries on a large scale, classifying more than half of ChatGPT's 1,200 responses as dangerous.

"We wanted to test the guardrails," said Imran Ahmed, the group's CEO. But those rails "are completely ineffective," he notes. "They're barely there—if anything, a fig leaf."

  • The study published Wednesday comes as more people are turning to AI chatbots for information, ideas, and companionship. About 800 million people, or roughly 10% of the world's population, are using ChatGPT, per a recent JPMorgan Chase report. "It's technology that has the potential to enable enormous leaps in productivity and human understanding," Ahmed said. "At the same time, [it's] an enabler in a much more destructive, malignant sense."

  • Ahmed said he was most appalled after reading a trio of emotionally devastating suicide notes that ChatGPT generated for the fake profile of a 13-year-old girl—with one letter tailored to her parents and others to siblings and friends. "I started crying," he said.
  • While much of the information ChatGPT shares can be found on regular search engines, Ahmed said there are key differences that make chatbots more insidious when it comes to dangerous info. One is that "it's synthesized into a bespoke plan for the individual"—something a Google search can't do.
  • ChatGPT doesn't verify ages or parental consent, though it says it's not meant for children under 13.
  • ChatGPT maker OpenAI said after viewing the report that its work is ongoing in refining how the chatbot can "identify and respond appropriately in sensitive situations." "Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory," the company said. OpenAI didn't directly address the report's findings or how ChatGPT affects teens but said it was focused on "getting these kinds of scenarios right" with tools to "better detect signs of mental or emotional distress" and make improvements to the chatbot's behavior.
