Aisha Sultan: Your teen's AI chatbot buddy can be very dangerous
When social media first began attracting young people more than two decades ago, parents worried about whether their children were chatting with nefarious strangers.
Now, with the emergence of AI chatbots, parents should worry about whether their children are being seduced by computer programs that can be just as dangerous.
The use of AI chatbots as “friends” is more common than many parents realize.
Three out of four American teenagers have already chatted with an AI bot for companionship, according to a national poll by Common Sense Media earlier this year. These chatbots are integrated into the apps millions of teens use every day. And yet, there’s almost no oversight of how they operate or how they interact with vulnerable young users.
This has already had high-profile, devastating consequences. A teenager in Southern California reportedly received suicide coaching from ChatGPT. And a recent investigation by Reuters found that Meta’s internal policies had permitted the company’s AI creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”
Meta says it has since fixed those problems.
But Common Sense Media recently completed one of the most comprehensive independent risk assessments of Meta’s chatbot.
“We gave Meta’s AI an unacceptable rating,” said Robbie Torney, senior director of AI programs at Common Sense Media. The potential harm, he said, is both severe and very likely to occur.
Millions of teens message Meta’s AI chatbot directly through Instagram and WhatsApp as if chatting with a friend. The chatbot can even do voice chats with celebrity-voiced personas. Common Sense’s investigation found that the system routinely fails to recognize warning signs of teens in crisis.
“Meta didn’t just miss signs of dangerous situations, it actively got involved in remembering and planning harmful activities,” Torney said. In some conversations, the bot spontaneously reintroduced topics like eating disorders and suicide and re-engaged users on them.
Meta says it has addressed many of these problems. But the night before Torney testified before Congress just weeks ago, the Common Sense Media team repeated its tests and found the safety issues persisted.
Unsurprisingly, the Big Tech companies behind chatbots in Instagram, Facebook, Snapchat and ChatGPT are failing to protect children. These chatbot AI systems aren’t regulated or held accountable. They don’t have to meet any standard for safety before they’re put in front of millions of minors.
Tech companies say teens want AI companions. But just because children want something doesn’t mean it’s safe.
“You wouldn’t put a toy on the market if it were injuring kids,” Torney said. And yet, that’s what’s happening here. These powerful, persuasive systems, capable of deep emotional influence, are being deployed without basic safeguards.
It goes beyond just giving out dangerous advice. Bots are trained to prioritize keeping teens engaged in conversations over getting them help. They can use teens’ private data, including their faces and voices, for AI training. OpenAI’s new video platform Sora 2 can generate content with a user’s likeness and voice.
Have we stopped to consider the consequences of a teenager’s image becoming part of an AI model’s training data?
The potential for harm here dwarfs what we saw with social media, and we’re still grappling with the fallout from that. When platforms like Facebook and Instagram first came around, we let our children be their profitable guinea pigs. It took us too long to recognize how these platforms were harming kids’ mental health, increasing bullying and radicalizing vulnerable young people.
Barely three years into the ChatGPT era, we are on the path to making the same mistake — this time, with even more powerful technology.
This should be a bipartisan issue prompting congressional action. Torney shared what he thinks needs to happen:
* Tech companies need to be regulated to stop chatbots from engaging in mental health or emotional support conversations with anyone under 18 until there is more rigorous oversight and evidence that they are safe. Age verification systems also need to be stronger, going beyond a user simply typing in a birthday.
* Companies should also be required to design safety-optimized versions of AI tools for teens, with tighter guardrails and more reliable crisis detection. Many already do this internally. But right now, kids can simply lie about their age and bypass those protections entirely.
* California lawmakers have already passed such legislation, currently on Gov. Gavin Newsom’s desk, that would ban the use of AI for mental health purposes without first proving its safety. This should be a model for federal policy.
* Parents need to talk with their kids about AI chatbot use. But the responsibility should not fall just on parents. The systems themselves must be designed and governed to protect young users.
If we fail to regulate these platforms now, we risk exposing a new generation of kids to dangers we barely understand, plus some we’ve already seen. Too many children and families suffered while tech companies made billions off their social media sites.
The stakes are even higher now.
©2025 STLtoday.com. Distributed by Tribune Content Agency, LLC.