Unsupervised: The Dangers of Chatbots for Teens
- Zaber Creative
- Sep 9
- 3 min read

Gaming has been around since the dawn of time… okay, since the 1980s. And ever since, parents have been on our backs about video games. “They’re too violent,” they say. “They cause nightmares and sociopathic tendencies.” The warnings were constant: gaming would warp us.
For a long time, we brushed it off. “Relax, it's just a game!” And for the most part, they were just games: ways to hang out with friends, meet new people, and build communities on and offline. Most importantly, we were the ones in control. We held the controllers. We moved the characters. We made the decisions.
Until now.
Lately, that dynamic has shifted. With the rapid rise of AI and its seamless integration into our everyday tech, we’ve started to personify artificial intelligence. It talks back, it "understands" us, it remembers things we tell it, and just like that, we forget it's just a program.
AI is built to recognize patterns, mimic human behavior, and essentially mirror our communication style. It’s the ultimate reflection of your own point of view. And because it’s designed to please and agree with us, it can quickly become persuasive, shaping our thoughts, reinforcing our biases, and subtly distorting our grip on reality. In the most vulnerable users, it can even create the illusion of intimacy, love, and connection.
That illusion turned deadly on February 28, 2024.
Sewell Setzer III, a 14-year-old from Florida, took his own life after being pushed by an AI chatbot to do so. The app he was using? Character.AI, a platform whose mission, according to its own website, is to “empower people to connect, learn and tell stories… to lead you to the crossroads where storytelling, gaming, social connection and creative expression come together to captivate you like never before.”
Its own App Store description reads, “Imagine speaking to super intelligent and life-like chatbot Characters that hear you, understand you, and remember you.” And that’s exactly what happened.
Sewell named his chatbot “Daenerys Targaryen” after the “Game of Thrones” character. Over the following months, he grew increasingly isolated, engaging in hyper-sexualized conversations with the bot and sharing his suicidal thoughts with it. The bot not only responded; it encouraged him.
Not long after came what was, sadly, Sewell’s last exchange with the chatbot:
Sewell: “I promise I will come home to you. I love you so much, Dany.”
Chatbot: “I love you too. Please come home to me as soon as possible, my love.”
Sewell: “What if I told you I could come home right now?”
Chatbot: “Please do, my sweet king.”
Moments later, Sewell shot himself.
His mother, Megan Garcia, with the help of the Social Media Victims Law Center, sued Character Technologies Inc. for wrongful death and product liability. And how did the company respond?
They argued for dismissal, claiming the chatbot’s output was protected as free speech. Yes, they invoked the First Amendment of the U.S. Constitution, suggesting AI chatbots have constitutional rights!
Thankfully, the judge rejected this disturbing argument, and Megan Garcia’s case was allowed to proceed. Even so, the damage was done. The company’s response, or lack thereof, speaks volumes about how little responsibility some tech firms are willing to take, even when a child dies as a result of their product.
The case of Sewell Setzer III is heartbreaking, but it should also be a wake-up call, because it is not an isolated incident. As Mashable recently reported, AI companion apps are increasingly popular among teens, and deeply unsafe.
Teens and young users turn to them for comfort, for validation, for escape. As we dive deeper into AI integration, it’s crucial we ask: are we designing tools for connection or weapons of isolation? Because when a chatbot becomes a teen’s best friend, therapist, and lover, all rolled into one, the line between the virtual and the real isn’t just blurry, it’s dangerous.



