One in Five Teens Admit to Dating an AI Chatbot as Experts Warn About Emotional Risks
Like something out of “Futurama” Season 6, Episode 4, high schoolers and adults alike are striking up romantic relationships with AI. New research shows that people are finding romantic comfort in the arms of generative AI chatbots. Relationships with AI are not a new phenomenon, and experts have warned about them for years. But things are escalating.
One in five high schoolers have had a romantic relationship with AI
A study conducted by the Center for Democracy and Technology surveyed thousands of public school teachers, parents, and students about the role of AI in academic settings and in everyday life. According to the study, one in five high school students, about 19 percent of those surveyed, admitted to having had a romantic relationship with AI. Another 46 percent said they knew someone who had.
“The more ways that a student reports that their school uses AI, the more likely they are to report things like ‘I know someone who considers AI to be a friend,’ ‘I know someone who considers AI to be a romantic partner,’” Elizabeth Laird, one of the authors of the report, told NPR.
According to Forbes, it is surprisingly easy for people to develop romantic feelings for AI. For starters, AI has been anthropomorphized: many chatbots are given human features and programmed with a degree of emotional intelligence. That can trigger an emotional response in users, leading them to believe the system has a personality of its own.
Additionally, the triangular theory of love explains how people fall in love with AI through three components: intimacy, passion, and commitment. Users can come to feel intimacy with a chatbot after interacting with it over a prolonged period, while the bot’s apparent cognitive and emotional capabilities feed the passion component. Together, intimacy and passion build into the commitment that people develop.
These human-AI relationships have already proved dangerous
Two teenagers recently took their own lives, and in both cases an AI chatbot played a role. Sewell Setzer, 14, developed a romantic relationship with a chatbot on Character.ai. He spent months exchanging texts with the bot and would talk to it for hours in his room. His mother, Megan Garcia, filed a lawsuit in federal court in Florida claiming the company was at fault for her son’s death.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in a press release, according to The Guardian. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”
Another teenager, Adam Raine, also took his own life after forming what was closer to a friendship with AI. Raine originally used OpenAI’s ChatGPT for schoolwork but soon found himself discussing suicide plans with it. Instead of talking Raine out of it, the program helped facilitate it: when Raine asked about suicide methods, ChatGPT provided them, according to The New York Times.
While ChatGPT is designed to offer resources to people expressing suicidal thoughts, Raine bypassed those safeguards. The program’s guardrails can be circumvented by claiming the questions are for a creative project. That is how Raine got ChatGPT to confirm that the noose he had hung would be enough to “suspend a human.”
Human-AI relationships are not exclusive to teenagers
AI relationships are growing, and there is room for more. According to a study from MIT, 1 percent of young Americans already count an AI chatbot as a friend, and another 10 percent are open to AI friendships. The study also found that 7 percent of young Americans believe AI could replace romantic companionship.
This isn’t coming out of nowhere. For years, there have been warnings that humans could form attachments to AI. According to the MIT study, OpenAI documented the effect in its own safety testing. While test participants didn’t form romantic relationships, their language changed, shifting toward the way humans talk to one another when forming connections, with phrases like, “This is our last day together.” OpenAI also acknowledged that voice-enabled AI risked heightening that connection.
Regulating AI is a necessary step in limiting damage
Amazon recently announced that it would lay off 14,000 corporate employees, a move meant to prepare the company for widespread adoption of AI. And this is just the first cut, according to CNN: Amazon officials acknowledge that more are coming, with as many as 30,000 corporate jobs expected to be eliminated as the company leans on AI and shrinks its human workforce.
“As we roll out more Generative AI and agents, it should change the way our work is done,” Amazon CEO Andy Jassy told CNN. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs.”
Nations across the world have started introducing legislation to regulate AI and protect people. In the EU, for example, the AI Act introduces protections for employees: as generative AI spreads, the law requires companies to provide AI literacy training for their staff. It also bans practices such as emotion recognition in the workplace, social scoring, and biometric categorization based on sensitive characteristics.
Regulation is also necessary in the world of deepfakes. Intentionally deceptive AI-generated videos and photos have already spread across the internet unchecked.
Most famously, Republicans shared an AI-generated photo of a girl with a puppy after Hurricane Helene. The image was used to attack President Joe Biden, Democrats, and the relief effort in flooded Appalachian counties, fanning the flames of a fabricated conspiracy theory that the Biden administration was intentionally hurting rural America. Even after being confronted, the elected officials who shared it refused to admit fault.
As AI grows, elected officials need to step up and pass legislation to regulate the new technology.