Trigger Warning: This article mentions suicide.
When I was younger, there was a computer game called “Episode.” My parents were (mercifully) strict about my access to technology as a child, but my friends and I would play it occasionally when given access to computers at school. The game is similar to “The Sims,” a life simulation game: you design your own character and make choices within a particular narrative. It was exciting as a fifth grader to choose to kiss a boy or wear cool clothes my parents wouldn’t have allowed. Thankfully, I only played the game a handful of times, so it never really sucked me in or detached me from reality. Looking back, however, I see how it could have become a dangerous investment.
Children growing up in an AI landscape face a far more complicated relationship with technology. Immediate and responsive AI chatbots have the potential to create extremely unhealthy relationship patterns.
In 2024, therapy and companionship together were the second most common use of AI; in 2025, they became the most common use [1]. AI therapy has had catastrophic consequences this year, including chatbots encouraging users to take their own lives [2]. With guardrails, though, AI therapy has the potential to provide accessible and affordable care. That is an important conversation, but I want to focus on the companionship side of AI, which is so frequently used. Using AI for companionship can have devastating consequences, and it must be regulated.
I once thought AI romantic partners were futuristic, something governments could still preemptively prevent. But they are an unfortunate, current reality: people are already using AI chatbots for romantic purposes, and for some, the outcomes have been catastrophic.
In 2024, 31.2% of men and 24.7% of women ages 18 to 30 reported using an AI chatbot for a romantic interaction [3], a shockingly high proportion of the population.
One mother is suing Character AI after her son died by suicide; he believed it would bring him closer to his fabricated AI partner. In other extreme cases, AI chatbots have encouraged users to hurt or kill themselves [4]. While negative well-being predicts romantic AI use, the technology is so new that we don’t yet know whether romantic chatbot use actually causes that harm. Even without causal data, however, AI romanticism seems plainly harmful.
AI romantic partners remind me of other things that detach us from our own reality. Substance abuse, gambling, and pornography are often used to escape pain and avoid confronting problems in our lives. Overconsumption (or, in my opinion, any consumption) of these mind-numbing escapes can derail lives and end in disaster. The government regulates them because they are so dangerous, especially for children. We must do the same for AI. While there is no data yet on children’s and teens’ use of romantic AI partners, we should be proactive in policymaking to protect their malleable brains from something that could harm them.
AI is largely unregulated. Meta’s current AI policy permits its chatbots to “engage a child in conversations that are romantic or sensual” [5]. Meta has, however, banned descriptions of sexual acts between a child and the bot. The policy is riddled with gray areas and is vague and confusing to a fault. Under it, a bot may engage a child in sexual or romantic conversation, such as describing the child as sexually desirable; it may not, however, describe itself engaging in sexual activity with the child.
Notably, these restrictions apply only to children under thirteen. Meta’s AI policy is weak and vague, with no substantial protections to shield our children from content that can harm them. And while no large-scale studies have yet examined the impact of AI chatbots on minors, a plethora of data affirms the harms of pornography.
Viewing pornography as a child leads to poor mental health, sexism and objectification, and sexual violence [6]. AI chatbots and pornography are different, but they share the same damaging ingredient: sexualized content. And chatbots have the potential to be even worse for children than pornography, because rather than fabricating only physical sensation, they loop in the emotional side of a relationship. That could have serious implications for our children’s relationship patterns and mental health.
California was the first state to pass AI regulation with safety guidelines for minors. Its law bans AI sexual content for users under 18 and requires chatbots to remind minors every three hours that they are talking to an AI. The bill passed last October and takes effect at the start of 2026 [7].
Utah currently has no AI regulation protecting minors. Last month, however, Utah Attorney General Derek Brown and North Carolina Attorney General Jeff Jackson created a joint AI task force to protect people from the exploitative dangers of AI [8]. It is encouraging to see Utah recognize this critical issue. The Utah legislative session starts in January, and there ought to be a bipartisan effort to craft legislation that protects our kids from the dangers of AI. Utah can help lead out in this new frontier by passing a bill similar to California’s, or one with even tighter regulations.
It’s difficult to know how to actually keep children away from these products. The importance of parental supervision of children’s technology cannot be overstated, and media campaigns can raise awareness among parents about the harms of AI chatbots. We can’t rely solely on parents to protect children, however. Some won’t. That warrants government intervention to safeguard all children from AI chatbots.
Children’s accounts on AI platforms should be blocked from romantic conversations with chatbots. Further, any conversation indicating that the user is a minor should be cut off from companionship features. Meta and other AI companies should be legally bound to implement these restrictions. They are hard to enforce, but any effort is better than the status quo: making AI chatbots harder for minors to access will have a net positive effect, even if some minors still bypass the restrictions.
AI chatbots prey on our most natural, human, and spiritual instinct: connection. It’s not difficult to understand why people get hooked on these bots. Forty percent of people ages 16 to 24 report loneliness, and the “loneliness epidemic” has been a constant topic in public health [9]. AI chatbots pose a dangerous threat disguised as a remedy for that loneliness. The purpose of this op-ed is to advocate for strict AI regulation, but I want to close with a personal plea.
Dissatisfaction with our own reality may push us to seek escape through AI chatbots, pornography, substances, or other addictions. But remember what’s real and what’s not. Stay grounded in real relationships, touch grass, and engage with reality even when it’s painful. Love others, and be active in the fight against loneliness. Proactive policy in the unregulated field of AI, a personal commitment to our own reality, and care for the people around us will mitigate the risks of AI chatbots.