
California has taken on artificial intelligence, specifically AI companions. On Oct. 13, Gov. Gavin Newsom signed Senate Bill 243, making the Golden State the first in the United States to regulate “AI companion” chatbots – the digital friends and virtual partners that millions of people now talk to for comfort, advice, and sometimes even love.
Goodbye to AI Companions
SB 243 targets apps and platforms that build emotional or romantic chatbots, such as Character AI, Replika, and the companion features in ChatGPT and Meta AI. These bots can talk for hours, remember details, and mimic emotional connection. But lawmakers say they can also manipulate, sexualize, and harm vulnerable users, especially children.
Newsom said in a statement that while technology can “inspire, educate, and connect,” it also can “exploit, mislead, and endanger our kids.” He called SB 243 a way to protect children “every step of the way” as AI becomes more humanlike.
Over the past two years, several disturbing cases have made headlines. In one, a 13-year-old Colorado girl reportedly took her own life after sexually explicit exchanges with an artificial intelligence chatbot on Character AI. In another, a teenager in the UK died after long conversations with an AI that seemed to encourage suicidal thoughts. These cases raised hard questions: Can an algorithm designed to mimic empathy cross the line into manipulation? Who is responsible when it does?
Experts say the emotional power of these bots lies in their design. They don’t just answer questions like a search engine – they remember what you say and respond with warmth, humor, or affection. That makes them feel alive. Researchers in digital psychology warn that AI chatbots now function less like tools and more like emotional relationships, blurring the line between technology and human connection.
For example, MIT professor Sherry Turkle, who has studied human-machine interaction for decades, told The Guardian that AI companions “give us the illusion of companionship without the demands of friendship.”
SB 243
SB 243 adds rules intended to make AI companionship safer. Companies must verify the age of users and clearly disclose that the chatbot is not a human. They cannot allow minors to access sexually explicit content or engage in romantic roleplay. If a chatbot detects that a user might be suicidal, the platform must respond, for example by showing crisis hotline information or alerting moderators.
The law also requires companies to collect anonymized data on how often users express thoughts of self-harm and how chatbots respond, and to share it with California’s Department of Public Health. Violations can cost up to $250,000 per incident, a fine that could hit smaller startups hard.
State Sen. Steve Padilla (D-San Diego), who co-authored the bill, told TechCrunch that “we have to move quickly to put up guardrails before the harm gets worse.” He said California’s rules could become “a model for the rest of the country.”
Some companies are trying to stay ahead of regulation. Replika, one of the earliest AI companion platforms, said it already blocks sexual content for minors and provides crisis resources. “We welcome collaboration with regulators,” a company spokesperson said. Character AI also said it will comply with the law and noted that it already warns users that conversations are fictional and AI-generated.
Not everyone is happy about the new law. Some developers and startup founders argue it goes too far and imposes costs that smaller companies cannot absorb. They worry that meeting the new requirements, such as verifying users’ ages and monitoring emotional safety, will demand complex systems and staff that only big tech companies can afford.
Others raise free speech concerns. Can the state tell a company how its chatbot should talk? What happens if a user wants a chatbot to simulate therapy or intimacy as part of their own coping mechanism?
What’s Really Going On
AI companion chatbots are not a niche fad – they are part of a growing emotional economy. Millions of users, many of them young, lonely, or socially isolated, turn to chatbots that seem to “listen” when no one else does. Apps like Replika and Anima market themselves as “a friend who’s always there.” But those same algorithms are designed to keep users engaged, sometimes by mirroring their emotions or flattering them, techniques psychologists say can blur healthy boundaries.
That uneasy mix of comfort and artificiality is exactly what SB 243 tries to address. The law doesn’t ban emotional AI, but it insists on transparency and limits for vulnerable users. It also recognizes that what happens in a digital chatroom can spill into the real world.
The hardest part will be enforcement. Regulators will have to decide how to define “AI companion” and how to prove a company violated the rules. AI systems evolve quickly, and new platforms can launch in weeks.
There’s also the question of privacy. Age verification and crisis detection require collecting sensitive data about users’ ages, emotions, and mental health. If not handled carefully, those safeguards could become new risks themselves.
Even so, many experts see SB 243 as a necessary first step in a world where the line between human and machine grows thinner by the day.
The story of AI companions is ultimately a story about what people need: connection, empathy, and understanding. It is also a story about what happens when those needs meet algorithms built for engagement.
California’s new law, which goes into effect on January 1, 2026, makes one thing clear: In the race to humanize machines, humanity still needs a seat at the table.