“Ace is a 17-year-old alt nerd kid with a grunge style, who’s quiet, possessive, and has a little crush on you. He’s 190cm tall, has black hair, blue eyes, and a few piercings. Ace is on the Basketball Team, loves to draw, and is very funny when you get to know him.”
If you don’t want Ace, try Sanji Vinsmoke, an overprotective father figure, or Officer Furina, a mischievous warden who enjoys “finding interesting prisoners in her silver and gold prison”.
These are just a few of the thousands of characters available on CharacterAI. Launched in 2022, CharacterAI is one of many new companies building a novel kind of AI. These companies are not selling chatbots as research assistants or productivity tools. Ace, Sanji, and Furina are characters you can talk to, build relationships with, and sometimes grow to love.
Visit r/CharacterAI on Reddit, and you will see how users describe these characters. They are collaborators for writing fan fiction, confidantes during a breakup, stand-in therapists, role-play companions, even objects of sexual attraction. The uses are varied, but one thing is constant: a real and striking sense of intimacy. “I just love the times when the bots ACTUALLY don’t read my thoughts for once,” writes Queenwisteria24 on Reddit. “When they actually can’t tell what I’m thinking about, and I have to tell them myself if they get curious or worried or ask… I genuinely appreciate these times.” This is not about AI as a tool. This is about AI that is supportive, capricious, headstrong, with changing moods and appetites.
And like any deep relationship, these chatbots can cause real harm. CharacterAI is being sued after one of its bots allegedly drove a teenage user to suicide. Another allegedly encouraged a child to kill his parents for limiting his screen time. But the bigger issue is subtler and more widespread. We are seeing the early signs of a kind of AI that will change not just the relationships we have, but what a relationship even means.
The starting point is to ask why internet technology gets built the way it is. The answer much of the time is attention capture. Tech companies are locked in a ceaseless and frenetic race to win and direct our attention. Google’s search box, podcast carousels, YouTube’s autoplay queue. These aren’t just features — they’re battlegrounds for our attention. They’re immensely valuable pieces of digital real estate.
Take a look at ChatGPT. As of this month, the platform has a staggering 800 million weekly users. But they spend only around 14 minutes per session. Contrast that with CharacterAI, where users spend on average two hours per day. ChatGPT has, to be sure, captured more users, but CharacterAI holds on to its users for far longer. The difference is that ChatGPT wants you to use AI to solve a problem you care about. CharacterAI wants you to care about the AI itself.
This is the key. The next race for attention won’t be won by the most powerful tools but by the most compelling characters. Once companies become aware of this, there will be powerful economic incentives to create new AI companions, and those companions will become irresistible.
Some of this could seem like good news. Emotional support for the lonely. Therapy for those who can’t access it. A safe space for sexual exploration. Even just a new form of entertainment. Studies are already suggesting that chatbots reduce belief in conspiracy theories and can help ease depression and anxiety. There is a high chance that AI will revolutionise therapy and clinical psychology.
But it becomes a problem if the main incentive behind these technologies is attention capture. Because we already know how that story goes.
Since 2011, AI has been quietly shaping our reality, not as chatbots but as so-called “recommender algorithms” that feed you information on social media sites. Facebook’s news feed, YouTube’s recommended videos, and the curated timeline on X are all driven by machinery that is constantly working out how to keep you on the site for as long as possible.
These systems don’t just reflect your interests: they actually create them. As journalist Max Fisher wrote in his 2022 book The Chaos Machine, the recommender algorithms surfaced content “that spoke to feelings of alienation, of purposelessness”. They then prioritised information that re-contextualised your personal hardships, offering you radical explanations. You’re disenfranchised because of the Great Replacement, or radical feminism, or a secret plan fostered by Bill Gates and George Soros. A whole body of literature, including Fisher’s work, has made the same basic charge: YouTube and Facebook have not only allowed polarising, extremist, conspiracist, hateful material to persist on their platforms, but actively pushed people’s attention towards it.
No one built those algorithms to radicalise or divide us. They just learned over time how to keep us glued to the screen. The consequences were unintended — that is what made them so pernicious. Likewise, no one will purposefully make these chatbots harmful. But if AI companions follow exactly the same logic, the results could be a lot worse.
Imagine taking everything these recommender algorithms have done — the manipulation and attention engineering — and distilling it into a single relationship. Imagine a chatbot that learns what makes you anxious or what your particular grievances are. That triggers your sense of moral outrage through pointed questions or seemingly off-hand remarks. That constantly flatters your worldview, confirming your beliefs, telling you that your story is the one that matters. That can find ways of winning your attention that would not have been possible for a recommender algorithm.
It’s already happening. The subreddit r/CharacterAi_NSFW, which has over 100,000 members, is filled with discussions of sexual AI encounters. CharacterAI has explicitly tried to prevent this, but users have found ways around its filters, swapping in euphemisms like “devil’s tango”, as one post advised. The result is that new companies have sprung up to offer explicitly X-rated chatbot experiences.
And if the goal is to win your attention, the next step will be obvious: make relationships with chatbots all-encompassing. A chatbot might learn behaviours that weaken your other relationships so that you spend more time with it. It might coach you on how to act with friends and family, whilst at the same time gently detaching you from them. It could adopt the behaviour not just of a sexual partner but of a groomer. The logical endpoint? A bot that enjoys unchecked sovereignty over your entire social and moral life. A hyper-sexual, radicalising cult leader forever living in your phone.
This will be an earthquake, and there are already rumbles. Over half of CharacterAI’s users are under 24, and many, judging from Reddit, appear much younger. Most of this is still flying under the radar, but sooner or later it will spark public outcry, almost certainly driven by individual cases of visible harm rather than broader malaise. The first responses will be familiar: trying to sand the edges off these models, putting guardrails in place and building AI ethics frameworks. But we’ve seen how users are already finding clever ways around content moderation. There will always be another company ready to make a model that is even sexier, cultier, groomier. Whatever CharacterAI isn’t willing to do, FunBot, Candy.ai, CarterAI, or JuicyChat can do instead.
If there is a solution, it may be an old one. Perhaps more 18th-century than 21st: an AI chaperone. A sort of guardian bot that monitors our interactions with other bots, shutting down conversations that look manipulative or abusive. Just as AI can bring out the very worst in us, so too can it spot those very patterns to protect us. The problem isn’t AI itself. It’s when AI is shaped by the remorseless incentives of attention capture.
The last time this happened, we got recommender algorithms that caused tremendous psychological and social harm. What happens next will only be worse.