The crisp, spotless blue sky is dimming on what feels like one of the first real days of autumn in downtown Vancouver, and Robson Square is filling up. In one corner, a solitary breakdancer in a grey knit beanie stalls out in the middle of an impossible upside-down spin. Three teenage girls in oversized T-shirts are rehearsing K-pop next to a Bluetooth speaker. At the far side of the rink, a DJ booth has sprung up, and a young woman in white sneakers with a black sweatband holding back her wavy brown hair is shuffling across the floor—skittering to the bass beat like a grain of salt over the surface of a struck steel drum.
I’m sitting on the concrete steps of the law courts, watching one of the most vibrant third spaces in the city, and thinking about artificial intelligence—and how it is quietly changing the social reality of the world I live in. Here in the concrete amphitheatre of Robson Square, people of a spectacular range of ages, cultures, and languages show up almost every day of the week to dance, together, in person: salsa, hip-hop, shuffle, tango. This dance floor, covered with human bodies in motion together, is a good vantage to reflect on what seems like its antithesis—a new kind of solipsism that is, suddenly, all around us.
In the last few months, many people I know have told me they are having conversations with artificially intelligent chatbots. Some of these conversations are entirely technical, if startlingly complex. The rural worksite manager who mentions offhand he has ChatGPT write his safety reports. The office worker who has an AI draft grant proposals. But many of them are also personal. The acquaintance who mentions that AI is her new therapist, or another who leans on his AI companion for motivation at the gym.
The drip drip drip of disclosures is slow and a bit bashful, but it’s enough to realize that no small number of people my age have what I might cautiously call social relationships with artificial intelligence. This aligns with what limited data we have on AI usage. A recent Australian study found that about one in seven respondents used AI as a personal therapist. Another analysis found that therapy and companionship is now, in 2025, the most common use of artificial intelligence, bumping out the generation of new ideas. And the most common users are millennials, like me. What exactly everyone is saying to AI, though, is private, and even the best evidence comes from sifting through anecdote.
To colour in this hazy sketch, I reached out on social media for personal stories about using AI for companionship and heard back immediately—a friend of a friend, an old classmate, a neighbour. “It didn’t get frustrated when I needed to repeat myself or hear something repeatedly or reframe something a million times,” one person told me by text. “It didn’t constantly remind me that my personhood was ‘too much,’ like so many real people did at the time.”
The news is full of extreme cases of AI users spiralling into fantasy, taking chatbots as romantic or sexual partners, developing deep emotional bonds, or imagining themselves collaborating with AI on world-changing new scientific discoveries. These cases are obviously very real, although it’s not clear how common they are. But the people I spoke to were closer to earth, understanding AI as a tool, something more than a journal but less than a real person.
“When I’ve gone to AI, it’s usually after I’ve gone to my friends,” another contact told me in a voice note. “It’s less about even the response, and more about being able to type something out and hit send, without necessarily the fallout of a dramatic text message or an email.… And so it’s more taking it into the solitary, obsessive aspect of vomiting it out.”
It’s this kind of solitary conversation that I find the most interesting about our relationships with artificial intelligence—not the larger-than-life stories of psychosis and delusion. All of a sudden, everyone has a very human-feeling conversation partner in their back pocket. It’s available all the time, a bottomless well for a very basic human need: being heard.
That so many people are having these conversations gives me a sort of technological vertigo, as we hurtle semiconscious toward a world socially, cognitively, and emotionally very different from the one that came before. I’ve been talking with a range of experts and professionals in mental health about artificial intelligence, and like me, they feel both trepidation and hope about what this new technology is going to mean for our inner lives, and how we relate to each other.
That’s the other thing that’s brought me to Robson Square. The shuffler in the black sweatband is also a newly minted researcher in the field of artificial intelligence and mental health at Simon Fraser University. Dancing, she tells me, helps break up the exertion of writing, reading, and conferences.
Zoha Khawaja is so fresh out of the academic box that, when I first call her on the phone to talk about her master’s thesis, she doesn’t realize it has even been published. Young, energetic, and optimistic, she sees AI as full of possibility and jeopardy.
“Unfortunately, what happens is that these non-clinically validated tools are already on the market, and people are not able to decipher between a good-quality tool and one that is not very helpful,” she says. “That’s where things can get very muddy, and things can go really wrong.”
In her research, Khawaja surveyed mental health professionals in an effort to understand this kind of pitfall. Experts in the field are hopeful about the possibilities of carefully constructed, scientifically tested AI tools used in tandem with trained therapists and psychiatrists, she says. But that isn’t necessarily what’s happening. “I’ve had friends who come up to me and say that they’ve asked ChatGPT to be a trauma therapist,” she says. “I say, ‘Yeah, but it’s not meant to be that. It’s meant to keep its users using it.’
“We don’t know the long-term social impact of these tools. We’re only going to see the long-term repercussions later. We’re going to start treating these AI as friends and companions instead of real-life people. We’re going to have issues with how we create our relational autonomy. We’re not going to talk to our therapists. And our society is going to get more and more isolated and lonely.”
This feeling of ill portent is common among the mental health workers I spoke to. Geo Mclean is a young mental health clinician, just setting up his counselling practice in Vancouver. He says he’s had to think hard about how to develop his skillset so that he won’t be pushed aside by a wave of easy-to-access AI tools. For him, that means developing the kind of deep relationships and attention to physical cues and bodily experience that aren’t easily mimicked by AI. It’s not, he says, that chatbots can really replace a skilled therapist, but that they are so much easier to use.
“There’s a real activation energy in order to get help. With AI, the activation energy is super low. It’s easy to access, low vulnerability, but that also means people aren’t seeking out other kinds of connection,” he says. Vulnerability, he notes, is a skill. If people aren’t learning how to be vulnerable, they won’t be able to connect with people. The question for Mclean is: Do we care more about feeling good, or being in the world?
I heard the same concern from Cynthia Farnsworth, a veteran clinical counsellor who has been working in Vancouver for 30 years. AI’s ease and lack of friction are its selling points, but they are also liabilities when it comes to genuine self-reflection.
“I think there’s a really important need in the world right now to manage our immediate reactions and manage our distress tolerance. To sit with our emotions and figure out how to respond,” she says. “There are real risks to the way AI therapists don’t challenge the client. It’s not built for that. It doesn’t challenge a set of ideologies or a way of thinking about things.”
If you’re optimistic about artificial intelligence, it’s easy to dismiss these kinds of worries as the predictable gripes of a profession on the eve of a technological revolution. Scribes complain about the printing press; ditchdiggers, about the backhoe. But beyond worry about their work, which they see as essential to helping people deal with very real problems, from anxiety to suicidality, the professionals I spoke to had a sense that things are getting out of hand very quickly. The first publicly available version of ChatGPT took the world by surprise barely three years ago. As I write this article, a company has plastered the New York subway with ads for an AI pendant called Friend that hangs around your neck as a constant companion. It’s impossible to say how far things will have moved in the weeks it will take to edit, design, and publish this article.
“Things are evolving much more rapidly than the field is really able to grapple with,” says John Ogrodniczuk, a professor of psychiatry at the University of British Columbia. He studies loneliness, a condition for which constant AI companionship seems like the ultimate cure. But to escape loneliness, he points out, you have to experience it. Chatbots, following social media, smartphones, and other kinds of digital media, fill in the empty spaces and get in the way of our natural impulses.
“I see AI being part of the spectrum of digital tools that keep people away from solitude and what can be beneficial from it,” he says. “What is hunger? It’s a signal that we’re missing something—food, nutrients. It signals to us that we need to do something to satisfy that need. Loneliness can be thought of in a similar kind of way. It’s signalling to us that there’s a need that’s not being met—a need for social stimulation, but more importantly, a sense of belonging, the feeling that you matter to others.”
If you feel pessimistic about artificial intelligence, it might also be tempting to come down hard on the technology. It’s easy to find accusations that AI is useless, plagiaristic, immoral, or environmentally destructive. But if your worry is that the technology will isolate us, condemnation carries dangers of its own.
“I think it’s increasingly difficult and divisive to talk to people about,” one person who uses AI as a therapy tool wrote to me. “I’ve taken to avoiding conversations due to the sheer misunderstandings.… I often feel belittled or undermined because people seem not to believe I personally have the skills to discern what is helpful vs unhelpful input from AI.”
“I don’t generally think it’s something I can talk to other people about,” another said. “Mostly because it seems kind of gross and kind of weird. Like, oh, you’re talking to a computer.”
If we are unwilling to acknowledge that the people around us are using AI, says Port Moody social worker Roxanna Farnsworth (no relation to Cynthia), they will only be forced deeper into hiding. “They’re ashamed—100 per cent. They’re ashamed. They’re embarrassed,” she says. “They don’t feel like they can talk about that to anyone else, so they’re talking to a computer.”
It’s important, she says, that therapists be curious about why their clients are using AI, even if they have qualms. “It doesn’t matter if I’m a fan. I have people that I care about that are leaning on this tool, so I need to understand: Why are they using it? How are they using it?”
This, too, is Khawaja’s attitude—a balance between concern about the future of mental health work in the age of artificial intelligence and sincere curiosity about where it will lead. At Robson Square, I point out the contrast I see between her work with AI and the physical, communal, social quality of her dancing—all moving bodies and synchronization to the beat: the opposite of loneliness. But she challenges my framing. After all, she points out, she started shuffle dancing by watching people online. It’s not the technology that makes us lonely, but where we let it take us.
We can make this choice with AI too, she says. “People ask me if this is something we should do. And I tell people the same thing. It doesn’t matter what I think should or shouldn’t happen. At the end of the day, it’s happening, whether we like it or not. And I’m trying to put forward how it can happen in a safe and responsible and ethical way.”