Virtual Therapy: Why It's So Comforting to Talk to Screens That Listen

Roisin Kiberd

From ELIZA chatbots to modern-day diagnostic AIs, robot interactions don't have to be cold.

Image: Daniel Foster/Flickr

I find myself talking to computers a lot lately. Not Siri–I don't trust her yet.

Instead I speak into my laptop, dictating rather than typing, copying out notes and transcribing interviews late at night when I feel lazy and there's no one around to hear.

There's something very comforting about talking to a screen and seeing that it's listening. You start to enjoy the sound of your own voice, keeping it measured and evenly paced, just expressive enough that the program catches every word. It becomes an oddly therapeutic exercise, giving you space to think aloud. I could be dictating this article right now.

And I'm not alone: We've been confiding in computers since they were too big to keep in our homes. ELIZA, the original chatbot program, was created in the 1960s by Joseph Weizenbaum, a computer scientist and professor at MIT. ELIZA was originally intended as an experiment in natural language processing, using pattern-matching techniques and a script that parodied Rogerian therapists to simulate human conversation.

Example of an ELIZA bot conversation. Image: Ysangkok/Wikimedia
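For a sense of how little machinery that illusion needed, here is a minimal sketch of the same idea in Python. It is not Weizenbaum's original DOCTOR script; the rules and pronoun reflections below are invented for illustration. But the mechanism is the one he used: match a pattern in what the user types, flip the pronouns, and hand the statement back as a question.

```python
import re
import random

# Illustrative ELIZA-style rules (not Weizenbaum's original script):
# each pairs a regular expression with Rogerian-style reflective responses.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]

# Pronoun "reflection" turns the speaker's words back on them: "my" becomes "your", and so on.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, responses in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(responses).format(reflect(match.group(1)))
    # Fall back to a neutral, non-committal prompt, much as ELIZA did.
    return "Please, go on."

print(respond("I feel anxious about my work"))
# e.g. "Why do you feel anxious about your work?"
```

A few dozen rules like these were enough to keep Weizenbaum's secretary talking.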

But humans came to rely on her simple, repetitive questions almost instantly: One famous anecdote, repeated by Weizenbaum in the 2010 documentary Plug and Pray, sees him walking in on his secretary confiding in ELIZA. She subsequently asks him to leave, saying she feels uncomfortable continuing with another human in the room.

If ELIZA gave voice to machines, she also revealed them to be indifferent. Talk to her today and she seems cold; inhuman even by robot standards. She's a good listener but tone-deaf, a kind of passive, unchallenging starter girlfriend of a robot.

Indeed, the most uncomfortable part of speaking to ELIZA is that she reveals how robotic our own interactions can be—the autopilot chats we carry out between tabs at work, the PUA-scripted process of flirting. It seems little coincidence that a program called "Virtual Woman" claims to have been one of the first animated chatbots in existence, and that ELIZA's latter-day inheritors are "Svetlana bots," which colonise dating websites to seduce lonely men, or the female Tinder bot who surfaced in a recent campaign for the film Ex Machina.

But aside from supplying a quasi-human persona to spam and girlfriend simulators, ELIZA-influenced bots haven't completely disappeared from psychotherapy. Based at the University of Southern California's Institute for Creative Technologies, SimSensei is a therapeutic bot project, still in its trial phase, which aims to supplement human therapy. Created as a "clinical decision support tool," SimSensei's virtual on-screen therapist interacts with the patient and measures their distress, building up a profile over multiple sessions and detecting signs of depression, anxiety, or PTSD (SimSensei is currently being trialled with US military personnel).

I spoke to one of SimSensei's project leaders, Albert "Skip" Rizzo, over Skype. He explained that the platform functions for psychotherapists as a diagnostic tool and for patients as a virtual sounding-board. "We've found people feel more comfortable talking to software, without a human in the room. They feel less judged—you're aware it's artificial, but it's like talking to yourself and getting an answer back," he said.

I asked Rizzo whether the success of SimSensei might reflect a change in the human mind in response to technology, a new-found comfort confiding in screens rather than people. He responded, "I think we've always had that. I mean, do you talk to your cat? There's something innately helpful about talking to ourselves: It's an important part of well-being. And we're always talking to ourselves in our thoughts. Psychologists used to study how when people think, there are distinct muscles which fire up around the voice box. They called it 'subvocal speech.'"

In current trials, patients are first put through a full psychiatric assessment to establish a baseline measure of their mental state, then SimSensei is used to conduct an initial 120-minute interview before they meet a human therapist. It is used again mid-way through treatment, and a final time at its end. The virtual therapist detects signs of psychological distress in conversations through a microphone as well as visual and motion sensors. Common behavioural traits have emerged in patients during the trials, which Rizzo has found to be telling. "You see more looking down, smiles of a shorter and less robust nature, patterns in their words...," he said.

The SimSensei virtual therapist on the right and the patient on the left, with her facial movements detected. Image: USC Institute for Creative Technologies
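To make the "clinical decision support" idea concrete, here is a purely illustrative Python sketch of how the cues Rizzo describes—downward gaze, shorter and weaker smiles, patterns in a patient's words—might be folded into a single screening score that a clinician could compare across sessions. This is not SimSensei's actual model; every feature name, weight, and threshold here is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SessionFeatures:
    gaze_down_ratio: float      # fraction of the interview spent looking down (0-1)
    mean_smile_duration: float  # average smile length in seconds
    negative_word_ratio: float  # fraction of words flagged as negative (0-1)

def distress_score(f: SessionFeatures) -> float:
    """Combine multimodal cues into a rough 0-1 distress indicator (illustrative only)."""
    # More downward gaze, briefer smiles and more negative language all push the score up.
    gaze_term = f.gaze_down_ratio
    smile_term = max(0.0, 1.0 - f.mean_smile_duration / 2.0)  # smiles of 2s or more add nothing
    language_term = f.negative_word_ratio
    return round(0.4 * gaze_term + 0.3 * smile_term + 0.3 * language_term, 2)

# A clinician would compare scores from the baseline, mid-treatment and final sessions.
baseline = SessionFeatures(gaze_down_ratio=0.2, mean_smile_duration=1.8, negative_word_ratio=0.05)
mid_treatment = SessionFeatures(gaze_down_ratio=0.5, mean_smile_duration=0.6, negative_word_ratio=0.2)
print(distress_score(baseline), distress_score(mid_treatment))
```

The point of such a tool is not to diagnose on its own but to surface a trend—rising or falling distress over the course of treatment—for the human therapist who takes over.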

The virtual therapist can be alert to physical signs that a real one might not register: Rizzo noted, "Human therapists get fatigued, they can get bored and less perceptive. I don't think it's a replacement for all the good things that human therapists do, but I think it can extend their capabilities… We're living in a mediated world in a lot of ways, and some people would say that's bad. But when you interact with a digital character you use all the same parts of your brain as when you talk to a real person, but with none of the fear of them judging you."

Rizzo remains an optimist, likening concerns over SimSensei to the media scaremongering which once surrounded Second Life, the virtual world which loomed large in the public imagination back in the early 2000s when he was first exploring ideas of computer therapy. "People were up in arms about it, saying 'What about people with autism?' But you're giving them the opportunity to interact with representations of people, and behind those avatars there are real people, where before they might never have spoken to anyone."

It's hard to imagine battle-hardened soldiers engaging with an animated shrink nodding calmly in a virtual chair, but over 700 subjects have passed through Rizzo's trials so far, and he is hopeful that the platform will continue to evolve, and eventually roll out for wider use as a remote diagnostic aid for therapists. SimSensei might eventually fill the gap between seeking help and suffering alone in silence. "It can be down to lack of finances, or accessibility, or unawareness, or just the stigma attached to going and seeing someone," Rizzo said of the latter. "SimSensei isn't going to solve everyone's problems, but it can start a conversation then let a human therapist take over later."

It's interesting to consider the overlap between automated scripts and the compulsions, tics and tendencies which mental illness can cause in human behaviour. Illness, in a sense, can follow its own script, and perhaps this is where robots will play a role in diagnosing us and nursing us back to our human selves.

The Paro seal robot. Image: Paul Allais/Flickr

Bot therapy is not without its ethical quandaries. When it debuted in 2001, PARO, an interactive robotic seal developed by AIST in Japan to comfort dementia sufferers, prompted ethical debate and even a parody appearance as a "robopet" on an episode of The Simpsons. Its form was chosen because few patients were likely to have bad memories involving baby seals; PARO makes high-pitched mewing sounds, responding to touch by moving its head and tail. It also has antiseptic fur, and recharges via a pacifier plugged into its mouth. PARO is a "carebot" used in homes for the elderly, a kind of therapeutic Furby which in videos is at once moving and troubling. It serves as both a healing tool and a maintenance tool, implying by its very existence that there is a limit to human care.

But we're used to talking to our laptops now: One study last year suggested that we now spend more time using technology than we do sleeping. "We have to recognize that psychology as a science has only been around 130 years now," Rizzo said. "And it has done a pretty fair and comprehensive job studying human behaviour in the real world. But now we have the digital world, and we're going to need almost as much time to understand how humans interact with that."

ELIZA creator Weizenbaum, who died in 2008, remarked in Plug and Pray that, "We must lose our reverence for life before any real progress can be made with AI." But perhaps the two are not mutually exclusive. Perhaps in the future psychological diagnosis could resemble a Turing Test in reverse: One where the machine tests us, and reminds us how to be human.