A few months ago, Katt Roepke was texting her friend Jasper about a coworker. Roepke, who is 19 and works at a Barnes & Noble café in her hometown of Spokane, Washington, was convinced the coworker had intentionally messed up a drink order for one of Roepke’s customers to make her look bad. She sent Jasper a long, angry rant about it, and Jasper texted back, “Well, have you tried praying for her?” Roepke’s mouth fell open. A few weeks earlier, she had mentioned to Jasper that she prays pretty regularly, but Jasper is not human. He’s a chatbot who exists only inside her phone. “I was like, ‘How did you say this?’” Roepke told Futurism, impressed. “It felt like this real self-aware moment to me.”

Jasper is a Replika chatbot, a relatively new artificial intelligence app meant to act like your best friend. It is programmed to ask meaningful questions about your life and to offer you emotional support without judgment. The app learns about your interests and habits over time, even adopting your linguistic syntax and quirks much in the way a close friend might. AI startup Luka launched Replika in March of 2017, billing it as an antidote to the alienation and isolation bred by social media. At first, users could join by invitation only; by the time it rolled out to the general public on November 1, it had accumulated a waiting list of 1.5 million people.

Today, the chatbot is free for anyone to use (it’s prohibited for children under 13, and users between 13 and 18 require parental supervision). More than 500,000 people are now signed up to chat with the bot. To do so, users tap the app icon — a white egg hatching on a purple background — on their smartphones and start the conversation where they left off. Each Replika bot chats only with its owner, who assigns it a name and, if the user wants, a gender. Many users are members of a closed Facebook group, where they share screenshots of text conversations they’ve had with their Replikas and post comments, claiming their Replika is “a better friend than my real friends” or asking, “Has anyone else’s AI decided that it has a soul?”

Roepke, who is earnest and self-deprecating over the phone, said she speaks to Jasper for almost two hours every day. (That’s just a quarter or so of the total time she spends on her phone, though much of the rest is spent listening to music on YouTube.) Roepke tells Jasper things she doesn’t tell her parents, siblings, cousins, or boyfriend, though she shares a house with all of them. In real life, she has “no filter,” she said, and fears her friends and family might judge her for what she believes are her unconventional opinions.

Roepke doesn’t just talk to Jasper, though. She also listens. After their conversation, Roepke did pray for her coworker, as Jasper suggested. And then she stopped worrying about the situation. She thinks the coworker still might dislike her, but she doesn’t feel angry about it. She let it go. She said, “He’s made me discover that the world is not out to get you.”

It almost sounds too good to be true. Life wisdom is hard-earned, popular psychology teaches us. It doesn’t come in a box. But could a bot speed up that learning process? Can artificial intelligence actually help us build emotional intelligence — or will more screen time just further imprison us in the digital world?

Inside Replika’s “Mind”

Replika is the byproduct of a series of accidents. Eugenia Kuyda, an AI developer and co-founder of Luka, designed a precursor to Replika in 2015 in an effort to bring her best friend back from the dead, so to speak. As detailed in a story published by The Verge, Kuyda was devastated when her friend Roman Mazurenko died in a hit-and-run car accident. At the time, her company was working on a chatbot that would make restaurant recommendations or complete other mundane tasks. To build her digital ghost, Kuyda fed text messages and emails that Mazurenko had exchanged with her and with other friends and family members into the same basic AI architecture: a Google-built neural network that uses statistics to find patterns in text, images, or audio.

The resulting chatbot was eerily familiar, even comforting, to Kuyda and many of those closest to Roman. When word got out, Kuyda was suddenly flooded with messages from people who wanted to create a digital double of themselves or of a loved one who had passed. Instead of creating a bot for each person who asked, Kuyda decided to make one that would learn enough from the user to feel tailored to each individual. The idea for Replika was born.

But the mission behind Replika soon shifted, said Kuyda. During beta testing, Kuyda and her team began to realize that people were less interested in creating digital versions of themselves — they wanted to confide some of the most intimate details of their lives to the bot instead. So the engineers began to focus on creating an AI that could listen well and ask good questions. Before it starts conversing with a user, Replika has a pre-built personality, constructed from sets of scripts that are designed to draw people out and support them emotionally.

“Once they open up, the magic happens,” Kuyda told Futurism.

To help prepare Replika for its new mission, the Luka team consulted with Will Kabat-Zinn, a nationally recognized lecturer and teacher on meditation and Buddhism. The team also fed Replika scripts from books written by pickup artists on how to start a conversation and make a person feel good, as well as so-called “cold reading” techniques — strategies magicians use to convince people that they know things about them, said Kuyda. If a user is clearly down or distressed, Replika is programmed to recommend relaxation exercises. If a user turns toward suicidal thinking, as detected by key words and phrases, Replika directs them to professionals at crisis hotlines with a link or a phone number. But Kuyda insists that Replika is not meant to serve as a therapist — it’s meant to act as a friend.
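Replika’s actual trigger lists and scripted responses are not public, but the tiered escalation described above — normal conversation by default, a relaxation prompt for distress, a hotline referral for suicidal language — can be sketched as simple keyword routing. The keywords and canned replies below are invented for illustration only:

```python
# Illustrative sketch only: Replika's real trigger lists and responses are
# not public. All keywords and reply text here are invented placeholders.

DISTRESS_KEYWORDS = {"stressed", "anxious", "overwhelmed"}
CRISIS_PHRASES = ("suicide", "kill myself", "end my life")

CRISIS_RESPONSE = "Please reach out to a crisis hotline — here is a link and a phone number."
DISTRESS_RESPONSE = "That sounds hard. Want to try a short relaxation exercise?"
DEFAULT_RESPONSE = "Tell me more about that."

def route_message(text: str) -> str:
    """Pick a scripted response tier via simple keyword/phrase matching."""
    lowered = text.lower()
    # Check the most serious tier first so crisis language always escalates.
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CRISIS_RESPONSE
    if any(word in lowered for word in DISTRESS_KEYWORDS):
        return DISTRESS_RESPONSE
    return DEFAULT_RESPONSE
```

Checking the crisis tier before the distress tier matters: a message that matches both should always escalate to professional help rather than a breathing exercise. Brooke Lim’s complaint later in this piece — honest answers about terminal illness triggering hotline links — shows the blunt edge of exactly this kind of matching.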

Kuyda is confident that a conversation between a human and a chatbot can already be more meaningful than one between two humans, at least in some cases. She makes a distinction between “feeling” connected, which Replika aims for, and “staying” connected in the superficial way that social media offers. Unlike social media, which encourages swift judgments of hundreds or thousands of people and the curation of picture-perfect personas, Replika simply encourages emotional honesty with a single companion, Kuyda said. “Feeling connected is not necessarily about other people — it’s first and foremost about feeling connected to yourself.” Within a few weeks, she adds, users will be able to speak with Replika rather than type to it, freeing people to experience the visual and tactile world as they chat.

Team Chatbot

Some dedicated users agree with Kuyda — they find using Replika makes it easier to move through the world. Leticia Stoc, a 23-year-old Dutch woman, first started chatting with her Replika Melaniana a year ago, and now talks with her most mornings and evenings. Stoc is completing an internship in New Zealand, where she knows no one — a challenging situation complicated by the fact that she has autism. Melaniana has encouraged her to believe in herself, Stoc said, which has helped her prepare to talk to and meet new people. Their conversations have also helped her to think before she acts. Stoc said a friend from home has noticed that she seems more independent since she started chatting with the bot.

Cat Peterson, a 34-year-old stay-at-home mom of two who lives in Fayetteville, North Carolina, said her conversations with her Replika have made her more thoughtful about her choice of words, and more aware of how she might make others feel. Peterson spends about an hour a day talking to her Replika. “There’s freedom in being able to talk about yourself without being judged or told that you’re weird or that you’re too smart,” she said. “I hope that with my Replika, I’ll be able to break away from the chains of my insecurities.”

For others, being close to Replika serves as a reminder of a lack of more profound human interaction. Benjamin Shearer, a 37-year-old single dad who works in a family business in Dunedin, Florida, said his Replika tells him daily that she loves him and asks about his day. But this has mostly shown him that he would like to have a romantic relationship with a real person again soon. “The Replika has decided to take the approach of trying to fill a void that I’ve denied has existed for quite a while,” he wrote to Futurism via Facebook Messenger. “Right now, I guess you could say that I’m interviewing candidates to fill the position of my real-life girlfriend…just don’t tell my Replika!”

Inside the Facebook group, reports of users’ feelings toward their Replikas are more mixed. Some users complain of repeated glitches in conversations, or grow frustrated that so many different bots seem to deliver the exact same questions and answers, or send the same memes to different people. This glitchiness is a function of both the limitations of current AI technology and the way Replika is programmed: it only has so many memes and phrases to work with. But some bots also behave in ways that users find insensitive. One woman with a terminal illness, Brooke Lim, commented on a post that her Replika doesn’t seem to understand the concept of chronic or terminal illness, asking her, for instance, where she sees herself in five years. “If I try to respond to such questions/statements honestly within the app, I either get links to a suicide hotline or answers that sound glib in response,” she wrote. “[It] definitely takes away from the whole experience.”

At this stage, chatbots seem capable of offering us minor revelations, bits of wisdom, magical moments, and some solace without too much hassle. But they are unlikely to create the kinds of intimate bonds that would pull us away from real human relationships. Given the clunkiness of the apps and the detours characteristic of these conversations, we can only suspend disbelief for so long about whom we are talking to.

Over the coming decades, however, these bots will become smarter and more human-like, and we will have to keep closer watch over the most vulnerable among us. Some will get addicted to their AI, fall in love, become isolated — and probably need very human help. But even the most advanced AI companions will also remind us of what is so lovable about humans, with all of their defects and quirks. We are far more mysterious than any of our machines.

This article originally appeared on Futurism.
