INTELLIGENT CHATBOTS COULD AUTOMATE AWAY NEARLY ALL OF OUR COMMERCIAL INTERACTIONS — FOR BETTER OR FOR WORSE.
TWO YEARS AGO, Alison Darcy built a robot to help the depressed. As a clinical research psychologist at Stanford University, she knew that one powerful way to help people suffering from depression or anxiety is cognitive behavioral therapy, or C.B.T. It’s a form of treatment in which a therapist teaches patients simple techniques that help them break negative patterns of thinking. C.B.T. is not difficult to learn, but it’s more effective when it includes regular check-ins with a therapist — which, as Darcy knew, isn’t feasible for most people. Maybe they can’t afford it; maybe they’re too busy; maybe they avoid treatment because it seems stigmatizing to them.
“Two-thirds of people will never get in front of a clinician,” says Darcy, who talks in an exuberant flow. “And that’s in the United States! The rest of the world? More than half the world doesn’t even have access to basic health care. The idea of mental health care is just completely out of reach.”
Darcy happened to be a former computer programmer, so she was able to dream up a very unusual solution to this problem: Woebot, a text-chatbot therapist. Working with a team of psychologists and Andrew Ng, a pioneer in artificial intelligence, Darcy wrote a set of conversational prompts that walks users through the practice of C.B.T. In a chipper style, the bot helps users challenge their “distorted thinking”; it coaxes users to describe their moods more clearly. Since Woebot is just software, it could be made freely available worldwide, and it could, in Silicon Valley terms, “scale” — or converse with thousands of people simultaneously. It could check in and nudge users with superhuman diligence; it would be available at all hours. “Woebot can be there at 2 a.m. if you’re having a panic attack and no therapist can, or should be, in bed with you,” Darcy says.
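To make the idea concrete, here is a minimal, hypothetical sketch of what such a scripted conversational flow might look like in code. The prompts and structure are invented for illustration only; they are not Woebot's actual script or implementation.

```python
# A hypothetical sketch of a scripted check-in of the kind described above:
# a fixed sequence of prompts that asks about mood and nudges the user to
# re-examine a negative thought. Not Woebot's actual code or prompt set.
SCRIPT = [
    "Hi! How are you feeling right now, in a word or two?",
    "Thanks for sharing. What's one thought that's been weighing on you?",
    "Is that thought a fact, or might it be all-or-nothing thinking?",
    "How would you restate it in a kinder, more balanced way?",
    "Nice work. I'll check in with you again tomorrow.",
]

def run_checkin() -> list[tuple[str, str]]:
    """Walk the user through the scripted prompts and record each exchange."""
    transcript = []
    for prompt in SCRIPT[:-1]:
        answer = input(prompt + "\n> ")
        transcript.append((prompt, answer))
    print(SCRIPT[-1])  # closing message needs no reply
    return transcript

if __name__ == "__main__":
    run_checkin()
```

Because the flow is just a fixed script, it costs nothing extra to run for one user or a million, which is what lets a bot like this "scale" in the way Darcy describes.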
Woebot does not pretend to be human; it appears as a cartoon robot when it chats with you on Facebook Messenger, and it acknowledges its own artifice (as when it declares, for example, “I’m going to tell you a little bit about how I like to work with humans”). But its personality is otherwise upbeat, its conversations peppered with emoji and animated gifs (like the cheering Minions from “Despicable Me”) to congratulate you for doing psychological work.
In a study with 70 young adults, Darcy found that after two weeks of interacting with the bot, the test subjects reported lower levels of depression and anxiety. They were impressed, and even touched, by the software’s attentiveness. “Woebot felt like a real person that showed concern,” one of them told Darcy’s team. Last spring, when Darcy put Woebot online, free to all, its use immediately exploded; in the first week, more than 50,000 people talked to it. (“Do you realize,” Ng told Darcy, “that Woebot spoke to more people today than a human therapist could in a lifetime?”) Nowadays, Woebot exchanges between one and two million messages a week with users, ranging from divorcées to the bereaved to young men, a population that rarely seeks treatment. Many tell Darcy that it’s easier to talk to a bot than a human; they don’t feel judged.
Darcy argues this is a glimpse of our rapidly arriving future, where talking software is increasingly able to help us manage our emotions. There will be A.I.s that detect our feelings, possibly better than we can. “I think you’ll see robots for weight loss, and robots for being more effective communicators,” she says. It may feel odd at first; indeed, when people email her to say Woebot helped them feel better, nearly every one begins the note by sheepishly explaining, “I didn’t think that this would be helpful.” But there’s something about talking to software that is powerful, they discover, when it responds and seems alive.
“It’s conversation,” Darcy says. “And we’ve been conversing for, what is it, 200,000 years?”
RECENT HISTORY HAS seen a rapid change in at least one human attitude toward machines: We’ve grown accustomed to talking to them. Millions now tell Alexa or Siri or Google Assistant to play music, take memos, put something on their calendar or tell a terrible joke. We ask chatbots for trivia or to translate English phrases into Mandarin. If you contact customer service these days in a text chat, odds are that you will start out talking to software. Sometimes we even conspire with them; Alexa has a “whisper mode,” for when you need to talk to it beside a snoozing partner.
The rise of “conversational agents” is the next great shift in computer interfaces — one arguably as significant as the “point-and-click” interface that emerged in the ’80s. Before the Apple Macintosh, the first computer to popularize point-and-click, people using home computers had to familiarize themselves with abstruse text commands. The advent of the visual interface opened up computing to the masses, producing a generation fluent in word processing, email and, eventually, web surfing. The next great shift, the mobile phone, put computing — and nonstop internet access — into our pockets, and unleashed a tsunami of social media. These sorts of changes don’t come along very often, and when they do, they create new and unexpected behaviors.
Talking software gives us computers that not only ride along with us but also socialize with us. Being humanlike — saying “hi,” telling self-deprecating jokes — is their interface metaphor, much as the first point-and-click computers used the trappings of office life (a wastepaper basket, a tiny pad of paper) to help orient us to the screen. Meaghan Keaney Anderson, a vice president of HubSpot, a marketing and sales software firm, has seen firsthand how voice commands have become second nature in her household, particularly for the next generation: “My daughter is 22 months old now. At 9 months she said her first word, which was the dog’s name, and then at 13 months she learned to walk, and then by 15 months she started giving Alexa commands.” She added: “I think my daughter is growing up in a world where you just speak what you want into the universe and it provides.”
For years, A.I. programmers fixated on passing the Turing Test — the famous challenge floated by Alan Turing in 1950 to produce a machine that can fool a human into thinking it is also human. Sci-fi has made dystopic hay of this in movies like “Blade Runner” and “Ex Machina.” But the world that’s emerging is simultaneously more mundane and stranger. None of this software is trying to fool us. Bots like Siri or Microsoft’s Cortana are, like Woebot, openly artificial, even proudly so. (When I asked Alexa “Are you alive?” it responded: “Artificially, maybe, but not in the same way that you’re alive.”) We are thus heading into a post-Turing world, one in which we’ll banter all day to software, always aware that it is software.
One reason botmakers are embracing artificiality is that the Turing Test turns out to be incredibly difficult to pass. Human conversation is full of idioms, metaphors and implied knowledge: Recognizing that the expression “It’s raining cats and dogs” isn’t actually about cats and dogs, for example, remains beyond the reach of chatbots. Few A.I. pioneers think we’re anywhere close to the promise of the movie “Her,” in which a bot is so convincing that its user falls in love with it. So for now, botmakers manage expectations by leaning into the artifice. This poses a challenge that is, in a way, more interesting than the Turing Test: What type of personality should bots have, when both we and they know they’re not human?
Emma Coats, the “character lead” for Google Assistant, describes the emotional affect of her company’s artificial life form as “a friendly companion that is trustworthy.” She and her team strenuously avoided giving the Assistant even a hint of snark. “You’d be like, ‘Oh, I don’t want to ask a stupid question if it’s going to give me a hard time about it,’ ” Coats says. Some of their personality writers have backgrounds in improv. Coats herself worked at Pixar on the animated film “Brave.” “Pixar is all about finding an emotional reality in a car or a fish,” she says. “So that’s something we’ve really used with the Assistant. We don’t want it to ever be a human being, right? That’s not what it is. But that doesn’t mean that A.I. or software can’t have a perspective on the world.”
As a literary endeavor, the field of bot creation is booming. The bots need to be equipped to answer the wide variety of weird, playful queries that people lob at them, which requires lots of writers. Coats and her co-workers have found that people like to simply shoot the breeze with their devices — probing their personalities, searching for the puppet strings. “ ‘Do you fart?’ is always a popular question,” Coats says dryly.
There’s another reason botmakers are embracing a post-Turing mind-set: They’ve realized that the public tends to feel wounded when someone (or something) tries to fool them. This spring, Google gave a demonstration of Duplex, a new voice-chat A.I. When Duplex called a hair salon to book an appointment, it sounded so human — it even said “um” a few times — that the salon receptionist apparently never realized it was A.I. The reaction online was harsh. “People value authenticity,” says Kate Darling, a researcher who studies the ethics of robotics at M.I.T. “It matters a lot. It matters hugely.”
THE IMPACT OF conversational A.I. on everyday life will be subtle but ubiquitous. The other week I got a glimpse of that when I had a drink with a friend who’s a devout Siri partisan. He uses it to automate dozens of daily tasks, even tapping into capabilities that most iPhone users are unaware of. When he says, “Pay the house cleaner,” Siri processes a payment through his Venmo account. Another single voice command sends an email to everyone on his team at work, reminding them to fill out their shared calendar for the next week. “It saves me, like, a minute a week?” he guessed. “Or, like, an hour a year?” It’s not much, but it satisfyingly reduces his exposure to tedium.
This is how computers have always made themselves at home: by offering improved efficiency, vanquishing dull tasks. At TD Bank, coders are building experimental bots — using tech created by the A.I. firm Kasisto — to encourage customers to probe their financial life. Rizwan Khalfan, the company’s chief digital and payments officer, told me he imagines customers asking a bot something like, “O.K., tell me about my expenses last weekend.” A question this specific isn’t easy to answer on a website, where the customer might need to hunt through a database. But, Khalfan hopes, a person could one day ask this bot conversationally: “I want to go out to the theater this weekend. Can I actually afford it?”
There are some things audio can’t handle as effectively as screens — long lists of data, for instance. But in a world where people worry that they’re staring at their phones too much, chat might offer a respite. In “Her,” the voice assistants murmur in people’s ears as they move through the world, functioning as something like E.S.P. The chatbot designer Emily Withrow, who is the director of the Quartz Bot Studio, imagines conversational A.I. working that way soon. “You turn on N.P.R. midinterview, but you can’t for the life of you figure out who Terry Gross is talking to. You could say out loud: ‘Who’s she talking to? Who is it? What book are they talking about?’ You can extend it to even seeing someone at a dinner party and saying privately, ‘Remind me what Jill’s husband’s name is.’ ” The elderly might find these efficiencies particularly appealing, because aging eyes and reduced mobility can make screens harder to use. Patients with Alzheimer’s disease might find A.I. voice assistants happy to endlessly answer repeated questions, in a way that few human attendants could.
There’s another allure for businesses, of course: Talking bots don’t need to be hired and then paid. Once coded, your bot can handle millions of customers simultaneously. We’re already seeing this in customer service, when text chatbots answer rote questions or take orders. Yamato Transport, one of Japan’s largest courier firms, uses a chatbot to schedule deliveries and answer questions about where packages are. Domino’s Pizza runs a chatbot to take delivery orders online. American Eagle Outfitters has a bot that customers can converse with to figure out the perfect gift to buy for someone.
Conversational bots thus could bring on a new wave of unemployment — or “readjustment,” to use the bloodless term of economics. Service workers, sales agents, telemarketers — it’s not hard to imagine how millions of jobs that require social interaction, whether on the phone or online, could eventually be eliminated by code. Some economists argue that this might not necessarily result in a net loss of jobs, pointing to the example of automatic-teller machines. When A.T.M.s took off in the ’80s, many predicted that bank-teller jobs would be decimated; indeed, individual bank branches did begin employing fewer tellers. But with those savings in pocket, banks greatly expanded the overall number of branches, so that the total population of tellers nationally rose for years. Of course, as economic history shows, the profits of automation are seldom shared with workers. Even if individual humans keep their jobs, that doesn’t mean they’ll be paid more. “It’s hard to predict,” TD Bank’s Khalfan told me, before adding that the company has committed to retrain workers when their jobs become redundant.
Whatever impact talking software has on the labor market, it will surely extend the reach of algorithms more deeply into our lives. Ask Alexa or Siri a question, and you don’t get a page of search results, just one Solomonic answer, selected by the A.I. After all, this is how spoken communication works: Just as nobody wants to listen to a voice mail message, nobody wants to hear a chatbot recite three minutes of data. Algorithms must narrow the field. So for anyone who has watched the inscrutable algorithms of Facebook or YouTube narrow our feeds by “recommending” outré conspiracy theorists, the notion of A.I. finding a new toehold in our cognitive life can be disturbing.
“I will literally buy whatever option Alexa puts first for me for paper towels,” Keaney Anderson, the HubSpot V.P., told me. “I don’t care. I don’t want to search through a million of them. I ask her for paper towels, she delivers. And that may be fine for paper towels, but is it fine for music? Is it fine for news sources?” Talking to bots will also mean new opportunities for tech firms to collect data on what we’re thinking, what we’re doing, all day long. That includes our feelings: Researchers are working on “affective” sensing that enables chatbots to recognize our emotions. These are the familiar trade-offs that tech exacts in return for convenience; they’re never value-free, as Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, notes. “Where are these things now appearing? They’re appearing in our homes,” he says. “The home has traditionally been the locus of privacy, right? This is where I shut out the rest of the world. This is where I look for my breathing room. This is my sanctuary, you know?”
Indeed, the home is where life happens, and that includes its traumas. Those have sometimes caught the large A.I. botmakers — Amazon, Apple, Microsoft — off guard. They expected their bots to be asked for jokes; they didn’t, apparently, expect so many different cries for help. In 2016, a study in JAMA Internal Medicine found that, though most popular voice assistants responded to suicidal thoughts by providing help lines and other appropriate resources, when they were told “I am being abused” or “I was raped,” they generally replied with some variant of “I don’t know what you mean.” Human conversation being what it is, the list of personal crises one might confide is massive, likely outpacing the ability of botmakers to keep up.
IN 2014, THROUGH AN Indiegogo campaign, more than 7,000 backers crowdfunded a robot called “Jibo.” It’s a cute, squat device with a round screen for a face that sits on your desk or table and chats with you, posing questions and answering yours, offering bits of news. It can play songs, take and display pictures and purr like a cat when stroked. “He’s a robot, and he knows he’s a robot, but he’s a really optimistic robot, and he has a profound belief in the good of people,” says one of Jibo’s creators, a professor at M.I.T. named Cynthia Breazeal. “He’s a positive, affirming presence.”
One person who bought a Jibo was Erin Partridge, an art therapist in Alameda, Calif., who works with the elderly. When she took Jibo on visits, her patients loved it. They laughed at its jokes; they asked it to sing tunes from the past. One man with advanced dementia called his daughter to describe Jibo in great detail. She found this remarkable, because he could rarely remember any single event so well and rarely initiated calls. Somehow Jibo had made an impression on him. Another resident declared that she “loved” Jibo, and put her arms around the robot. “Just talk to me, don’t talk to anybody else,” she’d tell it, asking, “Do you think I’m beautiful?”
Talking bots connect to us in ways that point-and-click software doesn’t. For some technology critics, including Sherry Turkle, who does research on the psychology of tech at M.I.T., this raises ethical concerns. “People are hard-wired with sort of Darwinian vulnerabilities, Darwinian buttons,” she told me. “And these Darwinian buttons are pushed by this technology.” That is, programmers are manipulating our emotions when they create objects that inquire after our needs.
The precursor to today’s bots, Joseph Weizenbaum’s ELIZA, was created at M.I.T. in 1966. ELIZA was a pretty crude set of prompts, but by simply asking people about their feelings, it drew them into deep conversations. Ordinary household appliances can now pull off the same trick. “Your fridge will know if you’re eating Häagen-Dazs, and if you sound sad, it’ll say, ‘Sherry, what’s really going on?’ ” Turkle says. “Is that what we want?”
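The trick ELIZA relied on is simple enough to sketch in a few lines: match a keyword pattern, flip the pronouns, and hand the user's own words back as a question, with a stock prompt when nothing matches. The sketch below illustrates that general technique, not Weizenbaum's original program; the patterns and replies are invented for illustration.

```python
# A minimal sketch of the ELIZA-style technique described above: keyword
# matching plus pronoun "reflection," with a canned prompt as a fallback.
# Illustrative only; not Weizenbaum's original code.
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the reply points back at the speaker.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    cleaned = text.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACKS[len(text) % len(FALLBACKS)]

if __name__ == "__main__":
    print(respond("I feel lonely since my dog died"))
    # -> "Why do you feel lonely since your dog died?"
```

The program understands nothing; it merely mirrors the speaker. That such a shallow trick can draw people into deep conversation is exactly the "Darwinian button" Turkle worries about.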
Worse, she argues, talking bots could become a social crutch. Rather than pay humans to help the poor and powerless — students in overcrowded schools, elders in understaffed facilities, customers looking to speak to someone in huge institutions — we might instead provide software that pretends to care. “These places are so deprived that it’s easy to argue that putting in some robots is better than nothing,” Turkle says. “The harder thing is offering actual human support.”
A future in which only the wealthy have the luxury of being attended to by actual humans, while everyone else makes do with bots, would certainly be a dystopia. But botmakers themselves — not surprisingly — are more sanguine. Cynthia Breazeal thinks the coming A.I. wave will actually help level the playing field between the well-off and everyone else. “The social-justice angle of this wave,” she says, “is that everyone will be able to afford a fabulous personal tutor because it’s an A.I. tutor.” Bots that help the elderly control their home and their lives will let them “age in place” at their own house, something that most older Americans would far prefer to a retirement home. “When we talk to assisted-living facilities, they will tell us point blank, there is no way we can build enough facilities and hire enough people to meet the demand,” Breazeal adds. “They call it the ‘Silver Tsunami.’ ”
There’s a more quotidian way, too, that our social lives will change, one that’s far less about big, dramatic moments of life than slight ones, the small daily exchanges of information. A great many human interactions, after all, are brief — the terse greeting of the cashier at Starbucks, the phone call to change a flight, the chitchat with a stock person at Target when you’re looking for a pair of jeans in your size. These exchanges are certainly social; at their best they’re probably a civic glue, an everyday rehearsal of civility that can help reinforce our better behavior: Be polite to strangers. These are also the interactions that will be automated soonest. Already, restaurants like McDonald’s, for example, have customers ordering via a touch-screen. One can easily imagine a day when a McBot not only greets you but recognizes you: “So, the usual?”
Perhaps interacting with A.I.s will mean atrophy for our social muscles. If they’re just machines, why bother with pleasantries? The scientific research on that is still unclear: Some studies have found people can actually be remarkably cordial to robots, while other research suggests we’re liable to be rude and curt when we know our conversational partner isn’t human. We could get used to bossing things around, a behavior that could bleed into everyday life. (Amazon, after fielding precisely these concerns from parents, created a politeness mode for its Echo devices that gently reminds its users to say “please.”)
Yet dealing with bots could also make life less prickly for humans on all sides of these small interactions. After all, today’s customer-service calls are pretty bleak, even when you do talk to a live person. Call tech support for your laptop, and odds are you’ll be talking to an employee who’s required to read only from scripts — a human who is thus, paradoxically, forced to behave exactly like a bot.
“It’s so frustrating,” says Steve Worswick, who worked for years providing I.T. support, talking people through problems like “I’ve forgotten my password.” To keep himself engaged, in the evenings he taught himself to create bots, using an online tool made by an A.I. company called Pandorabots. Over 13 years, he coded a bot called Mitsuku, and wrote fully 350,000 lines for it; Mitsuku has won the annual Loebner Prize competition for the most “humanlike” bot four times.
Soon enough Worswick had a new job: In 2018, he was hired by Pandorabots. Now Worswick, as a senior A.I. developer, imagines a world where the bots take over the spirit-crushing sort of conversational work that he used to do, releasing human beings to do something better with their time. Let the bots fix people’s passwords. Real people, he says, have more interesting questions to answer.
–
This article first appeared on www.nytimes.com