A FEW DAYS ago, I finally bought a pair of AirPods. Apple’s funny-looking ear-computers have been available for about a year, provided you were willing to order them online and wait six weeks for delivery. But last week, they started appearing in brick-and-mortar shops. When I saw that I could snag a pair at my local Apple store, I decided to take the plunge.
I worry that makes me sound like an impulsive person. I’m not. At $159, AirPods are expensive. It takes more than the launch hype of a mediocre-sounding, awkwardly fitting pair of ear-dongles for me to part with that kind of cash.
So why part with it? In short, last week saw the debut of several products that make AirPods a lot more interesting to anyone curious about the way humanity will interact with gadgets in the future: powerful new iPhones, an LTE-enabled Apple Watch, and three upgraded operating systems, all of which have been overhauled to emphasize Siri, Apple’s virtual assistant. Coupled with AirPods, these products constitute the first cohesive ecosystem designed for portable, personal, conversational computing—one that could help reclaim humanity’s fingers and eyeballs from the screens that dominate their lives.
That ecosystem brings us all a step closer to the world depicted in the movie Her, in which characters like Theodore Twombly, played by Joaquin Phoenix, commune with their virtual assistants in powerful and productive ways, effortlessly, through speech. The film depicts AI as hyper-useful and minimally invasive, integrating seamlessly with users’ lives throughout the day to help them get shit done. Which, funny enough, is exactly how Apple depicts Siri in a recent ad that features Dwayne Johnson using Siri to check his calendar appointments while pruning a bonsai and to listen to his email while painting the Sistine Chapel, among other things.
In reality, that system needs work; Siri still disappoints more often than it delights. But under tightly controlled conditions, it is now possible to visit the conversational future we’ve all been waiting for. The good news is that Siri sounds more human than ever and it’s not creepy at all.
The bad news is that the capital-d Dream of a virtual assistant that manages your digital life while you live your real one is probably a lie. The real problem with voice assistants isn’t that they’re underpowered, or that their neural nets aren’t sophisticated enough to intuit our requests. It’s that user interfaces will always demand your attention—whether they’re graphical, conversational, or, hell, telepathic.
I KNOW THIS because for the past week, I’ve been using my AirPods to interact with Siri. Not to create timers, launch apps, or add things to my shopping list, but to, you know, get shit done.
In the morning, I slip an AirPod into my ear (just one), double tap it, and ask Siri to read me my emails while I make breakfast, recite the day’s schedule while I put away dishes, organize my to-do list while I feed the dog, or help me field and respond to text messages as I pack up my bag and walk to the bus stop. Siri’s voice recognition is now strong enough, its neural nets sharp enough, and its access to my personal information complete enough to handle this small handful of tasks quickly and consistently.
The dynamics of our conversations vary: When Siri rattles off emails, it does all the talking. When I’m adding chores and deadlines to my to-do list, I dominate the conversation. And when Siri reads me my text messages, it also asks me if I’d like to dictate replies, so there’s a lot of back and forth. Of all the things I can use Siri to accomplish on my phone, fielding texts is the task that feels most futuristic. Not even The Rock volleys with Siri.
But I am not Dwayne Johnson, and I am definitely not Joaquin Phoenix. The former uses his virtual assistant to dominate his day with conviction, the latter with intuitive ease. Me? I spend my mornings with Siri blundering around my apartment, as I struggle to divide my attention between my digital and physical lives.
While listening intently to an email from a coworker, I accidentally grab the balsamic vinegar instead of the olive oil while preparing breakfast. While organizing my backpack for work, I pause to reply to a text message and struggle to recall what I wanted to say. While feeding the dog, I realize I’ve retained exactly zero information about the last several items on my day’s to-do list.
When I relate my experience to psychologist Harold Pashler, director of the UC San Diego Learning, Attention, and Perception Lab, he’s amused but unsurprised. “We see this kind of interference in even the simplest tasks we create in the lab. These kinds of cognitive limits are unavoidable and unmodifiable.”
For all its processing power, there are many tasks your brain prefers to handle one at a time. “Cognitive psychology research has shown that there is almost always a performance cost from multi-tasking,” says Fontbonne University psychologist Jason R. Finley, who studies the relationship between technology and human cognition. “Even when it seems like we’re able to do two tasks at once, we’re likely only shifting the focus of our attention rapidly back and forth between the two tasks, and that comes with a cost to speed and accuracy.”
Some things are easier to do simultaneously than others. “There are modality effects, such that two auditory tasks will interfere with each other more than an auditory and a visual task. So listening to Siri while doing the dishes wouldn’t be as distracting as listening to Siri while also trying to listen to the radio,” Finley says.
Our brains seem to have an especially hard time selecting responses simultaneously. “So something like choosing which of the condiments in your pantry you’re going to grab, or making some choice about what you’re going to say or how you’re going to say it. Even if those things are simple and quite well-practiced, while your brain is doing one of them, it won’t do the other,” Pashler says.
Some of the strongest evidence of our inability to mix auditory processing with daily tasks comes from studies of distracted driving—and it’s basically terrible news all the way down. As you probably know, talking on the phone while driving significantly increases your risk of crashing. Like, by a factor of four. What you probably don’t know is that most studies suggest you’re at greater risk of crashing even when you talk hands-free, and practicing day in, day out doesn’t seem to make a meaningful difference.
Phone conversations induce a form of what psychologists call inattentional blindness—when you become so preoccupied with one task that you fail to notice other stimuli. Probably the most famous example of inattentional blindness is the invisible gorilla experiment, in which researchers asked participants to watch a video of people passing a basketball and count the number of passes made by the ones wearing white. In the middle of the video, somebody walks into the frame wearing a gorilla suit, does a little dance, and exits stage right. A shocking number of people fail to notice the gorilla.
Life is full of decisions and distractions. Walking around your house, getting ready for the day, navigating a city—living your life, basically—imposes frequent and intensive demands on your attention, Pashler says. We’re accustomed to our smartphones competing for that attention in discrete chunks—and as countless texting-while-walking videos have shown, it is impossible to look at your phone and at the world around you at the same time.
Interacting with a conversational interface might tempt you to do other things simultaneously, but just because your eyes are free doesn’t mean your brain is. “Yes, you’re doing other things at the same time, but there’s a cost to that,” Pashler says. A conversational interface spreads that cost out over time. “It’s competing for little moments of time, ideally when you have adequate free resources, but it will also gobble up resources when you need them for something else. And at times like that, it will probably diminish the thoughtfulness with which you do those other things.”
“It’s funny,” Pashler says. “I pay no attention to this literature on mindfulness. I barely know what it is, and I have a sort of hidden contempt for it. But I do feel like multitasking is the opposite of mindfulness, isn’t it? I mean, you can often successfully combine two activities, but you do it at the expense of your experience of each. It’s like you weren’t fully there, for either activity. If you’re listening to your email while making your breakfast, there’s a cost there.”
He’s right. My mornings this past week have been riddled with mishaps and miscalculations—probably more than I realize. Interacting with Siri is nothing like passively listening to a podcast or radio program. And remember: It’s not currently possible to interrupt Siri and ask it to “wait, wait, sorry, repeat that last message? I was distracted.”
Besides: Even if it were possible to interrupt Siri and bend it to your distracted will, fumbling around the real world while blundering through the digital one is not the hands-free, intuitive, more intentional future of our dreams. It’s attentional slavery by another name.
–
This article first appeared in www.wired.com