How to Survive the A.I. Revolution
A human-centered approach to artificial intelligence envisions a future where people and machines are collaborators, not competitors.
In 1950, computing pioneer Alan Turing predicted that in a few decades, computers would convincingly mimic human intelligence — a feat known as passing the Turing Test. Fast-forward to earlier this year, when a Google software engineer announced that his conversations with the company’s AI-powered chatbot had convinced him that it had become “sentient.” “I know a person when I talk to it,” he told the Washington Post. (Google said that he was “anthropomorphizing” the bot and fired him.)
As AI technologies such as natural language processing, machine learning, and deep learning rapidly evolve, so does the idea that they will go from imitating humans to making us obsolete: Elon Musk has warned that a superintelligent machine could “take over the world.” The fantasy — or nightmare — that people and AI will become locked in competition is remarkably enduring. It is also distracting us from AI’s true potential.
So argues Erik Brynjolfsson, a professor of economics and of operations, information, and technology (both by courtesy) at Stanford Graduate School of Business and a fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). In a recent paper, “The Turing Trap,” Brynjolfsson contends that too much attention has been paid to the idea that algorithms or robots will become substitutes for people. Instead, he believes that shifting our focus to envision ways that AI can work alongside people will spur innovation and productivity while unlocking economic benefits for everyone.
Using AI to automate human intelligence and labor is “an incredibly powerful and evocative vision, but it’s a very limiting one,” Brynjolfsson says. The alternative is augmentation: using AI to complement people by enabling them to do new things. “Both automation and augmentation can create benefits and both can be profitable,” he says. “But right now a lot of technologists, managers, and entrepreneurs are putting too much emphasis on automation.”
Beyond the set of tasks that people can do and the limited set of tasks that can be automated is a much larger range of work that we could do with assistance from machines — the universe of augmentation. With advances in AI, we could simply mimic humans more closely than ever. Or, Brynjolfsson says, people could take a more expansive view of AI where “they’ll be able to do a lot more things.”
Looking Beyond Automation
Other researchers who are thinking critically about the future of this transformative technology are also convinced that it must go beyond automation.
“The idea that the entirety of AI is a field aimed toward automation is actually a bit of a misconception,” says Fei-Fei Li, the codirector of HAI and a professor of operations, information, and technology (by courtesy) at Stanford GSB. She says we need to “tease apart the hype” surrounding AI and look at its broader applications, such as deciphering complex data and using it to make decisions as well as actuating vehicles and robots that interact with the world.
Li thinks automation can play an important role in protecting people from harm in jobs like disaster relief, firefighting, and manufacturing. It makes sense for machines to take on tasks where “the very biology of being human is a disadvantage.” But, she says, “there’s so much more opportunity for this technology to augment humans than the very narrow notion of replacing humans.”
Machines have been assisting people and replacing their labor for centuries, explains Michael Spence, an emeritus professor of economics and former dean at Stanford GSB. Yet the current digital wave is different from the wave of mechanization that defined the Industrial Revolution. Unlike their 19th- and 20th-century predecessors, which required constant human intervention to keep running, AI tools can function autonomously. And that, Spence warns, is taking us into “uncharted territory.”
“We have machines doing things that we thought only humans could do,” he says. These machines are increasingly supervised by other machines, and the idea of people being taken out of the loop “scares the wits out of people.” The scale of economic disruption that AI could cause is difficult to predict, though according to the McKinsey Global Institute, automation could displace more than 45 million U.S. workers by 2030.
Jennifer Aaker, PhD ’95, hopes that AI will transform the way we work — for the better. Aaker, a behavioral scientist and a professor of marketing at Stanford GSB, cites a recent survey by Gartner in which 85% of people reported higher levels of burnout since the pandemic began. Can AI help alleviate disconnection and dissatisfaction on the job? “The increasing amount of data from the last couple of years will make this question more pressing,” she says.
For the past three years, Aaker and Li cotaught Designing AI to Cultivate Human Well-Being, an interdisciplinary course that explored ways to build AI that “augments human dignity and autonomy.” Aaker believes that if augmentation can increase growth, education, and agency, it will be a critical way to improve people’s happiness and productivity. “We know that humans thrive when they learn, when they improve, when they accelerate their progress,” she says. “So, to what degree can AI be harnessed to facilitate or accelerate that?”
Augmentation in Action
Many of the potential uses of artificial intelligence have yet to materialize. Augmentation, however, is already here, most visibly in the explosion of AI assistants everywhere from dashboards and kitchen counters to law firms, medical offices, and research labs.
The benefits of augmentative AI can be seen in the healthcare industry. Li mentions one of her recent favorite student projects in the course she coteaches with Aaker, which used AI to prevent falls, a common cause of injuries in hospitals. “Patients fall or have rapidly deteriorating conditions that go undetected,” she says. Yet it’s not feasible for a nurse or caregiver to constantly monitor people who are at risk of falling. As a result, “there are procedural errors, dark spaces. How do you know a patient is about to fall? These are things you can’t do labs on.” Smart-sensor technology can give healthcare providers an “extra pair of eyes to augment the attention of human caretakers and to add information and to alert when something needs to be alerted.”
AI can also make short work of necessary yet tedious tasks. Spence mentions how “pure augmentation” is helping doctors by using machine learning to sift through mountains of medical literature. “It can pick off, with reasonable accuracy, the articles that are particularly important for a specific doctor with a specific specialty patient.” Similarly, Aaker cites a project from her course with Li where nurses and doctors used an AI tool to process paperwork, allowing them to spend more time connecting with patients. “Imagine how that frees up medical professionals to do the work that inspired them to get involved in the field in the first place?”
That may be one of the most compelling selling points for augmentation: It liberates people to focus on things that really matter. Aaker cites AI tools that help around the house. “Parents can get burdened by household tasks,” she explains. “What the AI is doing is removing the boring or useless types of tasks so that parents can spend time in ways that are more meaningful.”
Machine learning tools that can quickly digest large amounts of data are widely available and are being employed to inform decision-making in medicine, insurance, and banking. In many of these cases, AI is not the ultimate authority; instead, it is a tool for quickly recognizing patterns or predicting outcomes, which are then reviewed by human experts. Keeping people in the loop can ensure that AI is working properly and fairly and also provides insights into human factors that machines don’t understand.
This type of assistive technology, Li says, “is a win-win. AI is not taking away from the human element, but it’s an enabler to make human jobs faster and more efficient.”
Defining AI’s Values
Building a future where AI boosts human potential requires leadership from the people who will be overseeing its implementation. Before business leaders can embrace augmentation, Li sees it as imperative to educate them about “the unintended consequences” of the tech they’re adopting. One of HAI’s main purposes is to help business leaders think through the big questions surrounding AI: “How it should be guided, how it should be governed, and how it reflects society’s values.”
“Those things are a bigger part of the challenge than just getting the state-of-the-art machine learning algorithm,” says Susan Athey, PhD ’95, a professor of economics at Stanford GSB and an early adopter of machine learning for economic research. But these questions of governance and ethics can’t be left entirely to AI developers. “Universities are putting out thousands of engineers every year to go and build these systems that are affecting our society,” Athey says. “Most of their classes don’t get to these topics.”
That makes it all the more urgent that business leaders — and business students — develop a framework to guide real-world applications of AI. “That framing is not going to come from a typical master’s degree holder in engineering,” Athey says. “It’s going to have to come from businesspeople, from those with a background in social science, ethics, or policy — but they need to understand the technology deeply enough to do the framing.”
For now, many corporate leaders are figuring out how AI can quickly boost profits. “There’s a gold rush going on right now about ways to apply these incredibly powerful machine learning techniques,” Brynjolfsson says. While there have been incredible advancements in AI, “the big gap is in getting the economics and business side to catch up. I’m trying to get my fellow economists, my fellow business school colleagues, managers, and entrepreneurs to figure out new ways to implement new business models. How can we do this so it’s consistent with our values?”
Athey says that campus institutions such as HAI and Stanford GSB’s Golub Capital Social Impact Lab, which she directs, can provide essential guidance for these discussions. “Businesses are going to make investments that align with their bottom line,” she says. “But Stanford can play a role if we do the basic R&D that helps people use AI in an augmented way that can influence the trajectory of industry.”
Considering a “diversity of values” is critical to determining the direction AI will take, Li says. “It’s about including people who have been raised on something more than an engineering education and sci-fi culture,” she says. “Our field needs people who want to impact real people in meaningful ways — not merely solve problems in the abstract.”
Payoffs and Progress
Even if we look past the hyperbole about AI run amok and accept the argument that it shouldn’t be viewed simply as a substitute for human capabilities, what’s the incentive for companies to pursue augmentation if full automation is easier and cheaper?
Automation can be used “to replace human labor and drive down labor costs,” Brynjolfsson acknowledges. While that can help the bottom line, it is “not where the big payoff is.” Augmentation clearly offers greater economic benefits to employees who wouldn’t be swapped out like old parts. But it would also provide expanded opportunities and options for employers and consumers.
He notes that technology has already boosted living standards enormously, mainly by creating new capabilities and products rather than by making existing goods and services cheaper. Instead of rushing to automate jobs and tasks, Brynjolfsson hopes business leaders will think harder about innovation and ask themselves, “What new things can we do now that we could never have done before because we have this technology?” Answering that question, he says, will “ultimately create more value for the shareholders and for all of society.”
Spence also believes that augmentation would lead to more inclusive growth, while automation would worsen current economic trends. Although the past era of mechanization had an initial “pain period” as workers scrambled to adopt new skills, it “contributed to the productivity and the earnings of what has come to be called the middle class.” While the people who owned the machines got rich, income was more widely distributed than it is now. “There’s a fair amount of evidence that the digital era has contributed to the polarization of jobs and income,” Spence says. Automation would further shrink the proportion of GDP going to the middle class and working class, leading to even more concentration of wealth. In that scenario, Spence says, “inequality worsens.”
He agrees that a more creative approach to AI is needed. “Consciously biasing the evolution and development of AI in the direction of augmentation is the right way to think about it,” he says. This will mean “using AI and digital tech to bring key services to people who now have limited access to them,” such as the 5.5 billion people living in developing countries. “There are values and policies that affect these incentives and so you want to try to operate on them in such a way that the benefits are broadly available to people. Not concentrated, say, on the owners of capital, or even more narrowly on the owners of some sort of digital capital.”
Those incentives aren’t in place yet. Current tax policies favor companies that install machines instead of hiring workers, Spence explains. “If you shifted the tax system so it was less favorable to capital and more favorable to employing people, you’d probably get more focus on people and maybe more focus on augmentation as well,” he says.
Brynjolfsson agrees. “The market does not automatically get the balance right in many ways. Our policymakers have put their thumb on the scale to steer too much investment toward mimicking and automating more jobs and using capital merely for labor substitution,” he says. That could lead to a situation where AI brings prosperity to a few and disempowers the rest — the Turing Trap.
We’re not there yet. Artificial intelligence is just beginning to have an impact, Brynjolfsson says. The challenge is to chart a path to a future where people remain indispensable. “Most progress over the past thousands of years has come from doing new things that we never did before — not from simply automating the things that we were already doing.” That will require us to tap into a superpower that can’t be programmed into a robot: imagination.
—
This article first appeared on www.gsb.stanford.edu