The Robots Are Coming

For our jobs. Our military. And our current way of life. That is, unless a handful of local AI wizards can stop them. —By David Talbot



On a July afternoon in Providence, more than 30 of the nation’s governors gathered in a convention-center ballroom to learn about the future from a man who holds much of it in his hands: Tesla and SpaceX CEO Elon Musk.

After a glowing introduction by Nevada Governor Brian Sandoval, who worked with Tesla to bring a battery-building “gigafactory” to the state, Musk ascended the purple-lit stage. Clad in a fitted black suit with no tie, the South Africa–born tech titan, who’s about as close to a modern-day Howard Hughes as we’re likely to see, began the question-and-answer session on a positive note, explaining how his electric cars and other ventures could help bring about a better world. “I think that the thing that drives me is I want to be able to think about the future and feel good about that,” he said. But when the topic of artificial intelligence came up, his message took a darker turn.

“Somebody asked me to ask you this,” Sandoval said, gesturing toward the audience. “Are robots going to take our jobs, everybody’s jobs, in the future? How much do you see artificial intelligence coming into the workplace?”

Musk paused briefly. “I have exposure to the very most cutting-edge AI,” he said, “and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react…. AI is the rare case where I think we need to be proactive in regulation instead of reactive, because I think by the time we are reactive in AI regulation, it’s too late.” Then he issued a dire warning: “AI is a fundamental, existential risk for the existence of human civilization.”

As for the question of job-stealing robots? “There will certainly be a lot of job disruption, because what’s going to happen is that robots will be able to do everything better than us. I mean all of us,” he continued, then sighed. “Uh, yeah. I’m not sure exactly what to do about this.” A collective chuckle rippled across the audience, but Musk wasn’t laughing: “This is really the scariest problem to me.”

He’s hardly the only one who’s worried. Theoretical physicist Stephen Hawking, for instance, has also warned about the future dangers of artificial intelligence, or AI, shorthand for computation that mimics human learning and problem solving (but is often also used to describe advanced algorithms and tools for mining enormous data sets). Spawned as an academic field here in New England during the 1950s, AI is now replacing factory workers with robots and call-center employees with software; helping spread propaganda and partisan memes; and could one day produce deadly autonomous weapons that operate without any human oversight. At the same time, it’s also doing plenty of positive things: Autonomous vehicles, for instance, could someday prevent needless carnage (right now, about 1.25 million people die on roadways worldwide every year). Drug development and medical diagnostics, meanwhile, could be revolutionized by the analysis of vast troves of genomic and personal health data.

We may have helped create the AI monster here in the Hub, but it turns out we’re also the ones fighting to keep it on a leash, with a Justice League of passionate geeks working furiously to ensure the technology is used for the public good. The MIT Media Lab, for example, is building a system that allows you to control your own Facebook news feed rather than leaving it up to the mega company’s closed algorithms to decide what you see and in what order. Meanwhile, professors at Harvard Law School are working on ways to eliminate unfair bias as algorithms seep into the justice system, and other local leaders are trying to coordinate an international treaty that preemptively bans fully autonomous combat machines.

Some of this is happening under the umbrella of a new $27 million research fund, administered in part by the MIT Media Lab and Harvard’s Berkman Klein Center for Internet & Society (where I am a fellow). They’re starting to bring together technologists, lawyers, ethicists, and others to get ahead of the negative effects—and realize the broader societal promise—of AI by asking some very serious questions: Who, if anyone, reviews whether machines work as intended? What data fuels the machines’ algorithms, and who designs and uses those algorithms? And who ultimately wins and loses in the battle to automate every aspect of our lives?

 

When it comes to the history of artificial intelligence, Cambridge looms large. Its outsize influence on the field dates to the 1940s and 1950s, when Norbert Wiener, an MIT math professor, theorized that intelligent behavior could be replicated by machines. Wiener also helped develop theories of robotics, automation, and computer control systems before anyone had coined the term AI. Meanwhile, a svelte young man named Marvin Minsky walked into Harvard Yard as an undergrad in the late 1940s. These days commonly referred to as one of the founding fathers of AI, the World War II vet found himself fascinated by the processes underpinning intelligence and thinking—so much so that after graduating with honors, he moved on to Princeton University to earn his PhD in mathematics and focus on how computers, at the time an emerging technology, might mimic the processes underlying human learning and decision-making. It was a good few years: Minsky built the world’s first neural network simulator, called SNARC, soon after matriculating, and then married a pediatrician named Gloria Rudisch.

But it wasn’t long before the computer scientist journeyed back north, first to Harvard. In 1955, he and a Dartmouth assistant professor, John McCarthy, proposed the Dartmouth Summer Research Project on Artificial Intelligence, a workshop held the following summer that cemented AI (McCarthy coined the term) as an academic field of study. At the heart of the project, spelled out in its proposal, was the idea that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” By 1959 Minsky and McCarthy had cofounded MIT’s AI Laboratory and emerged as leading theorists on ways to make that actually happen.

As time went on, the visions of Wiener, Minsky, McCarthy, and other academics slowly took shape in prototypes and lab presentations around the country and the world. IBM researchers, for instance, demonstrated the idea of “machine learning” with a computer program that got better at checkers through self-play. (That basic idea, that software can “learn” through experience and improve as more data becomes available, would eventually underpin everything from speech recognition to image understanding.) Research groups around the country worked on building repositories of subject-matter knowledge, as well as software and hardware that could resemble networks of human neurons.

Progress was slow, and through much of the 1980s AI remained mostly a research topic. Since then, however, the computing power and storage needed to handle and process data have become cheap and widely available, enhanced by the rise of the Internet. As the need for new business tools has emerged—such as analytics that assess risk for insurance companies and banks—investment in the field has continued to soar.

Minsky died in Boston last year, and despite his belief that people would someday create machines that mirror our own intelligence, today the available technologies are still far cruder than a human brain. But for specific tasks, certain applications of AI are already making life easier. Speech recognition is now on nearly every smartphone (thank heavens for driver-friendly voice-to-text messaging). IBM’s Watson computer beat two human Jeopardy! champions in 2011. Computer vision systems detect real-world highway lane markings and obstacles well enough to bring us to the brink of a self-driving-car revolution. Dexterous, teachable robots build cars and pluck pallets at distribution centers. Slowly, the machines we’re building are coming to life. And, in some ways, taking on a life of their own.

 

In an MIT lab awash in atrium light, a group of researchers is preparing to do battle against some of those very machines: the algorithms that Facebook and Twitter deploy to shape everything we see on social media, for better or ill.

It’s a warm day in late August, and Rahul Bhargava, a soft-spoken research scientist, emerges from the MIT Media Lab’s Center for Civic Media to talk to me about the problems these powerful algorithms present—and what he and his colleagues are doing to address them. Perched on a purple sofa next to tables adorned with antique adding machines, Bhargava explains how our Twitter and Facebook posts are sorted and ranked, augmented with suggestions, and interspersed with advertisements, all according to what the companies have determined should interest us. It’s closed, proprietary, and automated, and there’s nothing the user can do about it. “One of the problems is that we can’t control what we see—yet what we see has a lot to do with what we believe,” Bhargava explains.

Boston techies to the rescue: Rahul Bhargava (right) and his colleague Alexis Hope have created a technology that lets you decide which posts are prioritized—or minimized—in your Facebook stream. / Photograph by Ken Richardson

Recent news has been sobering. In September, Facebook admitted to the U.S. Senate Intelligence Committee that Russian agents seeded Facebook with phony accounts, false postings, and at least 3,000 known political attack ads in the run-up to the 2016 election, amplifying right-wing misinformation campaigns. And the truth is, all users are at the mercy of an algorithm: If you’ve been conversing with a given “friend,” you might see more from that person; and if something viral hits the ’net, you might get spammed by many “friends” sending the same thing.

But Bhargava and his colleagues Jasmin Rubinovitz and Alexis Hope are, in a way, raging against the machine, breaking open those algorithmic black boxes with a technology called Gobo (a reference to stage-lighting filters). Launched this month, it’s essentially an app that uses a programming interface to gain access to Facebook and Twitter postings made by you or other Gobo users. The result: You get to decide which posts are prioritized—or minimized—in your stream. The team has established six control sliders, including ones for “rudeness” (using technology codeveloped by Google and the New York Times that’s meant to clean up comment strings); for the gender of the poster (otherwise known as the “mute men” feature); for brands, making your news feed commercial-free; and for the range of political views to which you’re exposed. “Advanced algorithms are mainly the province of programmers within these big companies, and demand really close scrutiny,” says Ethan Zuckerman, director of MIT’s Center for Civic Media and an architect of Gobo. “What we are now seeing is a group that has strong Boston roots coming up with ways to let us review, interrogate, and build alternatives to these technologies.” In other words: Zuckerman and his colleagues are using algorithms to wrest control from social-media monoliths and put it back in your hands.
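
Gobo’s own code isn’t reproduced here, but the basic slider idea (fetch posts, then keep or hide each one according to the user’s settings) can be sketched in a few lines of Python. The post fields, slider names, and thresholds below are invented for illustration; they are not Gobo’s actual implementation.

```python
# A minimal sketch of slider-style feed filtering, loosely inspired by the
# Gobo approach described above. The Post fields, slider names, and scoring
# heuristics are hypothetical illustrations, not Gobo's real code.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    author_gender: str      # e.g., "man", "woman", "unknown"
    is_brand: bool          # posted by a commercial account?
    rudeness: float         # 0.0 (civil) to 1.0 (toxic), from some classifier
    text: str

def filter_feed(posts, sliders):
    """Return only the posts allowed by the user's slider settings."""
    kept = []
    for post in posts:
        if post.rudeness > sliders.get("max_rudeness", 1.0):
            continue                      # hide posts above the rudeness threshold
        if sliders.get("mute_men") and post.author_gender == "man":
            continue                      # the "mute men" toggle
        if sliders.get("hide_brands") and post.is_brand:
            continue                      # strip commercial posts
        kept.append(post)
    return kept

# Example: a commercial-free feed with rude posts filtered out
feed = [
    Post("acme_corp", "unknown", True, 0.1, "Buy our new gadget!"),
    Post("alice", "woman", False, 0.0, "Great civic-tech meetup tonight."),
    Post("bob", "man", False, 0.9, "You are all idiots."),
]
settings = {"max_rudeness": 0.5, "hide_brands": True}
for p in filter_feed(feed, settings):
    print(p.author, "-", p.text)
```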

 

Turn on the TV these days, and there’s a good chance you’ll meet Molly, the kid who never stops inventing machines to do annoying human chores better than humans. She figures out how to deliver Girl Scout cookies to neighbors using a pulley system operated from her bedroom window, and how to sweep the floor with two dusters attached to a pink toy car. And as she grows up into a young GE employee, she reprograms robots to inspect newly manufactured jet-engine fan blades. “It’s running much faster now,” she says to her supervisor with a smile. “See?”

That’s General Electric’s latest ad, and it’s meant to show the promise and possibility of automation (and the potential of young female technologists). In fact, the company is already working on AI products for medicine, via a newly announced partnership between GE Healthcare and Partners HealthCare. The initiative aims to automate work across the industry, including the analysis of thousands of medical records, reducing the need for humans to do it. Indeed, the same automation and IT-related advances that are drawing technologists to GE’s Fort Point headquarters could also be taking workers away from conventional manufacturing, service, and image-analysis jobs. Wendy’s, for example, will show some of its employees the door when it installs 1,000 self-serve kiosks—which won’t show up late, call in sick, or demand a paycheck—by the end of the year.

At MIT’s Sloan School of Management, professor Erik Brynjolfsson and research scientist Andrew McAfee have documented a “hollowing out” of the middle class as jobs dry up in factories and elsewhere, arguing that AI and related technologies are partly to blame for the past two decades of unusually slow job growth relative to national productivity. “We’ve been warning about how technology is racing ahead and leaving people behind,” Brynjolfsson says. “A lot more people are coming on board with that view.”

As low-skill jobs are killed off faster than new ones are created, a college degree has become more important than ever. And that’s where the Boston startup AdmitHub comes in. On a September afternoon, I rode across the Charles River to Allston, where a low-slung warren of offices along Western Avenue is home to the “I-Lab,” which offers Harvard affiliates cheap workspace to start new businesses. There I met Andrew Magliozzi, cofounder and CEO of AdmitHub, which is using AI not to further enrich elites or to gather marketing data, but to create a text-messaging platform that guides kids through the bureaucratic steps involved in moving from high school to college.

It’s a problem that needs addressing: Every spring, 2.5 million students are admitted to college in the United States, according to AdmitHub, but by September, up to 350,000 fail to enroll, because they couldn’t navigate the financial aid process, lacked support at home, or simply got cold feet. These are the kids AdmitHub tries to help. Its AI system allows students to ask questions via text message, and get answers—generated through so-called natural language processing (a version of the systems IBM used to win Jeopardy!)—to help them with the often-daunting task of filling out financial aid forms, obtaining code numbers for submitting test scores, and answering everyday questions about courses, housing, or counseling programs. At times the system even takes charge, weighing in with a friendly multiple-choice question such as “What about college has you most worried?” When kids answer (with choices like “fear of being away from home” or “finding a way to pay tuition”), the system offers emoticon-decorated encouragement and links to resources. If the system doesn’t have the answer, human employees eventually step in. “Every click and interaction can be a mechanism for training the system to be more accurate and better through reinforcement learning,” Magliozzi says.
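
AdmitHub’s natural-language system is proprietary, but the general pattern described above (match an incoming text against known questions, answer when confident, hand off to a person otherwise) can be sketched simply. The questions, answers, and similarity threshold below are made up for illustration.

```python
# A toy sketch of the chatbot pattern described above: match an incoming
# text against known questions, answer if confident, otherwise flag it for
# a human. The questions, answers, and threshold are invented; AdmitHub's
# actual NLP system is far more sophisticated.
from difflib import SequenceMatcher

FAQ = {
    "How do I fill out the FAFSA?":
        "Start at fafsa.gov. You'll need your (or a parent's) tax info.",
    "When is the housing deposit due?":
        "Housing deposits are typically due by May 1. Check your portal.",
    "What code do I use to send my test scores?":
        "Each college has a score-reporting code; look it up on the testing site.",
}

def answer(question, threshold=0.55):
    """Return the best FAQ answer, or a hand-off message for a human."""
    best_q, best_score = None, 0.0
    for known_q in FAQ:
        score = SequenceMatcher(None, question.lower(), known_q.lower()).ratio()
        if score > best_score:
            best_q, best_score = known_q, score
    if best_score >= threshold:
        return FAQ[best_q]
    return "Good question! Let me check with a real person and get back to you."

print(answer("when is my housing deposit due??"))
print(answer("can my dog live in the dorm"))
```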

Score one for using AI for the public good. Indeed, a randomized controlled study at Georgia State University—where some 70 percent of students are eligible for Pell Grants—found that kids using the system (there named after the college mascot, Pounce) did far better at completing bureaucratic steps. The impact was substantial: Isolating the effect of the app, more than 100 additional students (3.9 percent) actually enrolled. “A lot of first-gen college students feel apprehensive asking an admission officer something that they think might sound stupid,” Magliozzi says, “but they’ll ask way more questions of Pounce.”

AdmitHub isn’t the only local organization thinking about ways in which AI can be leveraged to create new opportunities; in fact, in 2014, MIT’s Brynjolfsson and his colleagues established the Initiative on the Digital Economy to help organizations and businesses do just that. As part of the effort, just last year they launched the Inclusive Innovation Challenge, offering more than $1 million in prizes to technology companies around the world that “strive on behalf of working people at the middle and base of our economy.” AdmitHub was a $150,000 winner in this year’s contest. And some of the other finalists are working to do similar things in developing countries, such as Tuteria, an online platform in Nigeria that received $35,000 to help match qualified instructors with people seeking skills. “Our economy is undergoing a radical transformation,” Brynjolfsson explains. “But we can turn technology to our advantage and ensure that more people benefit from rapidly advancing innovations.”

 

Blasted holes in walls and roofs. At least 25 civilians killed, and many more injured, in three U.S. strikes alone. That was the landscape Human Rights Watch senior researcher Bonnie Docherty faced when she traveled to Afghanistan in 2002 to interview survivors of cluster munitions, which disperse hundreds of smaller bomblets that can kill and maim innocent people and then linger like land mines. At the time, Docherty was unsure if countries would ever agree to bring an end to these dangerous weapons. Yet six years later, she helped negotiate a treaty, the Convention on Cluster Munitions (CCM), that banned them completely. The treaty’s adoption demonstrated the potential for international law to reduce the unacceptable harm that certain weapons inflict.

Now Docherty has turned her attention to another class of weapons that could endanger civilians—fully autonomous ones, powered by AI. She and a host of ethicists, advocates, and legal scholars worry about the possibility of a future in which wars involve robots that might have trouble discriminating between ordinary people and combatants—or be vulnerable to misuse by rogue regimes. “Fully autonomous weapons would face major obstacles in complying with existing international law,” Docherty explains, “and would cross a moral red line by making life-and-death decisions on the battlefield.”

Docherty would like to see a CCM-style treaty banning autonomous war robots—airborne, tracked, wheeled, or stationary machines that could choose targets and fire upon them without any human intervention. Now a lecturer at Harvard Law School as well as a senior researcher at Human Rights Watch, Docherty has made the case for a ban through numerous publications and lobbied governments at United Nations diplomatic meetings to take action—an effort that will continue this month at a major UN disarmament meeting in New York City. She emphasizes that the time to ban new forms of weapons is before they are developed and massively deployed. And when it comes to fully autonomous armaments, that time may not be too far in the future: Already, sentry robots guard the South Korean side of the demilitarized zone, capable of detecting intruders and aiming and firing munitions, but with humans making the final call. Airborne drones work in a similar way, but it wouldn’t be a huge leap to take humans out of decision-making processes. And that would “revolutionize warfare, and not for the better,” Docherty says. “No one could be held responsible for the use of weapons that are fully autonomous.”

While some scholars assert the topic is more nuanced, arguing that such weapons should be regulated, not banned, momentum is building for placing so-called lethal autonomous weapons systems essentially in the same category as mustard gas, antipersonnel land mines, or cluster munitions. A full ban was advocated in a 2015 open letter that’s now been signed by more than 17,000 people, including more than 3,000 robotics and AI researchers, followed in 2017 by one signed by technology executives. “This would be the third revolution, after gunpowder and nuclear weapons,” Docherty explains. “This could be massive in scale, and proliferate widely. And it’s not just the good guys who will get them and use them.”

 

The newest clerk to hit the courtroom doesn’t have a JD. Instead, it lives inside a computer. And it’s helping judges decide the fates of criminal defendants in some jurisdictions across the country: how much bail they’ll have to shell out, how many years they’ll serve in prison, and how long they’ll have to wait to get paroled. Many organizations believe that the risk-assessment algorithms offered by several vendors could allow for fairer, more-evidence-based sentencing by human judges. Others, however, are finding serious problems with how such tools are performing in the real world: In one story, ProPublica documented cases in which black defendants were given higher risk scores than white defendants arrested for similar offenses, even though the latter in some cases had worse criminal histories. In fact, in an examination of risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014, the nonprofit newsroom found that the formula was “particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”
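
ProPublica’s full methodology was more elaborate, but the core of its fairness check (among people who did not reoffend, how often was each group wrongly flagged as high risk?) can be illustrated in a few lines of Python. The records below are invented; the real analysis covered more than 7,000 Broward County cases.

```python
# A toy illustration of the kind of check ProPublica ran: among defendants
# who did NOT reoffend, how often were members of each group flagged as
# "high risk"? The records here are invented for illustration only.
records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("white", False, False), ("white", False, False),
    ("white", True,  False), ("white", True,  True),
]

def false_positive_rate(group):
    """Share of a group's non-reoffenders who were wrongly flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# A higher rate for one group means its non-reoffenders are more often
# wrongly labeled "future criminals": the disparity ProPublica reported.
```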

One of the problems, explains Chris Bavitz, is that the closed and proprietary data sets and methods powering these products may be based on historical data that reflects past bias in the judicial system. Bavitz, the managing director of Harvard Law School’s Cyberlaw Clinic, and his colleagues are assembling what they hope will be the definitive database of judicial risk-assessment products. “We are trying to create a one-stop-shopping resource,” Bavitz explains from his immaculate office overlooking Massachusetts Avenue. “Here are the products, here is what they purport to do, here are jurisdictions that use them, and here is the extent to which they make algorithms available for review.” While Bavitz believes there is great potential for AI to add fairness to the court system (given the history of human bias, and even evidence that judges get grumpier as afternoons wear on), his focus is on making sure that same bias doesn’t creep into the very technology designed to eradicate it. “I don’t necessarily come into this with a dystopic view that this is all bad,” he says. “Some of the worst bias you see in the justice system comes from individuals.”

Meanwhile, Margo Seltzer, a professor of computer science at Harvard’s John A. Paulson School of Engineering and Applied Sciences, and her colleagues have come up with methods that can be used to build judicial risk-assessment models that don’t merely spit out a numerical score, but also describe in plain English how the score was reached (for example, it might explain that a defendant was at greater risk of reoffending because his age fell within a certain bracket). “Ultimately, we hope our research will show that we can supplant ‘black boxes’ with algorithms that can also show the users how the answer was produced,” Seltzer says.
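
Seltzer’s actual methods come from her research on interpretable machine learning; the sketch below, with invented rules and weights, only illustrates the general idea of a score that carries its own plain-English explanation.

```python
# A toy illustration of an interpretable risk model: each rule contributes
# points AND a human-readable reason, so the output explains itself. The
# rules and weights are invented; they are not Seltzer's actual models.
def score_defendant(age, prior_arrests):
    rules = [
        (18 <= age <= 24, 2, "age falls in the 18-24 bracket"),
        (prior_arrests >= 3, 3, "three or more prior arrests"),
        (prior_arrests == 0, -1, "no prior arrests"),
    ]
    total, reasons = 0, []
    for fired, points, reason in rules:
        if fired:
            total += points
            reasons.append(f"{reason} ({points:+d})")
    return total, reasons

score, why = score_defendant(age=22, prior_arrests=4)
print("risk score:", score)
for line in why:
    print(" -", line)   # e.g., "age falls in the 18-24 bracket (+2)"
```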

Over at the MIT Media Lab, Joy Buolamwini, a PhD student, is working on a similar problem: highlighting the bias in the algorithms that power facial-analysis and -recognition technologies. After noticing that her dark-skinned face was not recognized by such a system, she started calling attention to the fact that a set of pictures used to “train” these technologies used predominantly white, male faces—just the sort of situation that could lead to biases (consider that a face match can be used as a basis for police questioning). Buolamwini is now finishing a paper assessing how accurate these technologies are, and launched the Algorithmic Justice League to provide a place for people to report bias in algorithms.

At a recent luncheon gathering organized by the Berkman Klein Center, MIT Media Lab director Joi Ito echoed what Bavitz told me, explaining that the main problem with some commercial uses of AI technologies is unaccountability and a lack of transparency. “There’s this weird moral hazard where even though you have agency, you are able to push off this responsibility to the machine,” Ito said. “The problem right now is that these algorithms are running on data sets and training systems that are closed. We see this happening in a variety of fields—I think we see it happening in the judiciary, which is a scary place for it to be happening.”

 

Back in the Center for Civic Media, Bhargava, the Gobo research scientist, recalled a day when his Twitter feed filled with people’s retweets of a photo showing sparse attendance at a speech given by President Donald Trump. The apparent message? Look—this president is unpopular. But the photo was taken before the event even started. Bhargava’s social-media feed had vomited out a misleading view of the world, and a false statement had been massively amplified.

There’s a lot of this going around these days. In that particular instance, a liberal viewpoint held sway. But a growing body of evidence shows that echo-chamber-style partisanship is worse in the other direction, with social sharing tilting toward misleading right-wing stories, especially in the months before the 2016 election. Corporate social-media algorithms are engineered to keep you engaged. Vetted for accuracy, balance, or challenges to dogma? Not so much.

So Bhargava decided to show me what more-civic-minded technology could do. Opening his Twitter account, he noticed a mix of technology and political stories cited by the people he followed. But then he fired up Gobo, activated the “mute men” feature, and watched a new reality come into view: All of the women he followed were talking about a sexist statement made on Capitol Hill by a House member from Alaska. By taking control of the algorithm that controlled his data, he saw something he otherwise would have missed. “I’m convinced that letting people control these algorithms will be both exciting and surprising to people,” he said. “We’re creating Gobo as an alternative reality, to get more people to think about the possibilities if we return to the roots of what drove innovation on the Web. When your data can move around like this, you can try innovative ideas, and think differently.”

Thinking differently is what the current trend in AI research is all about. Whether it’s Gobo, AdmitHub, or an effort to ban killer robots, Boston’s academic and business leaders are pushing ahead to give users control, democratize powerful technologies, and make sure they are used for the public good—and this can only be a positive. Even if it means guys like me will get muted now and then.