As Surveillance Goes Mainstream, Can Boston Tame Big Brother?

Cheaper and better than ever, facial recognition technology suddenly feels like it’s lurking everywhere. But there’s still time to prove that innovation doesn’t have to mean the death of privacy.


How many times do you think your face has been captured and plugged into an AI database today? Your mug, after all, is being scanned, photographed, and recorded constantly, whether on FaceApp (hello, AI-aged selfies), on Facebook and Instagram (how many beach vacation shots did you get tagged in this summer?), or even when you’re just walking up to your neighbor’s secretly surveilled front door. The technology has made life more connected and convenient than ever before, but it should also probably be making you worried. Facial recognition databases are growing larger every day, and they can be used for everything from identifying protesters to tracking people’s movements to trying to sell you stuff. Meanwhile, the practice among police and government agencies of drawing on mugshot databases, which include people never charged with crimes, and the technology’s penchant for misidentifying dark-skinned faces have made its use a point of bitter criticism. Oh, and the technology is almost completely unregulated. “Face surveillance technology is dangerous when it works,” says Kade Crockford, director of the Massachusetts ACLU’s Technology for Liberty Program, “and dangerous when it doesn’t.” Luckily for us, there’s something we can do about it.

Some cities are already stepping up. San Francisco, Oakland, and our very own Somerville have all made history by being among the first to ban the use of facial recognition technology by local law enforcement. At the end of July, Cambridge proposed its own ban. Independently, the town of Brookline is working on legislation designed to oversee law enforcement’s use of surveillance tech and the resulting data. At the state level, Beacon Hill politicians are poised to consider an ACLU-backed bill, sponsored by Senator Cynthia Stone Creem, that would establish a statewide moratorium on government use of facial surveillance until regulations to protect civil liberties can be put into place. The bill was sent to the Judiciary Committee last month. But government use is merely one part of a much larger issue: It’s easy to bump into facial ID technology on any given day, at Logan Airport, for instance, where JetBlue passengers can scan their faces in lieu of presenting a boarding pass. And more commercial uses are surely on the way. It’s only going to get harder to put the genie back in the bottle.

That means there’s no time like the present to define just what kind of tech culture we want to build here at home. “We’re reaching a pivotal moment where we need to decide what kind of world we want to leave for our children,” says Evan Greer, deputy director of Fight for the Future, a Boston-based nonprofit dedicated to Internet privacy, “and if that’s a world where they can be tracked by a for-profit surveillance state.”

To help answer that question, Boston already has plenty of heavyweight thinkers in place at our top universities. “The depth of resources here and number of people collaborating across disciplines is very important in such a sobering time for technology,” says Judith Donath, a longtime researcher of social technology and online culture at MIT and Harvard.

Researchers at MIT, for example, have been leaders in exposing the technology’s aforementioned failure to correctly identify dark-skinned faces. Given Massachusetts’ long history of thoughtful innovation, it only makes sense that we take the lead right now in navigating the intellectual, regulatory, and philosophical implications of what we’re already creating. The world is changing too quickly to wait.