Could Robots One Day Have Legal Rights?
An MIT researcher is exploring the emotional connection between people and life-like inventions.
When Kate Darling got a Pleo—a robotic toy dinosaur that reacts to human behavior with lifelike movements—she quickly developed a bond that sparked her curiosity about the relationships between advanced mechanical inventions and people.
“I was showing it off to my friends, and I showed them how the Pleo got frightened when you held it upside down by its tail. And then at some point, it started bothering me when they did it. I thought it was interesting because I’m not a very maternal person,” said Darling, a researcher at MIT’s Media Lab. “Even knowing how the robot works, we sometimes have this natural response to them that I think might play out in interesting ways as technology gets more developed.”
Thinking of autonomous objects as equals may sound like something straight out of Blade Runner, but as robotics continues to advance, the ways people treat those inventions—both emotionally and legally—could change drastically.
“There’s a lot of conflicting opinions about what it means to have things that react too closely to human life. I’m looking at robots that simulate life-like qualities that we recognize,” said Darling. Currently, Darling, who is also a fellow at Harvard University’s Berkman Center for Internet and Society, is plotting her next experiment to advance her theory on the connection between humans and robots and scouting out where she can secure funding.
Once funding is secured, her project will build on spontaneous research she conducted in an uncontrolled environment during a conference in Geneva earlier this year. During that experiment, Darling and fellow researcher Hannes Gassert placed a set of participants in a room with the Pleo dinosaurs for nearly two hours. Then, quickly changing course, she asked the participants to torture and destroy the toys.
She expected some pushback from people but was surprised when participants were downright unwilling to strike the robot. “[It] freaked them out more than we thought it would,” she said. After much hesitation—and some awkward silence in the room—the participants eventually gently struck the toys. The participants’ reactions fueled Darling’s curiosity, and left her asking: “Why is it so hard for people to axe this thing over something like a simple toaster?”
In the upcoming year, Darling wants to run more workshops similar to the one in Geneva to explore that question, except this time the research would be more extensive and conducted in a controlled environment. She wants to examine whether people were hesitant to destroy the robots because of how expensive they are, and whether people feel the same reluctance toward destroying virtual objects as they do toward physical ones. “Even with the most simple robots, [through research] you could show a significant difference in peoples’ willingness to destroy it,” she said.
The research on emotional connectivity could also feed into work Darling has specialized in previously—specifically, how robotic property could be perceived by people in the future.
In 2012, she released a paper discussing whether robots would need their own set of rights—much like animals and humans—as they become more life-like. That research, titled “Extending Legal Rights to Social Robots,” was presented at a panel discussion during the “We Robot” convention. It stemmed from a course she co-taught in 2011 at Harvard Law School called “Rights for Robots.”
In her paper, which Darling admits is speculative and meant to spark discussion about the future, she explores “whether projecting emotions onto objects could lead to an extension of limited legal rights to robotic companions.” Darling wrote that “social robots” generate stronger psychological attachments and could lead to legal implications that afford more advanced inventions “robot rights.”
“When robots are able to mimic lifelike behavior, react to social gestures, and use sounds, movement, and facial expressions to signal emotions in a way that we immediately recognize, this specifically targets our involuntary biological responses, causing our perception to shift,” she wrote.
She said her upcoming experiments will be useful on their own, but at the same time, she’ll be thinking about policies that could go along with her findings.
“I think there are so many different variables that need to be explored, and very few people are doing these experiments at this time,” she said. “I think it’s pretty clear from anecdotal evidence that people do respond strongly to these objects. We still need to figure out what is influencing that exactly.”
Source URL: http://www.bostonmagazine.com/news/blog/2013/12/02/kate-darling-social-robots-study/