‘Troll Wrastling for Beginners’: Learn How to Deal With Online Negativity

A seminar at Harvard will look at ways to combat the offensive comments that plague the Internet.

Photo by Olga Khvan

For the most part, the online mantra for dealing with the nasty vitriol spewed by Internet users hiding behind anonymous Twitter handles and screen names is “don’t feed the trolls.” But what if feeding them reasoned responses is the best way to disarm their bigoted, racist, hateful commentary?

Research conducted by Susan Benesch, a faculty associate at Harvard University’s Berkman Center for Internet & Society, shows that’s just one way of dealing with waves of negative online speech. She thinks counter-speech, paired with other forms of engagement, could one day reshape the conversations festering on websites, forums, and social media platforms like Twitter.

“There is absolutely no question that what’s OK to say and what’s not OK to say, what lawyer geeks like me would call speech norms, can change extremely dramatically in a short amount of time,” said Benesch, who will unveil what she refers to as “data-driven” methods for decreasing hatred online during a talk at the Berkman Center on Tuesday titled “Troll Wrastling for Beginners.”

While Benesch’s research has spanned the globe, reaching as far as Kenya, where she examined hate speech and its capacity to incite violence through her initiative the “Dangerous Speech Project,” she has also concentrated some of her work here in the U.S. Much of that research is rooted in showing that counter-speech and methods like it really work in diminishing hatred online. “[Engaging in counter-speech] might convince people to stop posting their awful, racist, homophobic stuff, even if it might not change their mindset in general. But it would be better than nothing to get them to stop posting that garbage,” she said.

Referencing one example from her work, in which people took to Twitter to hurl racially charged commentary at Nina Davuluri, the Indian-American contestant who was crowned the winner of the 2014 Miss America competition, Benesch said that through active engagement and dialogue she saw instances of so-called “trolls” actually recoiling from their original hateful Tweets and apologizing to Davuluri for what was said. “What gets reported is all of the hateful speech and not the efforts to counter it. In our very early research, we have been quite surprised to find some cases where people recant and apologize,” she said. “The point is not to try and quantify all of the people in the world that are acting as ‘trolls,’ or all the people on Twitter—my goal is to find and form hard evidence, using some key data, on methods used to diminish hatred other than punishment, censorship, and ignoring it.”

In the case of Davuluri fielding offensive and racist Tweets, Benesch said she examined one online user who Tweeted to the newly crowned Miss America and actually said, “I’m sorry if what I said was racist.”

“In earlier responses, he insisted what he said wasn’t racist. It seems as if he learned something based on interaction, and that’s what I’m really getting at,” said Benesch.

One of the main points of her talk at the Berkman Center will challenge the assumption that “trolls” are a universal, homogeneous species of online users who can all be lumped into one category. Benesch said that when defining and understanding the people blasting hate-fueled rhetoric from behind a keyboard, it’s important to recognize that it comes in many different forms. “I haven’t categorized them, but that’s one of the points I’ll make. Instead of just saying ‘trolls’ as one group, let’s start to understand which sorts of people are posting hate online, and whether there are other ways to influence them to do less of that,” said Benesch.

She will also discuss how the Internet has shaped people’s opinions, based on how users take in information from the people they follow online or from voices they are exposed to through more open, unregulated media. “If the KKK had a rally, they didn’t invite you and me. They said those things to one another in situations where people were not aware of what they were saying. Now it can be expressed online where other people and members of other groups do become aware of it,” she said. “Speech leaps over boundaries between human communities in a way much more than it ever did in the past. It can cause terrible pain, and offense, and harm. But this can also be a way to toss speech back over those boundaries in a way that is educational, or at least in a way that could increase understanding between people that never had access to each other before.”

Benesch also believes that “small changes in platform architecture can improve online discourse norms.” According to event details, she’ll describe these findings and propose further experiments, especially in areas where online speech can be linked to offline violence.

“We owe it to ourselves to try to understand this,” she said. “Sometimes it sounds like people have given up on fighting this in online spaces—that seems crazy to me. On the contrary, we have greater opportunities to combat it.”

The talk will be webcast live beginning at 12:30 p.m.