‘Troll Wrastling for Beginners’: Learn How to Deal With Online Negativity
A seminar at Harvard will look at ways to combat the offensive comments that plague the Internet.
For the most part, the online mantra for dealing with seedy, nasty vitriol spewed by Internet users hiding behind anonymous Twitter handles and log-in titles is "don't feed the trolls." But what if feeding them logical responses is the best way to disarm their use of bigoted, racist, hateful commentary?
Research conducted by Susan Benesch, a faculty associate at Harvard University's Berkman Center for Internet and Society, shows that's just one way of dealing with waves of negative online speech. She thinks the positive methodology, matched with other correspondence, could one day shape conversations festering on websites, forums, and social media platforms like Twitter.
"There is absolutely no question that speech norms, what lawyer geeks like me would call the rules about what's OK to say and what's not OK to say, can change extremely dramatically in a short amount of time," said Benesch, who will unveil what she refers to as "data-driven" methods to decrease hatred online during a talk at the Berkman Center on Tuesday, called "Troll Wrastling for Beginners."
Benesch's research has spanned the globe: through her initiative, the "Dangerous Speech Project," she has examined and evaluated hate speech and its capacity to incite violence as far away as Kenya. But she has also concentrated some of her work here in the U.S. Much of that research is rooted in showing that counter-speech, and other methods like it, really works in diminishing hatred online. "[Engaging in counter-speech] might convince people to stop posting their awful, racist, homophobic stuff, even if it might not change their mindset in general. But it would be better than nothing to get them to stop posting that garbage," she said.
Referencing one example from her work, in which people took to Twitter to hurl racially charged commentary at Nina Davuluri, an Indian-American beauty pageant contestant who was crowned the winner of the 2014 Miss America competition, Benesch said that through active engagement and dialogue she saw instances of so-called "trolls" actually recoiling from their original hateful Tweets, leading to apologies to Davuluri for what was said. "What gets reported is all of the hateful speech and not the efforts to counter it. In our very early research, we have been quite surprised to find some cases where people recant and apologize," she said. "The point is not to try and quantify all of the people in the world that are acting as 'trolls,' or all the people on Twitter. My goal is to find and form hard evidence using some key data on methods used to diminish hatred, other than punishment, censorship, and ignoring it."
In the case of Davuluri fielding offensive and racist Tweets, Benesch said she examined one online user who Tweeted to the newly crowned Miss America and actually said, "I'm sorry if what I said was racist."
"In earlier responses, he insisted what he said wasn't racist. It seems as if he learned something based on interaction, and that's what I'm really getting at," said Benesch.
One of the main points of her talk at the Berkman Center will challenge the assumption that "trolls" are a universal, homogeneous species of online users who can all be lumped into one category. Benesch said that when defining and understanding the people blasting hate-fueled rhetoric from behind a keyboard, it's important to recognize that such behavior can come in many different forms. "I haven't categorized them, but that's one of the points I'll make. Instead of just saying 'trolls' as one group, let's start to understand which sorts of people are posting hate online, and whether there are other ways to influence them to do less of that," said Benesch.
She will also discuss how the Internet has shaped people's opinions, based on how users take in information from the people they may follow online, or become exposed to through more open, unregulated mediums. "If the KKK had a rally, they didn't invite you and me. They said those things to one another in situations where people were not aware of what they were saying. Now it can be expressed online where other people and members of other groups do become aware of it," she said. "Speech leaps over boundaries between human communities in a way much more than it ever did in the past. It can cause terrible pain, and offense, and harm. But this can also be a way to toss speech back over those boundaries in a way that is educational, or at least in a way that could increase understanding between people that never had access to each other before."
Benesch believes that these "small changes in platform architecture can improve online discourse norms." According to event details, she'll describe these findings and propose further experiments, especially in areas where online speech can be linked to offline violence.
"We owe it to ourselves to try to understand this," she said. "Sometimes it sounds like people have given up on fighting this in online spaces; that seems crazy to me. On the contrary, we have greater opportunities to combat it."
Source URL: http://www.bostonmagazine.com/news/blog/2014/03/24/online-trolls-harvard-talk-susan-benesch/