Arguing for a more precise definition of harmful speech, Cathy Buerger, recipient of the Human Rights Institute’s 2015-2016 Dissertation Writing Fellowship, gave a presentation for the Povolny Lecture Series in International Studies on Monday, Jan. 29 in the Warch Campus Center cinema.
The Povolny Lecture Series is an annual series organized by Lawrence University’s Government Department in which speakers from outside the school are invited to give a talk to students, faculty and members of the Appleton community. The series honors the legacy of longtime Professor Emeritus of Government and Henry M. Wriston Professor of Social Sciences Mojmir Povolny, who taught at Lawrence from 1958 to 1987. The lectures cover topics relating to international relations and current social issues.
Buerger, a research and communications associate at the Dangerous Speech Project, presented “Dangerous Speech: A Global Perspective.” She raised the very real concern of “hate speech” in our communities and of certain groups being targeted with insults or slurs. While Buerger acknowledged that most people are against “hate speech,” she argued that there is no consensus on what “hate speech” actually is. As a result, there is very little agreement on how to approach it: whether it should be criminalized or simply resolved within the community.
Buerger offered one possible step forward in the debate about “hate speech.” In her lecture, she differentiated between “hate speech” and what she called “dangerous speech.” While the two often overlap, they are not the same thing. For Buerger, “dangerous speech” is language that incites violence or leads to the persecution of certain groups of people. “Hate speech,” on the other hand, is vulgar and hurtful toward its targets but does not necessarily lead to violence.
Buerger then explained how different countries approach “hate speech” through their legal systems. She said that while many countries begin with good intentions, prohibiting “hate speech” to protect the dignity of minorities, legal measures can easily morph into tools to quash dissent. In Venezuela under President Nicolás Maduro, for example, calling the government corrupt now qualifies as “hate speech” and can lead to jail time.
Since much “hate speech” is generated online, some countries, such as Germany and Russia, have enacted new laws to push social media giants like Facebook and Twitter to exert greater control over their content. In Germany, “hate speech” must be taken down by social media administrators within 24 hours, or the company faces a steep fine. Due to the sheer volume of content on their platforms, social media companies have developed algorithms to detect and remove “hate speech” without human oversight.
Buerger expressed several concerns about this approach to “hate speech,” whether in physical or online communities. When platforms rely on algorithms, the public may not know what content is being taken down. Algorithms also cannot differentiate between “hate speech” and accounts from people who have experienced it. For example, Buerger described a case in which a woman’s post about a man calling her a racial slur was taken down, depriving her of the chance to raise awareness. Such removals can leave people unaware of how often “hate speech” actually occurs. When mistakes are made, platforms lack good appeal processes for disputing the decisions of algorithms.
Finally, Buerger cited recent research showing that when people can respond to “hate speech” online and discuss why it is not acceptable, the community as a whole can become better educated. Such responses can also make “hate speech” more visible, framing it as a very real and unfortunately common problem. Engaging with perpetrators of “hate speech,” according to Buerger, creates multiple narratives and makes it harder for people to remain ignorant.
“Differentiating between ‘hate speech’ and ‘dangerous speech’ won’t solve everything,” Buerger conceded. “However, it will allow us to have better conversations, understand how speech works and identify situations that are susceptible to violence.”