Understanding – and Countering – Hate Speech Is the Fight of Our Age

Eight years ago, a Norwegian white supremacist named Anders Behring Breivik murdered 77 people, many of them children, in and around Oslo. The details of the carnage he created are available on Wikipedia and won’t be repeated here. His attack shocked the conscience of the world, given its brutality and the targeted nature of the violence against children. But what also shocked the world was why he decided to go on a mass-murdering rampage in the first place: race.

Breivik was pulling from a growing tradition in the European far right: a fear of “Islamization” driven by immigration policies. Especially in the years since, creeping white supremacism in Europe, the U.S., Australia, and New Zealand has focused less on domestic minorities and more on immigrants. When many white supremacist figures make political statements, especially about race or class, behind those statements you will ultimately find a loathing of dark-skinned “outsiders” they simply cannot accept as a part of “their” society. This is how even the president can tell non-white U.S. congresswomen, including those who were born here, to “go back where you came from.” It is a statement built from the belief that whiteness is Americanness, and therefore non-whiteness (even if you were born here) is non-American.

This sort of rhetoric is a mortal threat to the idea of a multicultural, law-based democracy, one grounded in civic institutions instead of race.

When Breivik murdered those dozens of children, however, I downplayed the threat such belief systems pose. In an article for The Atlantic, I argued that focusing on Breivik’s 1,500-page manifesto was hypocritical, a conclusion I reached through a series of strange political triangulations that feel unfamiliar to me now (in short: I was working at a think tank that demanded studious non-partisanship in our public writing, even when it didn’t make any sense to do so, and I regret letting myself be captured by those pressures).

I was wrong to do this. While I do think we still don’t understand the precise mechanism by which someone shifts from believing abhorrent ideas to acting on them, there is copious research demonstrating that abhorrent beliefs do lead to increases in ethnic violence. If a belief system encourages violence and dehumanization, then it has to be considered alongside the violent actors who say it inspires them. Both on first principles and on the merits of the case I made, I was wrong to say Breivik’s belief system was immaterial to his decision to commit mass murder.

I repudiate my 2011 article.

Instead, I think we need to take a few moments to understand how, as the debate over hate speech is manipulated in profoundly bad faith by right-wing public intellectuals, the proliferation of hate speech is having a measurably bad effect on us as a society. And, realizing that, I’ll also discuss why placing faith in internet companies to fix the problem absolves everyone else of the need to act.

Why Now

The recent mass shooting in El Paso, TX was something of a watershed. It wasn’t the first time a white man inspired by a bigoted belief system and egged on by a violence-thirsty media ecosystem was driven to commit mass violence, but the scale on which it happened — and the intention with which he targeted the Latino community — was horrifying. In short, a man quite literally driven mad by the racist hate speech directed toward Latino Americans in right-wing media drove hours to a Walmart with the express purpose of murdering people of Latino descent. He even posted a manifesto online saying as much.

I recently explored why and how racist hate speech from the White House can have a profound effect on our society by altering the terms, and the red lines, through which people tend to view issues. In other words, the White House can alter social attitudes for better or worse, and this White House is doing exactly that, for the worse.

While hate speech from non-official sources is usually discounted as a causal factor in committing violence (as I argued for Breivik), there is, in fact, a lot of research suggesting it can be a powerful factor in operationalizing the attitudes being driven by leaders.

Mass media plays a powerful and obvious role here. In a 2010 experiment, political communications professor Nathan Kalmoe demonstrated a strong link between “mild violent metaphors,” like the phrase “fighting for our future,” and increased support for real-world political violence. While this effect was limited to people with aggressive personality traits — those not inclined to violence were unaffected by the messages — there nevertheless is a strong linkage between violent speech and violent acts. One leads to the other in a consistent way. Further, there is strong evidence that President Trump’s bigoted statements about Mexicans cause people to say more offensive things not just about Mexicans, but about other ethnic groups as well.

Social scientists who study this have come to conclusions everyone should find troubling. Research shows that when entire groups of people are demonized by societal elites, whether in the media or in government, the process of “moral disengagement” makes it easier to commit acts of violence without damaging one’s own self-image. In other words, because the “other” is bad, doing bad things to them is good. It is how people can get whipped into a violent frenzy by malicious speech: they believe they aren’t committing violence against a person who deserves protection, but rather against a person who deserves violence.

During the Rwandan genocide in 1994, a media outlet called Radio Télévision Libre des Mille Collines, or RTLM, spewed out a steady stream of dehumanizing rhetoric directed at the Tutsi ethnic group. The onslaught of propaganda, which included encouraging violence against Tutsis, was so pervasive, and so effective at spinning people up to want to exterminate their neighbors, that the station was nicknamed “Radio Machete.” RTLM did not spring up overnight. It took years of continued dehumanization to prime Rwanda for mass violence.

At the sentencing of RTLM founder Ferdinand Nahimana to life in prison at the UN’s International Criminal Tribunal for Rwanda, presiding judge Navanethem Pillay said, “You were fully aware of the power of words, and you used the radio – the medium of communication with the widest public reach – to disseminate hatred and violence…. Without a firearm, machete or any physical weapon, you caused the death of thousands of innocent civilians.” It was a clear connection between the spread of hate speech and the violent actions undertaken by those who heard it.

The fact is that dehumanizing language, repeated to a large audience, works. Referring to human beings as “vermin” and describing a community of people as an “invasion” are well-trod paths toward mass murder. And in the case of the El Paso mass murderer, the language he used in his manifesto is disturbingly similar to the dehumanizing rhetoric conservative media have deployed to talk about migrants of Latino origin.

And as more violence is committed, the problem of “contagion” rears its head. The American Medical Association suggests treating violence as “an epidemic health problem,” because it “exhibits the population and individual characteristics of contagious epidemics—clustering, geo-temporal spreading, and person-to-person transmission.” Since the El Paso shooting alone, dozens more people have been arrested for attempting or plotting mass violence in America. The disease is spreading.

However, while the ways in which mass media rhetoric can contribute to violence are generally known, there is less research on how social media is affecting the operationalization of hate speech. Some early research suggests a correlation between hate speech online and real-world violence, but more research is needed to affirmatively establish a causal link.

People seem to have a vague sense that something is “wrong,” which led UN Secretary-General António Guterres to condemn, earlier this year, the role social media has played in spreading hate speech. Thus, there is an opportunity to try to understand how and why the “churn” of dehumanizing, racist, and otherwise bigoted speech proliferating online has come to be such a pressing concern.

Understanding Hate Speech on Social Media

Social media companies are in a difficult position. Their dominance of the online space gives them the power to shape the discourse by enabling or disabling certain types of discussion — power normally wielded by governments. However, none of them were designed as products to be a replacement for the “public square” in civic life. (Most of them are really platforms for researching consumer behavior to better sell advertisements.) As a result, they are profit-seeking companies where a segment of their customer base is demanding they provide a platform for speech as a public good. It is an inherent contradiction: as companies, they aren’t bound by the same legal requirements to allow all (or most) types of speech as a government would be, yet that is still something their users are demanding.

The pressure to ban certain types of content is growing, and while it is unrealistic to expect social media companies to eliminate all problematic speech, it is worth noting where and how efforts to limit its reach and impact are working. In some cases, like Twitter’s recent decision to ban state-run media, it is possible to target whole classes of potentially malicious accounts at once; on an individual basis, however, it is fiendishly difficult, and there is no clear path forward.

Every mainstream social media service has a policy that forbids hate speech. But how that policy is codified and enforced is constantly up for negotiation — in other words, hate speech on social media is a contested space without clear guardrails or end states. As a result, it is almost impossible for these companies to make a decision that a broad consensus of users will regard as fair and appropriate.

No social media platform is intentionally hosting hate speech, but the challenges in addressing it in a systemic way have led to frustration and accusations of complicity in violence. These accusations have some merit, as when Facebook famously admitted it had a role to play in an attempted genocide in Myanmar because it did not remove government-disseminated hate speech from its platform. Part of the reason lies in how hate speech behaves as a pathology.

At a fundamental level, hate speech is just as contagious as social violence. In a 2017 study published by the National Academy of Sciences, a team of researchers found that hateful behavior, including hate speech, directed by a majority ethnic group at a minority ethnic group is more contagious than hateful behavior directed at members of one’s own ethnicity. Every time an epithet was spoken, or a small physical act to demean the minority (like a shove) was undertaken, it became much more likely to happen again — far more likely than similar behavior between co-ethnics. This, the researchers suggest, “may help to explain why ethnic hostilities can spread quickly (even in societies with few visible signs of interethnic hatred) and why many countries have adopted hate crime laws.” In other words, hate speech begets more hate speech, which eventually begets violence.

While this sounds bad, the upside to hate speech existing on social media platforms is that it can be monitored and countered. Just recently, a man who posted to Facebook about his love of Hitler, Nazis, and Donald Trump (sigh) was arrested when the FBI determined that his threats to murder large numbers of Latinos were credible. He didn’t develop a targeted hatred of Latinos in a vacuum — he consumed media that encouraged a targeted hatred of Latinos. Even so, if he hadn’t been posting those messages to Facebook, law enforcement might not have been able to catch him before he committed violence.

Counter speech is easier to conduct on social media as well, and indeed, that is Facebook’s explicit strategy for handling hate speech. As a crowd-sourced, seemingly spontaneous response to hatred, it is appealing on a number of fronts: it doesn’t place the onus of responsibility (or the challenge of walking an extremely narrow path to satisfy most users) on the company, it is the sort of freewheeling communal response many idealists prefer, and it doesn’t have to be directed or constrained. But countering speech is not a straightforward process. As Demos, a think tank that conducted counter speech research on behalf of Facebook, discovered, there is a nuance to an effective counter speech strategy that a spontaneous crowd cannot supply, especially when the sources of hate speech are engaged in a targeted and intentional campaign to spread their message. Because hate speech is often coordinated (for example, by a far right party to encourage discrimination against minorities), the counter to that hate speech also needs to be coordinated and targeted in order to be effective.

Social media companies don’t have to mount their own proactive responses to hate speech on their platforms, but this line of research does suggest that those who wish to counter hate speech need to be better organized. The asymmetry in responding to hate speech on social platforms is a challenge that can’t fall on the social media companies alone. Simply put, the malicious actors are organized, while those seeking to stop them are not.

Lastly, the definitional problem of hate speech is rarely addressed by critics of social media companies. Where governments have defined the boundaries of hate speech, social media companies are pretty good at complying with those boundaries, especially when they’re given the force of law. However, it isn’t Google’s or Facebook’s or Twitter’s job to define exactly what hate speech is, especially in America, where being a bigoted jerk to people is protected by the constitution. That being said, the companies do make an effort to exclude hate speech and have official policies saying as much (YouTube is an exception: it seems to have carved out an allowance for hate speech so long as some broader political argument is made alongside it). The gap between the companies’ official policies and the actions their leadership actually takes is a major factor in why many people place the onus of enforcement on the companies themselves.

This is a hard problem. You cannot exist as a mainstream internet company in 2019 and have no stance on hate speech, but actually defining the exact boundaries of that speech and then enforcing them is an incredibly difficult problem, one without a straightforward technological solution.

What Happens Now?

So to come back around to Breivik: the networks in which people are being radicalized by the far right are maturing into a global machine. Their adherents are operationalizing their speech in a way they simply weren’t eight years ago when Breivik committed his massacre.

I mentioned above the role Facebook played in the ethnic violence in Myanmar. WhatsApp, a Facebook subsidiary, has also been implicated in mass hysteria and violence in India. There is mounting evidence that when hate speech is left unchallenged on the internet, it eventually leads to violence. In 2011, it was unclear that hate speech online really mattered all that much — did it really matter that Breivik was subsumed in a community that engaged in bigoted speech against Muslims and non-white people? If so, then why did he murder Norwegian children instead of the targets of his hate speech? But now, in 2019, there is much more research establishing a link between engagement with hate speech and the decision to commit violence. It is a strong correlation, whether you’re in the U.S., or India, or Myanmar, or Nigeria, or Turkey, or Europe.

Unfortunately, there isn’t an easy solution to this challenge — it is a contested space. Protected speech does not have clear boundaries, and the definition of hate speech varies from country to country even as that language crosses borders. The obligations of governments, companies, and individual users to respond to it are also unclear. No one wants to see hate speech define the experience of being on the internet, but the most effective and rights-protective way of achieving that remains unclear.

One place we can begin to rethink how to address hate speech is by acknowledging this complexity. “It’s complicated” is not the most popular response to a problem on the internet, but ignoring how difficult this challenge is will lead to bad solutions that might make things worse. If this were easy, it would have been solved by now.

I floated an idea in my piece about how and why the President’s rhetoric on immigration has real-world consequences: the responsibility to speak up. Counter speech that identifies and responds to hate speech can be effective in blunting its impact. Of course, that counter speech will itself always be contested (consider Fox News host Tucker Carlson asserting that white supremacy is a hoax the week after a white supremacist murdered 22 people for being non-white). But that is the nature of contested spaces — they are contested. It means that unless there is a response and pushback against hate speech, that hate speech will dominate the platform. We have a responsibility to speak up against it.

One thing in American society that can help on this front is that no one wants to be called a racist. People in fact hate it, even those who are plainly racists and freely use hate speech. There is absolutely a danger in using the term too loosely, but it remains a powerful one to deploy. A direct confrontation with someone over their hate speech might not be constructive if it opens by calling them racist right out of the gate. But there is a constructive (though emotionally laborious) path toward addressing bigoted speech through dialogue that steadfastly challenges racist beliefs.

This is an approach that takes time and commitment. One of the reasons hate speech has proliferated online is that the majority of people who do not like it have not confronted it. While it’s easy to complain about how a platform is a “cesspool,” there hasn’t been an established norm that bigotry is unacceptable. The silence of assuming it isn’t your problem is a form of complicity, but it’s one that can be reversed by simply saying “no, this is not acceptable.”

Putting the onus on us, the users, to police the space we use online is unsatisfying. Yet the alternative is even worse. Companies are by nature conservative — they are slow to act, and tend to do so in a minimal way when revenue is on the line. It is unrealistic to expect them to solve this problem on their own — we have to take responsibility for the sort of language we will tolerate, whether online or in more traditional media. It won’t be easy, and there will be extremely disingenuous pushback against it. But unless we collectively decide to act, the racists will win.