University of Zürich

18.04.2024 | Political Science

Less Hate Online

When wars break out in places such as Gaza or Ukraine, hate comments have a field day. They poison online debate and threaten democracy. Political scientist Karsten Donnay is investigating how social norms can be established online, too.
Text: Andres Eberhard, Translation: Michael Craig, Illustration: Benjamin Güdel
Hate comments on the internet are not just annoying - they also endanger democracy. Illustration: Benjamin Güdel

You've probably had the experience of reading the comments on an online article and coming across something highly offensive from a reader - maybe racist, anti-Semitic or otherwise disparaging of a certain group. You may shake your head. Perhaps you express your disapproval with a thumbs-down. But there's a good chance you'll simply scroll on in annoyance, hoping that someone else will delete the offensive content.

Hate comments are more than just a nuisance. They poison public debate and have the potential to destabilize a society. "They're a serious threat to democracy," says Karsten Donnay, assistant professor at the Department of Political Science at the University of Zurich. Especially when conflicts or wars break out, as in Israel or Ukraine, social media discourse becomes emotionally charged. Insults, hatred and misinformation are commonplace, and politicians fuel the fire instead of calming things down. Dynamics like this can regularly be observed around major geopolitical events such as the US presidential election.

Etiquette for the internet

Many operators of online platforms respond to hate speech by deleting comments or even user accounts. But Karsten Donnay is convinced that this kind of "deplatforming" is the wrong approach. If their content is blocked, users can simply propagate it elsewhere, for example on Donald Trump's Truth Social instead of on X. "It's not a good idea to exclude people from the conversation," says Donnay. What's more, large platforms such as Facebook aren't able to remove more than around five percent of hate comments.

But there are alternatives to deleting content. Instead of getting angry and scrolling on, you can respond to hate comments to try to make hate speakers reflect on their behavior and moderate their comments in the future. Such targeted counterspeech also helps establish social norms on the internet. After all, just as in real life, certain rules of conduct should apply online as well - rules that aren't written down anywhere but which we intuitively follow in order to live together peacefully. It would hardly occur to anyone to insult other dinner guests because of their age, origin or sexual preferences. But in online discussions, such insults are almost par for the course.

Russian troll factories

Researchers at UZH and ETH have been able to show in a number of joint studies that the strategy of counterspeech actually works. Hate speakers in comment sections and on social media post significantly less hate speech, or even delete their posts, if they're met with targeted counterspeech. But for this to work, you have to choose the right words (see box on counterspeech). The researchers speak of hate speech when degrading and offensive messages are directed against a specific group; insults directed at individuals don't count.

The effect that counterspeech can have is comparatively small. Only a few of the hundreds of hate messages were deleted after targeted counterspeech. This is probably because only a small proportion of the authors of such comments can be prompted to reflect by counterspeech. The majority of hate speakers are probably trolls expressing their anger and resentment for fun, money or political gain. Well-known examples are the Russian troll factories that managed to influence the last US presidential election.

Donnay and his colleagues also came across trolls in their investigations. It's not implausible that such verbal firestarters are behind a large part of hate speech. "We know from our studies that only a very small number of people - around 5 percent of the authors of hate speech - are responsible for well over 80 percent of hate messages," says Donnay.

Despite its fairly limited impact, the effect of targeted counterspeech shouldn't be underestimated. After all, it's not just about reforming the hate speakers themselves, says the political scientist. "You also have to consider the many people who read the comment columns. Counterspeech makes it clear to these people too what is and isn't acceptable." Such secondary effects could help to establish social norms for more positive coexistence.

The business of disinformation

In addition to counterspeech, regulation is also needed to stop the spread of hate and disinformation online. "The fact that hate and disinformation are a business has something to do with the structures," explains Donnay. These structures need to be changed. Forums and platforms that already foster a civilized culture of discussion can serve as role models. This works because debates are moderated by individual users from the community. In return, they're rewarded with titles acknowledging them as particularly experienced members, or their contributions are made more visible by algorithms.

We know from our studies that around 5 percent of the authors of hate speech are responsible for well over 80 percent of hate messages.

Karsten Donnay
Political Scientist

To rid social media of hate and disinformation, Donnay also envisions an ethical self-regulatory body akin to the press councils that already exist for traditional media. Minimum standards should also be enforced. For example, the new social media platform Bluesky doesn't feature a direct messaging function, which on competitor X is used to spread a lot of hate. And anyone who wants to open an account on Bluesky needs an invitation, which makes life harder for trolls. So far, they are hardly present on the platform.

Recognizing hate messages with AI

It's still uncertain how the increasing prevalence of AI technologies will affect hate speech on the internet. On the one hand, it could increase because tools such as ChatGPT make it easier to write and disseminate hate posts. On the other, AI also helps to identify and combat hate comments online. Karsten Donnay was able to demonstrate this in a joint research project with his UZH colleague Fabrizio Gilardi and Dominik Hangartner at ETH Zurich. Under the leadership of postdoctoral researcher Ana Kotarcic, the team developed the first algorithm based on deep learning that recognizes hate comments containing Swiss-German terms. Bot Dog, as the algorithm is called, also speaks French.

Tests have shown that the algorithm already recognizes hate almost as well as humans do - and, most notably, it does so faster and therefore more cheaply. The researchers are currently working on further improvements in recognizing and dealing with hate speech. The project has now been transferred to a new non-profit body, the Public Discourse Foundation, whose aim is to investigate and strengthen public discourse on the internet.
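To give a rough idea of how such a detector works - this is a generic illustration, not the Bot Dog code itself, which isn't shown in this article - hate-speech classifiers of this kind typically fine-tune a multilingual transformer encoder on annotated comments. The minimal Python sketch below uses the Hugging Face transformers library; the model name bert-base-multilingual-cased is a generic placeholder, and its classification head is untrained here, so the scores are meaningless until the model has been fine-tuned on labeled Swiss-German and French comments.

```python
# Minimal sketch of a transformer-based hate-speech scorer (illustrative only,
# not the UZH/ETH "Bot Dog" implementation). Assumes `transformers` and `torch`
# are installed; the model name is a generic multilingual placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder; a real system would be fine-tuned

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2: binary decision "hate" vs. "not hate"; the head is randomly
# initialized and would need fine-tuning on annotated comments to be useful.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def hate_score(comment: str) -> float:
    """Return the model's probability that `comment` is hate speech."""
    inputs = tokenizer(comment, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Comments scoring above a chosen threshold could then be flagged for human
# review or targeted counterspeech rather than being deleted automatically.
print(hate_score("Example comment to score"))
```

In practice, the interesting work lies less in this scaffolding than in collecting and annotating enough dialect-specific training data, which is what made a Swiss-German-capable classifier notable.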

Karsten Donnay is hopeful about the emergence of artificial intelligence for another reason as well: it has heightened awareness. "Now it's become clear to everyone that we finally have to address the question of what kind of internet we actually want to live with. It won't turn out well if we don't actively steer developments."


Counterspeech

How to respond to hate messages

An alternative to deleting hate messages on social media is targeted counterspeech. In recent years, as part of the Stop Hate Speech project, researchers at UZH and ETH have been looking into which words help make hate speakers change their behavior. Their finding: messages that show empathy with the group targeted by the hate are the most effective. Hate speakers who receive them are less likely to spread further derogatory messages and may even delete their posts. You can counter hate speech with sentences like the following:

• Talking like this about the person or group concerned is unnecessarily hurtful to them.
• How would you feel if people talked about you like that?
• Ever thought about what it means to have to leave your entire home behind because you have to flee?
• How would you feel if you were reduced to your appearance?

What doesn't work against hate speech, on the other hand, are humorous reactions such as memes, or warnings like "Hey! You know your friends and family are going to read this too, right?"
Activists often use responses that refer to facts ("Only 0.8% of the Swiss population are asylum seekers"), that emphasize positivity ("Love is stronger than hate"), that call out hate speech ("Hate is not an opinion"), that warn of offline consequences ("Prohibited by criminal law"), or that moralize or expose contradictions. However, whether these strategies work hasn't yet been sufficiently researched.
