
JOURNALISM RESEARCH NEWS

Article: Negative Feedback Fuels Hate Speech: A Deep Learning Analysis of 25 Million News Comments

The study “Negative Feedback Fuels Hate Speech: A Deep Learning Analysis of 25 Million News Comments” by Hyo-sun Ryu and Jae Kook Lee of Sungkyunkwan University investigated the dynamics of hate speech by analyzing 25 million comments posted during the 2022 South Korean presidential election with deep learning hate speech detection software.

Hate speech is a serious and growing issue on online platforms worldwide. It may lead viewers to form negative perceptions of the targeted groups and is closely linked to violence against them.

Previous research has demonstrated that the logic of social media favors uncivil comments, as they tend to receive more recommendations than civil ones (Muddiman & Stroud 2017). Some studies contest this, but the point remains that incivility tends to drive engagement. The impact of feedback on comments, however, had not yet been investigated.

Since hate speech is a tool for promoting the in-group and denigrating out-groups, the authors expected hate speakers to be sensitive to public opinion for and against them and to value the reactions of onlookers.

In this study, then, the authors examined how feedback on comments influences subsequent commenting behavior. For this purpose, the 25 million comments mentioned above were collected and analyzed.

There were seven hypotheses. The first was that individuals who receive positive feedback will post more comments online. The second was that commenters will increase non-hateful commenting when they receive positive feedback on their non-hateful comments; the third, conversely, that they will decrease non-hateful commenting when they receive more negative feedback on such comments. For hateful commenting, the fourth and fifth hypotheses stated that commenters will increase hateful commenting in response to both positive and negative feedback.

The sixth hypothesis had two parts: that the effect of positive feedback on hateful commenting is stronger for light commenters of hate speech than for medium ones, and stronger for medium than for heavy commenters. The seventh hypothesis mirrored this for negative feedback.

The study included examples of hateful and non-hateful comments, confirming that the deep learning tool classified them effectively. The findings suggest that commenters generally take feedback seriously. For non-hateful commenting, commenters increased their subsequent commenting after positive feedback and decreased it after negative feedback.

Positive feedback on hateful comments, on the other hand, had no effect, while negative feedback increased subsequent hateful commenting, and the effect was stronger for lighter commenters. Thus the positive relationship between positive feedback and commenting existed only for non-hateful commenting – contrary to some of the hypotheses.

The authors speculate that when comments serve as a medium for attacking others, commenters are less likely to respond to positive feedback. Hateful commenters, meanwhile, are not distressed by negative feedback – they instead thrive on it. The authors consider whether negative feedback simply signals the size of the audience, but if so, upvotes should also increase subsequent hateful commenting, which they did not.

Again, light commenters were more susceptible to the effect of negative feedback increasing hateful commenting, while heavy commenters were insensitive to others’ feedback – consistent with Brady et al.’s (2021) finding that the politically extreme are less influenced by public sentiment.

The authors still see value in negative feedback on hateful comments, even though it did not reduce them and thus seemingly calls into question the effectiveness of voluntary feedback systems. For one thing, the feedback is read not only by the commenter but also by the audience following the interaction. It may thus function as a marker of societal rejection of hate speech.

Some limitations of the study were acknowledged. One is that approximately 3.2% of the sampled comments were deemed inappropriate and hidden by the administrator of the platform, Naver News. These likely contained hate speech, but users were unable to interact with them – meaning hate speech may be underrepresented in the analysis. Other platforms may also have different dynamics.

The article “Negative Feedback Fuels Hate Speech: A Deep Learning Analysis of 25 Million News Comments” by Hyo-sun Ryu and Jae Kook Lee was published in Journalism & Mass Communication Quarterly. (Free abstract).

Picture: Untitled by Towfiqu barbhuiya.

License Unsplash.