
The study “Balancing Artificial Intelligence and Human Expertise: Ideal Fact-Checking Strategies for Hard and Soft News” by Yunju Kim and Joonhwan Lee of Seoul National University compared the effectiveness of Artificial Intelligence (AI)-assisted and human fact-checking for two types of news – hard news and soft news – in both Korea and the United States.
A recent study by Moon et al. (2023) demonstrated the potential of AI for debunking fake news, and according to Sundar (2008), people often assume that news stories selected or composed by AI are free of bias. There is thus research suggesting that AI may be effective as a fact-checker and that people would trust it in that capacity.
The defining characteristics of “hard news” and “soft news” are not universally agreed upon. Typically, hard news focuses on factual reporting of events considered relevant and interesting to a wide audience, whereas soft news centers on human interest stories.
The authors note that the success of AI-driven fact-checking hinges on the type of news. This is supported by previous studies: Wu (2020), for example, found that AI scored higher than humans in objectivity and credibility for sports news but lower for financial news.
Tandoc et al. (2020) discovered that the source and message credibility of an algorithm were significantly undermined when the story was not objective – people were particularly harsh on AI when it failed to meet the expectation of objectivity, an effect that may stem from expectancy violation theory (Burgoon, 1978).
Previous research thus suggests that AI might be better suited to fact-checking hard news and humans to fact-checking soft news, owing to the disparity between cognition and emotion – soft news requires subjective, emotional reasoning. This is why the authors propose hypothesis 1: “The impact of AI’s fact-checking will be greater for hard news compared to soft news, such that the reduction in credibility for an article due to AI’s fact-checking will be more pronounced for hard news than for soft news.”
The corollary of this is hypothesis 2: “The impact of human experts’ fact-checking will be greater for soft news compared to hard news, such that the reduction in credibility for an article due to human experts’ fact-checking will be more pronounced for soft news than for hard news.”
Following this, there are two other hypotheses:
“H3: AI’s fact-checking of hard news will have an impact on the decline of credibility for an article through the central route (e.g., decreased argument quality).
H4: Human experts’ fact-checking of soft news will have an impact on the decline of credibility for an article through the peripheral route (e.g., decreased source liking).”
The authors tested these hypotheses with empirical data, employing a 2 x 2 between-subjects factorial design (news type: hard vs soft; fact-checker: AI vs human experts). Each participant was assigned to one of the four resulting groups, with roughly 150 participants per group in both the Korean and the United States samples, for a total of 1,210 participants.
Three measures were collected: news credibility, perceived argument quality, and source liking, the last gauging participants’ opinions of the journalist who had written the story.
Both H1 and H2 were supported – the decrease in credibility was more pronounced for hard news checked by AI and for soft news checked by humans. This held for both the Korean and the US groups.
For the Korean sample, the results for H3 and H4 showed that AI’s fact-checking of hard news operated through systematic (central-route) processing, decreasing perceived argument quality (H3), while human fact-checking of soft news worked through both systematic and heuristic processing, decreasing both argument quality and source liking (H4).
In contrast, the US data suggested that both AI fact-checking of hard news and human fact-checking of soft news worked through both systematic and heuristic processing, with AI’s effect on hard news most pronounced in reduced perceived argument quality.
In conclusion, the study found consistent evidence that AI’s fact-checking exerts its strongest influence on hard news and human fact-checking on soft news. The results lend credence to a dual-processing account, in which AI’s effect works through cognition-based systematic processing and the human effect through emotion-based heuristic processing, and they generalized across topics and national contexts.
The article “Balancing Artificial Intelligence and Human Expertise: Ideal Fact-Checking Strategies for Hard and Soft News” by Yunju Kim and Joonhwan Lee was published in Journalism & Mass Communication Quarterly. (Free abstract).
Picture: Untitled by Nejc Soklič.
License Unsplash.




