The volume of inaccurate or false information is too great for designated fact-checkers to filter, argue Gordon Pennycook and David G. Rand, both of Yale University. Could the task, then, be crowdsourced? Pennycook and Rand conducted an online experiment in which 1,010 Americans rated the trustworthiness of 60 news sources.
The results were resounding: participants rated mainstream news sources as decisively more trustworthy than “hyperpartisan” or “fake news” sources. There was a slight partisan difference: Republican-leaning participants ranked fake news and hyperpartisan sources higher than Democrat-leaning participants did, while the opposite was true for mainstream news. The difference was nevertheless small, and participants of both leanings found mainstream news more trustworthy overall.
Pennycook and Rand also wanted to find out whether the rankings become more reliable when participants know the sources. As part of the experiment, participants indicated how familiar they were with each news source. The authors then compared the rankings made by all participants to those made only by participants familiar with a given source.
The result was “counter-intuitive”, Pennycook and Rand write: excluding the ratings of people unfamiliar with the sources narrowed the trust gap between mainstream news sources and the rest. People familiar with a source are also more likely to trust it – even when that source is a fake news website, the authors note. Crowdsourced trustworthiness evaluations should therefore not rely only on those who claim knowledge of the source in question.
The unpublished manuscript, “Crowdsourcing Judgments of News Source Quality”, is available through the Social Science Research Network’s eLibrary (open access).
Picture: Untitled by Arek Socha, licence CC0 1.0.