ARTICLE: Automated news recommendation can be as good as human editors


Is automated news recommendation a threat to content diversity, leading to audiences getting trapped inside “filter bubbles”? Judith Möller, Damian Trilling and Natali Helberger, all of the University of Amsterdam, together with Bram van Es of the eScience Center, investigated the question through a statistical simulation.

The authors used a sample of almost 22 000 news articles published by the Dutch newspaper de Volkskrant. They applied five different automated news recommendation methods to 1 000 of these articles and analysed the recommendations generated for each of the thousand “origin articles”. The automated recommendations were also compared to the recommendations made by de Volkskrant‘s editors and to purely random recommendations.

The team measured the recommendations’ diversity on four dimensions: topic distance, category and tag distance, political relevance (either high or low), and tone distance. On these measures, randomization, as expected, produced more diverse recommendations than most other methods. Interestingly, the editors’ picks were not particularly diverse, and were quite similar to those of most automated recommendation methods.
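To give a concrete sense of what a distance-based diversity measure can look like, here is a minimal sketch: it treats each article as a vector of topic proportions and scores a recommendation set by its mean cosine distance from the origin article. The helper names and the toy numbers below are illustrative assumptions, not the authors’ actual pipeline.

```python
import numpy as np

def topic_distance(origin_topics, rec_topics):
    """Cosine distance between two topic-proportion vectors.

    A value near 0 means the recommendation covers roughly the same
    topics as the origin article; a value near 1 means it is topically
    very different.
    """
    cos_sim = np.dot(origin_topics, rec_topics) / (
        np.linalg.norm(origin_topics) * np.linalg.norm(rec_topics)
    )
    return 1.0 - cos_sim

def recommendation_diversity(origin_topics, recommended_topics):
    """Mean topic distance of a recommendation set from its origin article."""
    return float(np.mean([topic_distance(origin_topics, r)
                          for r in recommended_topics]))

# Toy example: topic proportions for one origin article and three recommendations.
origin = np.array([0.7, 0.2, 0.1])          # mostly topic A
recs = [np.array([0.6, 0.3, 0.1]),          # similar mix, low distance
        np.array([0.1, 0.1, 0.8]),          # different topic, high distance
        np.array([0.3, 0.4, 0.3])]
print(recommendation_diversity(origin, recs))
```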

One recommendation system stood out as superior: the user-collaborative algorithm produced the most diverse recommendations, outperforming the human editors and sometimes even random recommendation. This collaborative-filtering approach combines data on an individual reader’s preferences with the reading behaviour of the audience as a whole.
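As a rough illustration of the idea behind user-based collaborative filtering, the sketch below builds a small reader-by-article matrix of implicit feedback and scores unread articles for one reader by how often similar readers have read them. The matrix, the `user_based_recommendations` helper and the cosine-similarity weighting are assumptions made for this example; the paper does not publish its implementation.

```python
import numpy as np

def user_based_recommendations(reads, target_user, n_recs=3):
    """Minimal user-based collaborative filtering.

    `reads` is a reader x article matrix of implicit feedback
    (1 = the reader read the article, 0 = did not). Articles the
    target reader has not read are scored by how often similar
    readers read them.
    """
    target = reads[target_user]
    # Cosine similarity between the target reader and every other reader.
    norms = np.linalg.norm(reads, axis=1) * np.linalg.norm(target) + 1e-12
    sims = reads @ target / norms
    sims[target_user] = 0.0                  # ignore self-similarity

    # Similarity-weighted scores; mask out articles already read.
    scores = sims @ reads
    scores[target == 1] = -np.inf
    return np.argsort(scores)[::-1][:n_recs]

# Toy read matrix: 4 readers x 6 articles.
reads = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 0, 0],   # the target reader
])
print(user_based_recommendations(reads, target_user=3))
```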

While automated recommendation systems are able to perform, diversity-wise, as well as or better than human editors, they are not perfect, the authors note. The recommendations will follow the general distribution of available stories, which makes the less common article types more difficult to discover. If the recommendation algorithm learns from audience behaviour, it will over time push the more marginal content further into the fringes: what is rarely recommended gets rarely read, and thus gets recommended ever more rarely. Recommendation algorithms should therefore be designed to protect the “long tail”, the authors conclude.
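This feedback loop can be illustrated with a toy simulation (not the study’s own model): a recommender that only surfaces the currently most-read articles channels all new reads to what was already popular, so a niche article’s share of attention keeps shrinking. The catalogue size, the `TOP_K` cutoff and the read counts below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy catalogue: initial read counts, with one niche article (index 4).
reads = np.array([50.0, 40.0, 30.0, 20.0, 5.0])
TOP_K = 3          # the recommender only surfaces the 3 most-read articles

def niche_share(counts):
    return counts[4] / counts.sum()

print(f"before: niche article has {niche_share(reads):.1%} of all reads")

for _ in range(50):
    # Recommend only the currently most-read articles...
    top = np.argsort(reads)[-TOP_K:]
    probs = np.zeros_like(reads)
    probs[top] = reads[top] / reads[top].sum()
    # ...so new reads accrue only to what was already popular.
    reads += rng.multinomial(100, probs)

print(f"after:  niche article has {niche_share(reads):.1%} of all reads")
```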

The article “Do not blame it on the algorithm” was published in the journal Information, Communication & Society. It is freely available online on the publisher’s website (open access).

Picture: Untitled by Free-Photos, licence CC0 1.0
