An international team of researchers has been working on software for the automated verification of information circulating on Twitter. The team observed 15 Swiss journalists to find out how they use and verify user generated content (UGC) in their work. The team's software was later tested and evaluated by two journalists.
The study’s main finding was that situations of UGC use vary widely: a single, rigid set of functions cannot accommodate all needs. For example, the initial 10-minute time lag in the analysis of live Twitter data was, according to the journalists, too long for breaking news situations. For this kind of use, the software should be streamlined for speed and ease of use. On the other hand, the software’s more elaborate functionality is useful for in-depth investigations.
A paper detailing the ethnographic study and subsequent software trials was presented at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (ACM CHI ’17). Entitled “Supporting the Use of User Generated Content in Journalistic Practice”, the paper was authored by:
- Peter Tolmie, of University of Nottingham and University of Warwick
- Rob Procter, of University of Warwick
- David William Randall, of University of Siegen
- Mark Rouncefield, of Lancaster University
- Christian Burger, of SwissInfo.ch
- Geraldine Wong Sak Hoi, of SwissInfo.ch
- Arkaitz Zubiaga, of University of Warwick
- Maria Liakata, of University of Warwick
The paper is available open access from the ACM Digital Library. The research project’s website can be found here.
Picture: Untitled by jarmoluk, licence CC0 1.0.