
The study “ChatGPT, Generative AI, and an Epistemic Opportunity for Journalistic Authority” by Gregory P. Perreault of the University of South Florida and Seth C. Lewis and Maxwell Ely of the University of Oregon used journalistic authority as a conceptual framework to examine how journalists respond to the challenges posed by AI in the newsroom, reconceptualizing epistemic crises as epistemic opportunities.
Steven Brill of NewsGuard, no novice when it comes to journalistic falsehoods, argues that ChatGPT presents a unique threat with regard to false narratives: its output comes across as legitimate or even authoritative when the tool is weaponized in the wrong hands to produce disinformation.
However, journalists in the United States, as evidenced by a discourse analysis of 21 interviews with journalists and 94 units of metajournalistic discourse, feel that artificial intelligence can still be leveraged to improve their work rather than to undermine or replace it.
In addition to the aforementioned interviews, the study drew on a corpus of texts published between November 30, 2022 and March 31, 2023, collected from English-language newspapers and magazines using the search term “ChatGPT” and further winnowed to those in which journalists reflected on using ChatGPT for journalism.
Journalists constructed their journalistic authority in relation to ChatGPT by positioning themselves as authorities on the technology while at the same time expressing skepticism about the validity of chatbots more broadly. They used ChatGPT as a launch point for discussing other generative AI tools and generally perceived it as an opportunity to maximize their SEO and engagement.
They were keen to emphasize their credentials regarding AI, thereby personalizing their journalistic authority. They also downplayed the authority of the social and technological threat in question, AI itself, consistent with previous research on journalistic authority by Zelizer (1990).
ChatGPT was presented as a tool that could be utilized for the “greater good,” and journalists largely evaluated it positively. For news production, this meant that journalists took pride in their work and perceived ChatGPT as a threat only to poor newswork, criticizing the chatbot’s inability to produce quality journalism and its unevocative writing.
Thus, the threat of the epistemic crisis was mitigated by the belief that only human journalists could produce the kind of “high journalism” aspired to by professionals and desired by audiences. The authors do caution that journalists may be inclined to project stability and authority externally. They also note that the results might differ in other language contexts or in countries other than the United States.
The article “ChatGPT, Generative AI, and an Epistemic Opportunity for Journalistic Authority” by Gregory P. Perreault, Seth C. Lewis, and Maxwell Ely was published in Digital Journalism. (Free abstract.)
Picture: ChatGPT by Jonathan Kemper.
License: Unsplash.
