
The study “‘Journalism Will Always Need Journalists.’ The Perceived Impact of AI on Journalism Authority in Switzerland” by Laura Amigo and Colin Porlezza from Università della Svizzera italiana in Lugano, Switzerland, examined the adoption of generative AI in Swiss newsrooms, how journalists maintain their journalistic authority, and how they use AI responsibly. Like our recent article on a similar topic in the Global South, this study also interviewed journalists about their perceptions.
Journalism faces pressure from greater access to generative AI systems, as found in previous studies (Brundage et al. 2018), and Switzerland is no exception to these trends. Switzerland is a small country with a limited market and a media landscape fragmented by multilingualism. It is also characterized by strong foreign media influence, being located in the heart of Europe – an influence that extends to the editorial level.
Another trend in the Swiss media market is growing disaffection, news deprivation and increasing distrust towards journalists and the news media: trust in the media fell from 50% in 2016 to 41% in 2024. AI also poses ethical challenges, and the authority of journalism is increasingly called into question.
Journalistic authority, as per Carlson (2017), can be understood as “a social construction of the right to be listened to, a notion derived from relations among a fluctuating set of actors, which allows the news to be believed and legitimated, and thus it enables journalism to exist”. The Swiss audience is sceptical about AI, believing it might lead to “increased proliferation of misinformation in reporting” (Vogler et al. 2023).
Therefore, this study asks two questions: “What is the perception among journalists regarding AI’s impact on the authority of journalism in Switzerland?” and “What can be done to ensure a responsible use of AI in Swiss newsrooms?”
Interviews were conducted to explore the issue. First, a 100-minute focus group was held with five participants from Italian-speaking Switzerland; this was complemented by 11 semi-structured interviews averaging 40 minutes each, with 2 journalists from German-speaking regions and 9 from French-speaking regions. The participants held different positions at different outlets.
The authors justify focusing on the Swiss context because the audience’s trust in news media remains comparatively high (Udris and Eisenegger 2024), as does their perception of the quality of news production (Medienqualität Schweiz 2024), while AI is viewed critically (Vogler et al. 2023). This contrast makes the case study interesting.
Respondents highlighted the fierce competition for audience attention, with journalism losing its dominant role as a gatekeeper. One radio journalist for a French-speaking channel lamented that people constantly question the words of a journalist. Another, from public TV, stated that there is a growing number of people who believe in some sort of conspiracy: ‘They’re not telling us the truth’.
AI was seen as making the situation even more dangerous. Many described AI as a driver of disinformation that makes verification and fact-checking harder. There is a paradox: at a time when journalism’s authority is increasingly questioned, the spread of dis- and misinformation makes the job even more essential. The journalists were fairly sceptical about using AI for content production, as audiences may perceive it negatively and there is a risk of errors.
Generally, the approach to AI was cautious. The journalists acknowledged that they themselves may make mistakes, but noted that AI is not free from the risk of errors either. The human element – being able to be out in the field, to reach out and meet people, to interpret non-verbal communication, and to understand humor – was considered essential. They also found AI’s output less interesting than human-produced stories.
For responsible use of AI, the authors categorized the participants’ answers into four broad principles. First, “journalists claimed that it would be beneficial to establish common standards in their own news organizations regulating the use of AI thereby putting all collaborators on an equal basis”; only one participant worked in an organization with clear guidelines. “Second, journalists believed that transparency regarding the use of AI in newsmaking could help reinforce audiences’ trust and understanding of their work” – in practice, labeling content produced by AI. The third principle was the critical use of technology, and being open about it. Fourth, media and AI literacy and training were seen as essential.
In conclusion, the Swiss journalists saw both opportunities and challenges, remaining somewhat critical of AI. They were well aware of AI’s disruptive potential and believed that AI should be kept under control through human oversight and governance. The journalists emphasized emotional agency as something that distinguishes them from machines and as a way to strengthen authority, though the authors are uncertain about the implications of the emotionalization of journalism.
The article “‘Journalism Will Always Need Journalists.’ The Perceived Impact of AI on Journalism Authority in Switzerland” by Laura Amigo and Colin Porlezza was published in Journalism Practice (open access).
Picture: “when in Switzerland, on my way to Titlis Mount” by Tron Le. Unsplash license.




