European Federation of Journalists

How to respond to the challenges of artificial intelligence in the newsroom?


The European Federation of Journalists (EFJ) calls on public authorities not to regulate the use of artificial intelligence in newsrooms, but to promote its ethical use. The use of algorithms should preserve media pluralism and freedom of expression, especially for minority groups. The introduction of artificial intelligence in the media also requires investment in training for journalists.

On 11 and 12 May, the online conference “Artificial intelligence and the future of journalism: will artificial intelligence take hold of the fourth estate?” took place, joined by more than 300 people around the world. Among the speakers was the president of the EFJ, Mogens Blicher Bjerregård, who, together with other high-level political actors, discussed policies and the AI-related opportunities and challenges for the future of journalism.

The online event, organised in the context of the Portuguese Presidency of the Council of the European Union (which runs until 30 June 2021), hosted five main sessions. In the last panel, on public policy challenges for the future of journalism, Mogens Blicher Bjerregård spoke alongside José Magalhães, Deputy and Member of the Committee on Culture and Media, Portuguese Parliament; Nuno Artur Silva, Secretary of State for Cinema, Audiovisual and Media; and Věra Jourová, EC Vice-President for Values and Transparency. The panel was moderated by Christophe Leclercq, Chairman of Europe’s MediaLab.

Should AI be considered a threat or an opportunity for the media sector? Whatever the answer, AI is probably here to stay. Mogens Blicher Bjerregård shed light on what he sees as the most pressing issues. First, implementing AI in the newsroom risks further widening the gap between big and small media. While big media companies may have the resources to invest in new technologies, small and local media could be left behind, which is why they need particular support. Further challenges concern ethics and the bias behind algorithms, as well as the need for journalist training in data literacy, from which freelance journalists are often excluded. Skills and education must be made available to more journalists.

“The tech giants are born digital and will always be in front of new technology. But they won’t care about ethics and the role of journalism and neither will their algorithms. Therefore, it is important that media and journalists will be skilled to make their own algorithms taking ethics and the public good of journalism into account.”

The discussion also revolved around the question of how media can be supported without interference from governments. Nuno Artur Silva suggested that media can be supported by subsidising quality human journalism, increasing the training and skills of journalists, supporting change through management, and investing in technologies.

But it is vital that the media’s independence is ensured, and for this, according to Bjerregård, self-regulation is key. His further recommendations for countering potential AI-driven job cuts and adapting to a changing environment include professional journalistic practices, through which citizens receive a wide variety of content delivered in a transparent manner, and improved media and digital literacy.

“We need algorithmic ethics and not algorithmic regulation. Because in the second case, we would lose journalistic jobs. Support of media equals support for democracy,” Bjerregård emphasised. 

Věra Jourová asserted that Europe is ready to support innovation in the media, improve media workers’ skills and make financial support as practical and useful as possible. This should be understood in the context that many media workers are not trained to apply for EU funding, so a decentralised funding mechanism could be helpful. While censorship – whether by states, platforms or AI – cannot be the answer to the growing issue of disinformation, Jourová suggested that fact-checking and possibly sanctions can help to tackle the problem of harmful content.

José Magalhães further stressed that although the state should not become an entity that censors social networks, it is still extremely important to apply the rule of law to the media. Magalhães also addressed the fear that technology and AI could replace humans in the workplace, but argued that these technologies cannot replace journalists or take autonomous decisions, and exist only to assist humans with tasks such as translation, research, or synthesising long texts.

The high-level conference ended with a wrap-up from Gustavo Cardoso, Director of OberCom in Portugal. He emphasised that because AI is built by humans, all IT tools and algorithms are infused with human values and are therefore biased by default. Journalists have a responsibility to embed journalistic values in AI. Furthermore, when it comes to the future of AI and journalism, human training will be central to AI’s success, and ethical questions, such as identifying who produced an AI system and who owns the content it produces, remain grey areas to be discussed. Overall, we have to acknowledge that the way we communicate has changed dramatically, and AI could have an undeniable impact on our lives in the not-so-distant future:

“AI can help journalism but also the everyday life of citizens. AI will not solve all problems, but it can help make a difference.”