European Federation of Journalists

Feature: Media literacy tools in the age of AI-generated disinformation


AI-generated disinformation is a growing concern, especially in an age of general mistrust in the news. Disinformation continues to spread on social media platforms and raises various problems: the influence of malicious actors such as bots; the creation of memes, videos and other visual content intended to change opinions; the generation of false but convincing text; and more.

Young people are generally enthusiastic about Artificial Intelligence (AI): a UN report found high levels of trust in the technology and a positive attitude towards it. However, most do not deeply understand how it works. At the same time, the German newspaper “Bild” announced that it will reduce its editorial staff and increase its reliance on AI. News organisations such as Buzzfeed, the Daily Express and others are exploring ways the new technology can be used by, or instead of, journalists.

“Whether AI will have a positive or negative influence on journalism and society depends significantly on how quickly and decisively policymakers respond to these new possibilities,” said the EFJ’s German affiliate DJV in a statement.

The public availability of generative tools such as ChatGPT spearheaded the recent AI boom. It also brought to light critical concerns about the potential (mis)use of such systems by bad actors deliberately spreading disinformation through influence campaigns.

It is therefore essential that policymakers actively promote general education on AI and media literacy, and fund research and innovation in journalism-focused AI. The European AI, Data and Robotics Community should engage with journalists’ unions and associations to understand the specific needs of newsrooms in the design and development of generative AI tools.

A cooperative approach is therefore required among different institutions, including AI developers, social media platforms, government agencies, and media publishers and editors. Newsrooms have a responsibility to use AI tools in line with journalistic standards, for example by labelling content that has been generated with AI. Such measures would help users develop the critical skills needed to distinguish journalistic content from disinformation.

For example, self-regulatory bodies such as the French media council CDJM have already issued ethical guidelines (in French) on the use of AI in newsrooms. These distinguish between low-risk, moderate-risk and prohibited uses, and tailor the guidance to each category. Additionally, some generative AI services, such as ChatGPT, have policies and safeguards against generating misinformation.

A report published by OpenAI in January 2023 explains how users can employ easily accessible generative AI tools to produce and distribute false or propagandistic content aimed at influencing the opinions and behaviour of a target audience. The report is a collaboration with 30 disinformation researchers, machine learning experts and policy analysts.

The generative AI boom puts powerful, accessible AI tools in the hands of everyone, including bad actors. It is therefore essential to view media literacy as the constellation of tools and techniques that allow users to locate, interpret and evaluate a variety of media. Media literacy also opens new ways for journalists to create media that play a civic role within our democracies.

This article is part of the E-engaged project funded by the European Commission under the CERV grant. It was originally posted on the project website.