Technische Universität Berlin

03/25/2024 | Press release

Working Together Against Disinformation on the Net

The rise of fake news, disinformation, and hate speech on the Internet, fueled by the rapid development of artificial intelligence, represents a growing threat to democracy. The hope that artificial intelligence alone could remedy this by automatically learning to identify false information has proven false. Even today, machine-learned models cannot use their training data - which by definition depicts the past - to generalize sufficiently to the ever-changing facts and concepts of the rapidly evolving media landscape. As a result, artificial intelligence alone cannot distinguish with sufficient reliability between disinformation and true statements. This creates a fundamental problem for fact-checking: firstly, there are not enough journalists to check the increasing volume of disinformation, and secondly, we do not have machine systems capable of solving the problem, a situation that is unlikely to change anytime soon.

Human-machine collaboration

The project "AI-supported assistance system for crowdsourcing-based detection of disinformation spread via digital platforms (noFake)," funded by the Federal Ministry of Education and Research, combines the skills of humans and machines to tackle these challenges. The new fact-checking community "CORRECTIV.Faktenforum," established by the non-profit media company CORRECTIV and gradually opened up to users since the beginning of the year, aims to provide an online home for citizen journalism. It offers a platform for members of the public to participate in verifying factual claims and will be actively supported by the AI tools of the noFake project partners.

Using the strengths of AI

Innovative machine learning methods for human-machine collaboration are being developed for this purpose under the leadership of TU Berlin's Professor Dr. Dorothea Kolossa. Her team of researchers is focusing on multi-modal content, including the recognition of artificially generated images and texts and the search for existing fact checks of new reports. Among other things, they have developed a system that can recognize with 99.5% accuracy whether images have been computer-generated using current diffusion models, such as Stable Diffusion and Midjourney, or are authentic, and that can also reliably distinguish between different image generation models. These tools are now gradually being incorporated into CORRECTIV.Faktenforum and will be made available to citizen journalists, together with explanations of the basis for each decision. This will enable them to learn how to check for possible misinformation using AI alongside their own research skills. For Dorothea Kolossa, this is something we urgently need, as current developments in artificial intelligence, especially generative models such as ChatGPT for text and Stable Diffusion for images, threaten us with a flood of false information. The noFake project, in response, aims to use the strengths of AI to redress the imbalance between the rapid, easy generation of disinformation and the hard work involved in providing reliable information.

Strengthening the media skills of the public

"For me, our idea of using AI to improve people's ability to recognize misinformation is particularly important. Unlike many current approaches, which seek to replace human skills with automated tools, our aim is to promote the media literacy of members of the public and citizen journalists. I see using AI as a tool to gain knowledge as the right and responsible way forward into a future with ever more powerful machine learning tools, not only in the area of citizen journalism, but also in many other fields such as medicine," says Professor Kolossa, whose expertise in the field of natural language processing has won her several awards in recent years. For the noFake project, she is working together with Professor Dr. Tatjana Scheffler and her team of forensic linguists at the Ruhr University Bochum, as well as an interdisciplinary consortium of journalists, fact-checkers, and software developers from CORRECTIV gGmbH. The project team is completed by a group of media and Internet law experts from TU Dortmund led by Professor Dr. Tobias Gostomzyk.

Strengthening the culture of debate in society

The noFake team is also developing teaching materials and learning tools that enable the public to work collaboratively with journalists, with other non-experts, and with the AI-supported assistance tools that have been developed within the project. This will familiarize the public with the mechanisms of journalistic work and thus strengthen media literacy and the culture of debate within society. "Giving responsible and engaged members of the public the opportunity to participate in fact-checking and learn how to do this correctly is, in my opinion, a great recipe for preserving the future of democracy," says Veronika Solopova, research associate in the noFake project at TU Berlin. "It's time we all understood that we are responsible for the quality of our information ecosphere. Simply accepting that 'alternative facts' and 'half-truths' are increasingly shaping the media and political rhetoric is a sure way to lose everything we hold dear."

Further information

Contact

Prof. Dr. Dorothea Kolossa

[email protected]

+49 30 314-77019