
Brainstorming Solutions to Disinformation

Feature Story | April 29, 2024

By Sara Frueh

Students, researchers, organizations, and companies are developing innovative ways to counter disinformation on social media, either by reducing its reach or by working to help users become more discerning consumers of information.

A number of these new approaches were explored at a National Academies workshop earlier this month. Prior to the event, the workshop planning committee put out a call for solutions from academia, industry, journalism, civil society, policy, and government. More than 100 ideas and initiatives were submitted, and the committee selected 14 for presentation and discussion at the workshop.

"This is intended to be a very collaborative experiment and a generative workshop," said committee co-chair Saul Perlmutter in welcoming participants. "The goal is to get many ideas and approaches on the table - ideally ones that might be new to some."

Participants and audience members also submitted ideas, reactions, and resources for posting on a virtual whiteboard. The workshop and its products are meant to catalyze further development of solutions to disinformation. "Ideally the notes on the whiteboard and the subsequent workshop report will be useful to others who might be intrigued enough by an idea to develop it further, or suggest a new collaboration, or a new research area," Perlmutter said.

New approaches to content moderation

One panel explored new ways to counter disinformation through content moderation. For example, Nicholas Brigham Adams of Goodly Labs described Public Editor, a system slated for launch in the U.S. this fall that's designed to alert readers of popular online content to reasoning errors, cognitive biases, and rhetorical manipulations. The content is labeled by trained citizen-scientist volunteers; each annotation is the product of independent reviews by at least three individuals, whose assessments follow stepwise protocols and are monitored for bias. The system's methods, including an ontology of reasoning errors, are open to the public and researchers.

"This is targeting 'mis-reasoned' ideas - not people, not journalists, not even particular news sites," said Adams. "We're giving [readers] this modest, targeted, appropriately granular doubt about some particular ideas - instead of doing something like putting a big red X on something or simply taking it down."

John Wihbey of Northeastern University introduced his research on the possibilities and risks of using AI-powered bots called chatmods or modbots to address disinformation campaigns - a solution that he thinks many social media platforms are ultimately likely to pursue. Chatbots could be used to provide assistance, mediation, warnings, or counterspeech to users, said Wihbey. "The question is, do we want them doing it?" he said. "I think we need to get ahead of this."

He pointed to the need to develop ethical frameworks and think about how principles like beneficence, justice, and explicability would be applied in these contexts. "I'd love for others to join in, in thinking through this problem," said Wihbey. "Maybe we could make a difference in terms of making sure that companies, if they go in this direction in a big way, proceed ethically."

Nicole Cooke from the School of Information Science at the University of South Carolina offered reflections on common themes across these and other proposed approaches, noting that "perhaps most important is the idea of including humans in all of these interventions and all of these potential solutions." She emphasized the need to involve a wider range of people in public and community efforts, especially people from outside the scientific and academic communities.

Watch the Complete Discussion

Improving education

The workshop also explored approaches to educating children and adults to better assess information and become less vulnerable to disinformation. For example, Jonathan Osborne from Stanford University described a set of lesson plans he and a colleague have developed to teach media literacy and "epistemic vigilance" to middle schoolers - "giving them the tools to make what you might call good choices about what to trust and not to trust," he said. They plan to start by working with about 50 science teachers and up to 1,500 students to test the approach's efficacy.

"Media literacy is going to need to be extended to include generative AI literacy," said Matthew Groh from Northwestern University, who spoke about an initiative to help people better understand how generative AI works and what it can and cannot do. "If they can understand both the capabilities and the limitations, they're going to be a lot better at disambiguating what is likely to have been generated by generative AI and what is likely to have been authentically recorded."

Another project has started collaborating with social media platforms to "pre-bunk" disinformation - refuting disinformation in advance using games, text, and videos. "The goal is to empower citizens to discern some of the techniques of disinformation and misinformation without restricting their ability to form specific opinions on issues," said Sander van der Linden from Cambridge University.

"It's so difficult to correct misinformation once it's taken root in people's minds, that we focus almost entirely on the importance of preventing this from happening in the first place," said van der Linden.

Watch the Complete Discussion

Technological solutions

Michelle Amazeen of Boston University said that major news websites contribute to the disinformation problem by featuring "native advertising" or "sponsored content" whose appearance mimics journalistic content. Research shows that most readers are deceived and don't recognize the difference between this hidden commercial content and genuine journalistic articles, she said. Moreover, when sponsored content is shared on social media, the FTC-mandated disclosures identifying it as commercial content often disappear, and it can surface in Google search results because the search algorithm may not recognize it as paid content.

"I was shocked to find my students citing native advertising in their research papers as a credible source," said Amazeen. "They thought they were citing a Wall Street Journal article." Amazeen is part of a team at BU that is developing a protocol for identifying and systematically extracting climate-related native advertising from online news outlets to create a "native advertising observatory," which will be publicly available. The team is also testing various interventions to see if they can reduce consumer confusion.

Approaches are also being developed to help consumers verify that content comes from a trusted source and hasn't been altered with the intent to deceive - the digital equivalent of a wax seal on the back of an envelope, explained Eric Horvitz of Microsoft. A variety of methods exist or are in development, including cryptographic methods and a type of secret watermark called fingerprinting, though Horvitz cautioned that "there are no magic bullets."
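As a rough illustration of the "wax seal" idea, the sketch below uses a generic Ed25519 digital signature from the Python cryptography library. Real provenance standards, including the C2PA work described below, bind signed metadata to media in more elaborate ways, so treat this only as a minimal sign-then-verify sketch, not as any particular vendor's method.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# A publisher signs the content bytes with its private key (the "wax seal")...
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Original article text or image bytes"
signature = private_key.sign(content)

# ...and anyone holding the publisher's public key can check that the content
# has not been altered; any change to the bytes makes verification fail.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                       # True
print(is_authentic(content + b" quietly edited", signature))  # False
```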

He noted the work of the three-year-old Coalition for Content Provenance and Authenticity (C2PA), which supports an ecosystem of about 2,000 organizations spanning camera manufacturers, content producers, major tech companies, and NGOs that are working to develop and implement such solutions. "The idea is to get wide acceptance and continuing collaboration and standards in place to make this actually work."

Watch the Complete Discussion

Regulatory possibilities

A final panel discussed the pros and cons of government regulation as a way to tackle disinformation. Nathalie Smuha from the Department of International and European Law at KU Leuven offered an overview of the European Union's recently passed Digital Services Act. The regulation imposes transparency obligations on platforms, requiring them to provide more information about the content moderation decisions they make, the kind of algorithmic systems they use for it, and how these systems are trained, she explained. In addition, the legislation requires very large platforms to undertake risk assessments of their harms to society, especially harms to democracy and human rights, and to put mitigation measures in place.

Moderator Aziz Huq of the University of Chicago Law School asked panelists to consider the role of government in the U.S. context: If Congress could enact just one provision related to disinformation in the next month, what would be your top priority for regulatory change?

Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy, replied that he would support much more government funding for media literacy education, civics education in schools, and libraries. But he said that the First Amendment, with some narrowly defined exceptions, protects speech - even a great deal of, though not all, false speech - and that it would be dangerous to allow more government regulation of it. "Once you get rid of those protections, you're not going back," he said.

Josh Braun from the University of Massachusetts at Amherst, in contrast, emphasized the need for systemic solutions and pointed to the limits of media literacy education - likening it to addressing the problem of traffic fatalities simply by telling individuals to drive more safely, rather than also making safer cars or roadways. He noted the need to explore "how to fix some of the structural problems that flood the public square with bile, rather than insisting that citizens be better at swimming in it."

Smuha pointed out that there are regulatory steps that could be taken to protect people without affecting free speech - personal data protection, for example, and transparency measures. "Inspiration from other countries can help with that," she said.

The ideas and discussions will be captured in a workshop summary scheduled to be released later this year. Watch the complete discussion and all videos from the workshop.
