DAC Beachcroft LLP

Big Tech’s fight against disinformation: the 2022 Strengthened Code and a new Transparency Centre

Published 22 March 2023

Last summer (June 2022), a broad range of technology companies, social media platforms, advertising agencies, and journalism organisations joined together to deliver the 2022 Strengthened Code of Practice on Disinformation ("2022 Strengthened Code"). The signatories - which include the likes of Google, Twitter, Meta, Microsoft, and TikTok - have recently launched the Transparency Centre, an online hub where visitors can access data regarding the actions taken and policies implemented under the 2022 Strengthened Code. The first set of reports has now been published, giving an indication of what is being done in practice to combat disinformation and misinformation online.

This article provides a brief overview of the 2022 Strengthened Code of Practice on Disinformation, together with a selection of findings from the industry reports published in the Transparency Centre. It concludes with an analysis of the challenges industry players and policymakers face when tackling false and misleading information online.

A BRIEF HISTORY OF THE CODE OF PRACTICE ON DISINFORMATION

In April 2018, the European Commission published an official communication, Tackling online disinformation: a European Approach (COM(2018) 236) ("Communication"), noting that large-scale disinformation and misinformation, including misleading or outright false information[i], is a major challenge. In particular, disinformation actively threatens free and fair political processes, and poses significant risks to public health systems, crisis management, the economy, and social cohesion, as well as to mental health and wellbeing. According to Commission research, the major themes of disinformation currently concern elections, immigration, health, the environment, and security policies: deceptive content regarding Covid-19, the Covid-19 vaccines, and the war in Ukraine has been especially pervasive.

In its Communication, the Commission specifically pointed to new technologies and social media as tools through which disinformation spreads with unprecedented speed and precision of targeting, thereby creating "personalised information spheres and becoming powerful echo chambers for disinformation campaigns." The Commission went on to argue that technology and digital media companies "have so far failed to act proportionately, falling short of the challenge posed by disinformation and the manipulative use of platforms' infrastructures."

In response to this criticism, in October 2018 representatives of leading tech companies, social media platforms and advertising agencies agreed to the self-regulatory 2018 Code of Practice on Disinformation ("2018 Code"). The 2018 Code marked the first time worldwide that industry players voluntarily agreed to a set of standards to fight disinformation, and participation broadened significantly in the years that followed. By way of follow-up to the 2018 Code, in 2021 the Commission published the new Guidance on Strengthening the Code of Practice on Disinformation (COM(2021) 262) ("2021 Guidance").

The 2021 Guidance set out the Commission's views regarding how platforms and other relevant stakeholders should improve upon the 2018 Code, in order to create a more transparent, safe and trustworthy online environment. The 2022 Strengthened Code is therefore industry's latest response to the 2021 Guidance, and contains renewed, more ambitious commitments aimed at countering online disinformation.

KEY FEATURES OF THE 2022 STRENGTHENED CODE

In the 2022 Strengthened Code, the signatories acknowledge their important role in combatting disinformation, which for the purposes of the initiative is defined to include "misinformation, disinformation, information influence operations, and foreign interference in the information space." Accordingly, the 2022 Strengthened Code contains 44 commitments and 128 specific measures relating to those commitments, in the following thematic areas:

  • Scrutiny of ad placements: measures include demonetisation of disinformation.

  • Political advertising: measures include labelling and verification of political or issue advertising.

  • Integrity of services: measures include improving transparency obligations for AI systems, notably deepfakes and their use in manipulative practices.

  • Empowering users: measures include enhancing media literacy, improving functionalities to flag harmful false information, and implementing more transparent content review appeal mechanisms.

  • Empowering the research community: measures include disclosing data to disinformation researchers and academics.

  • Empowering the fact-checking community: measures include cooperating with, and integrating the services of, fact-checkers.

  • Permanent Taskforce and Monitoring: measures include establishing a forum to review and adapt the commitments in view of technological, societal and legislative developments.

In addition to the above, in its 2021 Guidance the Commission called upon the signatories of the 2018 Code to form a Transparency Centre, through which all the data, policies, and metrics relevant to the fight against disinformation would be published and freely accessible. The commitment to create a Transparency Centre was formally adopted in the 2022 Strengthened Code.

SAMPLE OF TRANSPARENCY CENTRE FINDINGS

As of 20 March 2023, most of the 34 signatories have published their baseline reports, which are now available in the Transparency Centre reports archive. The findings below are quoted directly from the reports and are intended as an illustrative sample for general information purposes only.

Google - "Due to the invasion of Ukraine, Google Advertising adapted and enforced wide ranging actions, such as pausing Google Ads in Russia as well as prohibiting the monetisation of any Russian-Federation state funded media."

Meta (Facebook) - "In Q3 2022, Meta took action against 1.5 billion fake accounts on Facebook globally, which [Facebook] estimate represented approximately 5% of [their] worldwide monthly active users."

TikTok - If a user searches for a hashtag on topics such as Covid-19 Vaccine, Holocaust Denial or War in Ukraine, "the user will be displayed a public service announcement that will remind users about [TikTok's] Community Guidelines and present them with links to trusted resource pages."

Twitter - "Under Twitter's new ownership, [Twitter] launched the subscription service, Twitter Blue, which is designed, in part, to authenticate user identities and thereby reduce the prevalence of spam and viral disinformation."

Adobe - "In June 2022, Adobe released a suite of open-source developer tools based on the Coalition for Content Provenance and Authenticity (C2PA) specification - for free. This is helping to get provenance tools into the hands of millions of creators and developers to create a safer, more transparent digital ecosystem, while providing users with information to be better informed about the content they see online."

NewsGuard - "NewsGuard participated in numerous media literacy events with students, journalists, librarians, teachers and citizens […]. [NewsGuard] editors have conducted pro bono media literacy seminars in secondary schools, universities, journalism schools, and professional associations."

Microsoft (Bing) - "Microsoft and key members of the Bing Search team are also involved in the Partnership on AI ("PAI") to identify possible countermeasures against deepfakes and has participated in the drafting and refinement of PAI's proposed Synthetic Media Code of Conduct. The proposed Code of Conduct provides guidelines for the ethical and responsible development, creation, and sharing of synthetic media (such as AI-generated artwork)."

CONCLUSION

One of the principal challenges in tackling disinformation is that the removal or moderation of harmful content must also take into consideration other important rights and social values, including the rights to freedom of expression and freedom of information. As stressed in the initial Communication, fundamental rights must be "fully respected in all actions taken" to fight disinformation. The signatories have therefore acknowledged the delicate balance that must be struck between protecting fundamental rights on the one hand, and taking effective action to limit the spread and impact of otherwise lawful content on the other. That said, such a task is likely to prove very difficult indeed, not least because social mores and national legal systems differ on how to treat "awful but lawful" content.

It is also important to stress that the 2022 Strengthened Code is an instrument of self-regulation, and signatories have the final say as to which commitments they sign up to. Several signatories have decided to adopt only a select few commitments and measures, and thus the impact of the 2022 Strengthened Code may not be as robust as the Commission would like. Although the Commission has stated that "as a whole, the Code fulfils the expectations" set out in its 2021 Guidance, the 2022 Strengthened Code is neither officially endorsed nor enforced by the Commission. It remains to be seen how effective it will be, and to what extent new (or enhanced) legislation will be needed.

These challenges notwithstanding, the Transparency Centre reports will likely prove a welcome step forward in the fight against online disinformation. The report template obliges signatories to provide detailed explanations of the proactive steps they have taken against false and manipulative content, as well as to explain best practices and plans for the future. With such information shared openly, academics and policymakers can better understand which policies and procedures produce desirable outcomes, which will undoubtedly inform future legislation in this area. Organisations will also have a powerful opportunity to learn from one another and to collaborate across the digital ecosystem against bad actors.

Further information

DAC Beachcroft's technology and media lawyers advise clients ranging from scaling businesses to government bodies on digital media, internet governance, and other technology regulatory issues. The author of this article, Kelsey Farish (Associate), is one of Europe's leading legal experts on synthetic media. She is an advisory board member of the VERification Assisted by Artificial Intelligence (vera.ai) project, funded by EU Horizon Europe and Innovate UK.

Footnote

[i] Both terms, 'misinformation' and 'disinformation', are used to describe false or inaccurate information, especially that which is deliberately intended to deceive. Broadly speaking, however, 'disinformation' refers to information that is purposefully spread by state actors or governments for political purposes. Throughout Commission documentation and in the 2022 Strengthened Code itself, the terms 'misinformation', 'disinformation', and similar are collectively referred to as 'disinformation'.