11/30/2023 | News release | Distributed by Public on 11/29/2023 22:11
This month's Digital Matters, our monthly round-up of news, research, events, and notable uses of tech, navigates a variety of government and international approaches to AI regulation, exploring how the many international fora and national initiatives center (or don't center) the wellbeing of technology users. There's a lot to cover: on the heels of the Biden administration's Executive Order on AI, the United Kingdom hosted the "first ever" global AI Safety Summit, while the G7 released a set of guiding principles for AI governance as part of the multilateral Hiroshima Process. Over at the UN's New York headquarters, the High-Level Advisory Body on AI began a series of multi-stakeholder convenings aimed at fostering a globally inclusive approach to international governance of AI. Meanwhile, members of Congress continue to introduce new legislation to inject transparency and accountability into "high-impact" AI systems. Complicating matters, the is-he-running-it-or-running-from-it saga of Sam Altman and OpenAI makes the tech-for-humanity space even murkier to navigate (at the time of publishing this piece, it appears he will be running it, with a new board of directors).
The pieces featured in the November Digital Matters take on key questions, such as how AI can be leveraged to strengthen democracy and improve the relationship between government and citizens, how conversations about tech governance can be made more inclusive, and how civil society and governments can push developers to prioritize human-centered design and built-in protections for tech's most vulnerable and at-risk users. We also take a look at two umbrella structures for digital public infrastructure (DPI): the Global Digital Public Infrastructure Repository, an initiative of the Indian G20 Presidency, and the "50-in-5" digitization campaign led by the UNDP, which will feature a strong focus on digital ID. Both initiatives merit further examination of how DPI can serve as a tool for inclusive development.
As efforts to regulate AI technology accelerate, the public and private sectors have been grappling with the roles and responsibilities of various stakeholders. While technological advances often move faster than policy, we are hopeful that new momentum on AI governance can help build public-private partnerships that take a proactive approach to the risks posed by these new technologies and incentivize safety by design to protect end users.
Public Interest Technologists React to Executive Order on AI by Public Interest Technology (PIT) at New America (Nov 16, 2023)
The Executive Order on AI contains a great deal of homework for federal agencies and civil society. In a new blog post from the PIT team, four leading public interest technologists react to the order. Afua Bruce of AnB Advisors commends the order's focus on addressing cybersecurity risks posed by AI, and urges federal agencies to use AI "to protect critical federal systems." Charlton McIlwain of New York University notes the order is "strikingly clear and straightforward" in its emphasis on using AI to protect civil rights and advance equity. Beth Simone Noveck of Northeastern University argues the order does little to address how AI can be used to streamline and simplify complex government processes, which can make it "easier for governments to listen to their citizens." While the executive order is a major step forward as the U.S. seeks to catch up to the pace of innovation, it is important to emphasize that it is only a starting point and more specific goals will need to be identified to ensure the order fulfills its promise to put citizens first.
AI Safety Summit, hosted by the United Kingdom (Nov 1-2, 2023)
During the first week of November, the United Kingdom hosted the "first global" AI Safety Summit, which brought together representatives from countries leading on AI development, industry experts, and civil society researchers and advocates for a two-day discussion on the safe and responsible development of AI. Key outcomes from the summit included the Bletchley Declaration on AI safety, which emphasized the need for AI systems to prioritize human-centric design, protect human rights and fundamental freedoms, and help bridge the digital divide. The UK summit is only the starting point for building global consensus on what safe AI development looks like: future summits hosted by South Korea and France will be held over the next year.
International Guiding Principles for Advanced AI Systems, G7 Hiroshima Process (Oct 30, 2023)
On the same day that the Biden administration released its Executive Order on AI, the Group of Seven (G7) leaders released their Guiding Principles and Code of Conduct on Artificial Intelligence as part of the multilateral Hiroshima Process inaugurated in May. The principles, which aim to build international standards and best practices for the design, development, deployment, and use of advanced AI systems, also include a voluntary Code of Conduct for AI developers. We hope that multinational convenings like the UK and G7 summits can promote the development of global guardrails that cut across national borders to ensure human rights and international norms are protected in the face of rapid AI development.
U.S. 2023 APEC Outcomes, State Department (Nov 17, 2023)
Digital governance efforts this month also went beyond AI. The U.S. hosted the latest convening of APEC Economic Leaders in San Francisco; high on the agenda for this year's meeting was building a "Digital Pacific" through the advancement of digital skills and connectivity. One of the key outcomes of the summit was the Digital Pacific Agenda, which commits the U.S. to working with APEC economies to shape the sustainable development of emerging digital technologies. APEC member states also endorsed a set of principles for facilitating access to Open Government Data, an initiative that will promote interoperability and access to public sector data. According to the principles, "when governments choose to make data available to the public, these datasets can enable innovation, foster government transparency and efficiency, and enable citizens to be more informed."
As new approaches to AI regulation continue to evolve and take shape at national and international levels, Gordon LaForge and Patricia Gruver-Barr remind us that in order for tech governance to be equitable, these conversations must be inclusive, and expand beyond the usual voices and countries. Fei-Fei Li argues that the ultimate driver and ethical underpinning of technological innovation should be improving the human condition, while Arati Prabhakar discusses how the Biden Executive Order on AI seeks to re-center societal risks in its governance approach. As the race to govern AI and other cutting-edge digital technologies only accelerates, these voices remind us to slow down and consider who we may be leaving out at every step of the way.
The Future of AI Governance: A Conversation with Arati Prabhakar, Carnegie Endowment (Nov 14, 2023)
Arati Prabhakar, Director of the White House Office of Science and Technology Policy, discussed the Biden administration's approach at a recent Carnegie Endowment event on AI governance. In her remarks, Prabhakar noted that the recently released Executive Order seeks to strike a balance between innovation and regulation in order to manage risks while seizing the benefits to be gained from AI. Prabhakar also emphasized that too often, we tend to focus on technological capabilities rather than the very human choices that decide what to automate, what to connect, and what data goes into algorithms. We agree that increased transparency and accountability during the development process can help ensure AI cannot be used to "magnify discrimination at scale," as Prabhakar put it. [Listen to Prabhakar's 11/14 remarks at Carnegie here.]
Minding the AI Power Gap: The Urgency of Equality for Global Governance by Gordon LaForge and Patricia Gruver-Barr, Tech Policy Press (Nov 17, 2023)
Gordon LaForge and Patricia Gruver-Barr of New America's Planetary Politics Initiative strike a positive tone on the momentum created by new governance initiatives on AI, which, they note, represent a "refreshing course correction" from previous efforts to set common standards, practices, and regulations around new digital technologies. However, they warn that high-level discussions on AI safety must expand beyond like-minded countries and take a whole-of-society approach to AI risks. Such risks include how these systems may increase global inequality, reinforce systemic injustice, and continue to sideline marginalized populations. There is much that leaders on AI governance can and must do to ensure these systems promote economic and social mobility and inclusive growth rather than widen the digital divide. [Read LaForge and Gruver-Barr's full report, "Governing the Digital Future," here.]
New Book: The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI by Fei-Fei Li (Nov 2023)
In "The Worlds I See," computer science heavyweight Fei-Fei Li chronicles her journey from immigrant in America to one of the leading voices on AI research and governance. Li, who is the co-director of Stanford's Human-Centered AI Institute, has been a longtime advocate for a human-first approach to technological development that seeks to enhance, rather than replace, human capabilities. "We should put humans in the center of the development, as well as the deployment applications and governance of AI," she writes. We couldn't agree more.
Alongside governmental and international efforts to promote AI safety and security, calls for tech companies to rein in powerful products have only gotten louder. New reports and analysis by civil society actors uncover several disturbing examples of how tech can fall short in protecting its most vulnerable users. Technology on its own isn't ethical, equitable, or inclusive; we believe it is up to policymakers and industry leaders to take the initiative to implement guardrails that protect the rights of all users.
Collaboration between governments can help ensure that digital transformation occurs in an equitable and inclusive manner. This month, the UNDP announced new momentum around digital public infrastructure as 11 "first-mover" countries committed to design, implement, and scale at least one DPI component by 2028, as part of the UN 50-in-5 campaign. The campaign hopes to "radically shorten" country-level DPI implementation by sharing knowledge, best practices, and digital public goods between countries. We're excited to see how collaborative efforts between countries can accelerate the adoption of DPI across country income levels, geography, and different places in the digital development journey.
Global Digital Public Infrastructure Repository (GDPIR) released, initiative of the Indian G20 Presidency (Nov 2023)
In a leading example of code-led diplomacy, and as a follow-up to its G20 Presidency, India's Ministry of Electronics and Information Technology (MeitY) created the Global Digital Public Infrastructure Repository, a collection of code created by governments and made freely available to other nations. The GDPIR is designed as an easily discoverable resource for key lessons and knowledge from G20 members and guest countries, and aims to address the existing knowledge gap around the right practices for designing, building, and deploying population-scale DPI. Each contributing participant chooses which information to display, helping others develop their own DPI. The repository is live, and contributors include Argentina, Australia, Bangladesh, Brazil, the European Union, France, Germany, India, Italy, Japan, Mauritius, Nigeria, Oman, the Republic of Korea, Russia, and Singapore.
Data Brokers and the Sale of Data on U.S. Military Personnel, Duke University (Nov 2023)
In a new study released this month, researchers at the Duke University Sanford School of Public Policy take aim at the multi-billion-dollar data brokerage industry, which comprises companies that profit from gathering, aggregating, and selling data on Americans. The study's authors uncovered the industry's willingness to sell private data on current and former U.S. military personnel cheaply and with minimal vetting. The data obtained by researchers included sensitive details such as individuals' names, addresses, family members, and health statuses, raising not only privacy issues but also critical national security risks. The study adds to growing calls for government action to manage the data brokerage ecosystem. Currently, no comprehensive federal consumer privacy law exists in the U.S., but that could all change as studies like these continue to expose gaps in the responsible and ethical sharing and use of data.
The Intersection of Federal Privacy Legislation & AI Governance, Event Recording, Open Technology Institute at New America (Nov 15, 2023)
Continuing the conversation on data privacy, earlier this month experts on privacy and AI came together virtually to discuss how the implementation of federal privacy rules would address harms that stem from the misuse of data that powers AI systems. In her opening remarks, keynote speaker Rep. Cathy McMorris Rodgers (R-WA), Chair of the House Energy and Commerce Committee, argued for a national data privacy and security standard to safeguard Americans' information. Panelists also called for more transparency from big tech on how their algorithms take in, analyze, and use personal data to make predictions. We believe more needs to be done by government actors to create safeguards and promote trusted practices for how tech companies handle personal data.
Regulators, Industry Ponder How to Integrate Online Safety Laws, Tech Policy Press (Nov 17, 2023)
This month, the Family Institute for Online Safety hosted its annual conference amid new global action and legislation on digital rights. This year's theme, "New Frontiers in Online Safety," covered topics at the intersection of online safety and parenting, such as content moderation and privacy. Though the U.S. does not currently have comprehensive online safety legislation, the passage of the UK's Online Safety Act last month and new stipulations to the EU's Digital Services Act will affect how tech companies and platforms ensure compliance across jurisdictions. Aligning global standards around online safety is critical not only for the tech industry, but also for the consistent protection of tech users, parents and kids alike.
Global Technology Summit, Carnegie India
Carnegie India is hosting the Global Technology Summit, which will address the momentum surrounding digital public infrastructure. The Summit will also explore use cases of AI, the evolving regulatory landscape, and issues such as skilling and innovation. It brings together industry experts, policymakers, scientists, and other stakeholders to deliberate on the changing nature of technology and geopolitics. This will be a hybrid event.
Data Governance in the Age of Generative AI, December 7-8
GWU's Digital Trade and Data Governance Hub and the NIST-NSF Trustworthy AI Institute, along with several partners, are hosting a two-day conference in Washington, DC to discuss data governance and AI. The global popularity and use of large language models for generative AI have revealed enforcement problems as well as gaps in the governance of data at the national and international levels. This will be a hybrid event.
Please consider sharing this post. If you have ideas or links you think we should know about, you can reach us at [email protected] or @DIGI_NewAmerica.