12/13/2023 | News release | Distributed by Public on 12/13/2023 11:54
On December 8, 2023, EU policymakers reached an agreement on the Artificial Intelligence Act (AI Act). As a standard-bearer for global digital and data governance, the EU has set regulatory benchmarks on emerging issues ranging from data privacy to targeted advertising practices. After a marathon legislative process that began in April 2021, the EU AI Act will become the world's first comprehensive law regulating artificial intelligence. Companies worldwide that use, develop, or distribute AI will soon feel the "Brussels Effect" as the Act raises AI governance standards across the global economy. While the formal text is still under final revision, senior EU officials anticipate that the historic AI Act will be adopted in April 2024, with certain requirements taking effect in 2026, two years after passage.
The EU's AI Act is a landmark legal framework first introduced by the EU Commission in April 2021 (see the 2021 Commission version here). Its objectives are to ensure the safety, accountability, and transparency of AI systems in the EU market. Taking a risk-based approach, the EU AI Act will oversee AI system developers, distributors, importers, and users according to an AI system's potential adverse impact on individuals and society: the greater the potential harm, the stronger the oversight. The sense of déjà vu is palpable, as the AI Act will shape the global AI legislative framework just as EU privacy regulation, the General Data Protection Regulation (GDPR), has done since 2016.
The AI Act provides specific definitions and responsibilities for the different players in the AI ecosystem that develop, distribute, use, or supply AI systems for the EU market. In some cases, even if an entity is based outside the EU, the AI Act will still apply where "the output produced by the [AI] system is used in the [European] Union." For example, if an entity based in South America develops an AI system whose outputs are used to assess EU residents for loan or job applications, the Act may apply. As a result, the long-arm reach of the AI Act will likely hold accountable all parties across the distribution chain. AI systems used for military or national security purposes will be exempt, while EU lawmakers set strict conditions on the use of remote biometric identification (RBI) systems by law enforcement.
The draft AI Act adopted by the European Parliament in June 2023 (see the 2023 Parliament version here) has brought the following two key players into the limelight:
The draft 2023 AI Act places compliance obligations primarily on the AI Provider, just as the GDPR holds data controllers to greater accountability for adhering to data privacy principles. In reality, however, AI users (Deployers) will vastly outnumber those who develop and provide AI systems (Providers). For the high-risk AI systems described below, the AI Act requires Deployers to play a critical role in mitigating the risks of AI use in the EU market.
Taking a "risk-based approach," the AI Act seeks to regulate AI systems based on whether they could potentially jeopardize end-users' rights and safety under the following four risk categories:
1. Unacceptable AI: Recognizing their harm to EU democracy and human rights, the AI Act outright bans AI practices in the EU market that replicate the dystopian world of "Minority Report":
2. High-Risk AI: The AI Act determines that certain AI systems pose material potential harm in areas such as critical infrastructure, employment, the environment, credit scoring, elections, border control, health, and the rule of law. Developers of a "high-risk" AI system must meet certain requirements before its public release, including passing a conformity assessment, registering in an EU database, implementing data governance (including validation, testing, and training), conducting cybersecurity assessments, and affording users opportunities for appeal and redress.
3. Low-Risk AI: AI systems posing only minor risks, such as chatbots, are subject to transparency obligations. These low-risk AI systems must disclose to consumers that content is generated by AI.
4. Minimal or No Risk AI: Minimal-risk AI includes automated applications such as AI-powered summarizers, e-discovery tools, and spam filters. Most AI systems used in daily operations fall under this category.
Penalties for violating the AI Act have substantially increased, ranging from €7.5 million or 1.5 percent of global turnover up to €35 million or seven percent of global turnover, according to this Press Release. While the final text is pending, the 2023 Parliament version included the following tier-based table for assessing fines:
At its core, effective AI governance for corporations requires a comprehensive approach, including internal controls and supply-chain management. AI developers and users of high-risk AI in financial services, employment, critical infrastructure, medical devices, health care, and others need to conduct a close assessment of their existing operations and ask themselves the following questions:
The EU's agreement on the AI Act last Friday could have a domino effect on global legislative efforts to tackle AI risks. Together with the Biden administration's AI Executive Order released on October 30, 2023 (more details available in this Client Alert on Biden's AI Executive Order: What Private Sector Need to Consider), the new deal on the EU AI Act signals a paradigm shift in how businesses can leverage AI.
For more information or assistance on this topic, please contact Vivien Peaden, CIPP/US, CIPP/E, CIPM, PLS, or a member of Baker Donelson's AI Team.