Dentons US LLP


European Union's Artificial Intelligence Act – from Vietnam's perspective

April 16, 2024

On March 13, 2024, the European Parliament approved the Artificial Intelligence Act ("AI Act" or "Act"), marking a significant milestone in the field of Artificial Intelligence ("AI"). The law is an innovative attempt to provide a legal framework for the use of AI within the European Union ("EU"), placing an emphasis on responsible development and deployment, with particular attention to trustworthiness, safety and ethical considerations in creating and/or using AI. While the legislation must undergo a few more steps before it is enacted, it is expected to come into force in April or May this year and, following a 24-month grace period, will be fully applicable by around June 2026.

With the EU setting a global example for AI regulation, can Vietnam adapt these approaches to build its own effective AI regulation as AI development booms?

The AI Act defines an "AI system" as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

In addition, the AI Act establishes a risk-based approach, classifying AI systems into four levels of risk: unacceptable risk, high risk, limited risk and minimal risk.

AI systems deemed to pose an unacceptable risk are prohibited under the AI Act. These include AI applications that infringe upon people's fundamental rights, such as biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases. The AI Act also forbids emotion recognition in the workplace and educational institutions; social scoring; predictive policing based solely on profiling a person or assessing their personality traits and characteristics; and AI that manipulates human behavior or exploits people's vulnerabilities.

It is worth noting that, despite falling within the "unacceptable risk" category, remote biometric identification systems may still be used by law enforcement to search for missing people, identify suspects in serious crimes, or prevent terrorism, provided that prior judicial approval is obtained and use is restricted to serious crimes on a specific list.

For AI systems that pose a high risk, potentially creating an adverse impact on people's health, safety or fundamental rights, strict obligations will apply. These include rules on mandatory Fundamental Rights Impact Assessments, Conformity Assessments, data governance requirements, registration in an EU database, quality and risk management systems, transparency, human oversight, accuracy, robustness and cybersecurity. Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Limited risk refers to the risks associated with lack of transparency in AI usage. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.

The AI Act allows the free use of minimal-risk AI. Systems presenting minimal risk for people (e.g. spam filters) will not be subject to further obligations beyond currently applicable legislation.

Consistent with this risk-based approach, the AI Act also sets out penalties and fines tiered according to the classification of AI systems and their associated risk levels.

The most substantial fines apply to the use of systems prohibited under the AI Act because of the unacceptable level of risk they pose. Such violations are subject to fines of up to €35,000,000 or up to 7% of annual worldwide turnover for companies.

The next tier of fines applies to non-compliance with specific obligations imposed on providers, representatives, importers, distributors, deployers, notified bodies and users, which is subject to fines of up to €15,000,000 or up to 3% of annual worldwide turnover for companies.

A further tier applies where an entity supplies incorrect, incomplete or misleading information to authorities, which is subject to fines of up to €7,500,000 or up to 1% of total worldwide turnover for companies.

The EU's AI Act is considered a groundbreaking regulation that aims to comprehensively govern artificial intelligence. However, it has raised concerns among certain politicians and the tech industry in Europe about its potential negative impact on the bloc's competitiveness and the possibility of triggering a withdrawal of investment. Critics also argue that imposing strict obligations on developers of the advanced technologies that underpin many downstream systems is likely to hinder innovation in Europe and lead to a "brain drain" in this field.

Despite these mixed opinions, the world's first comprehensive law on artificial intelligence is approaching implementation. The enormous influence of AI on the global economy and society cannot be denied, making the establishment of a legal framework for AI an urgent task.

Against this backdrop, Vietnam, like many other countries, does not yet have a specific legal framework to regulate and manage artificial intelligence. Vietnamese legal experts have therefore closely monitored the implementation of the AI Act and its potential future impacts in order to evaluate and propose an approach suited to Vietnam's artificial intelligence development goals. The AI Act might inspire Vietnam to adopt a similar risk-based classification system, encouraging AI research, development and entrepreneurship while ensuring responsible AI development that prioritizes ethical considerations alongside economic benefits.

In addition, as the AI Act enters into force in stages and takes full effect in 2026, Vietnamese enterprises that have been, are, or will be engaged in the EU market through AI applications, whether by developing AI models or marketing AI systems to EU corporations, will be required to comply with the regulatory framework of the EU's forthcoming AI Act. These enterprises need to proactively monitor developments, seek legal advice from law-practicing organizations and devise operational plans for their businesses. By proactively adapting to the requirements of the AI Act, Vietnamese enterprises can remain competitive in the European market while fostering responsible and trustworthy AI development practices.

In conclusion, the EU's AI Act represents a pivotal step toward shaping responsible AI governance globally. By addressing risk assessment, ethical considerations and innovation, the AI Act provides a comprehensive framework for AI regulation. As Europe leads the way, other jurisdictions, including Vietnam, can draw valuable insights from this landmark legislation.