Covington & Burling LLP

01/26/2023 | News release

Brazil’s Senate Committee Publishes AI Report and Draft AI Law

On December 1, 2022, a committee of the Brazilian Senate presented a report (currently available only in Portuguese) with research on regulatory approaches to artificial intelligence ("AI"), as well as a draft law (see pages 15-58) ("Draft AI Law") to serve as the starting point for the Senate's deliberations on new AI legislation. In preparing the 900+ page report and Draft AI Law, the committee drew inspiration from earlier proposals and its own research into how OECD countries are regulating (or planning to regulate) AI, as well as input received during a public hearing and written comments from stakeholders. This blog post highlights 13 key aspects of the Draft AI Law.

(1) Foundational Principles

The Draft AI Law says that the development, implementation, and use of AI in Brazil must adhere to the principle of good faith, as well as (among others):

  • self-determination and freedom of choice;
  • transparency, explainability, intelligibility, traceability, and auditability (to avoid the risks of both intentional and unintentional uses);
  • human participation in (and supervision of) the "AI life cycle";
  • non-discrimination;
  • justice, equity, and inclusion;
  • legal process, contestability, and compensatory damages;
  • reliability and robustness of AI and information security; and
  • proportionality and efficacy when using AI to achieve these objectives.

(2) Definition of an "AI System"

The Draft AI Law defines an "AI system" as a computational system with varying degrees of autonomy that is designed to infer how to achieve a given set of objectives, using approaches based on machine learning, logic, and knowledge representation and drawing on data inputs received from machines or humans, with the goal of producing predictions, recommendations, or decisions that can influence the virtual or real environment. This definition appears to align, at least in part, with the OECD's definition of the same term, from which other regimes have also drawn inspiration when formulating their AI legislative proposals.

(3) Risk Assessment

Providers and users of AI systems must conduct and document a risk assessment prior to placing any AI system on the market.

(4) High-Risk AI Systems

The Draft AI Law offers an enumerated list of "high-risk" AI systems, which include AI systems used in the following contexts (among others): securing the operation of critical infrastructure; education and vocational training; recruiting; credit scoring; use of autonomous vehicles (if such use could cause bodily harm to natural persons); and biometric identification. Notably, the Draft AI Law also classifies health applications (e.g., medical devices) as high-risk AI systems. The to-be-determined competent authority (see "Enforcement" below) is responsible for periodically updating the list in accordance with a number of criteria set out in the Draft AI Law.

(5) Public Database of High-Risk AI Systems

The competent authority is also tasked with creating and maintaining a publicly accessible database of high-risk AI systems, which will contain (among other information) the completed risk assessments of providers and users of such systems. Such uploaded assessments will be protected under applicable intellectual property and trade secret laws.

(6) Prohibited AI Systems

Brazil's Draft AI Law prohibits AI systems that (i) deploy subliminal techniques, or (ii) exploit the vulnerabilities of specific groups of natural persons, whenever such techniques or exploitation are intended to harm, or have the effect of harming, the health or safety of the user. The Draft AI Law also prohibits public authorities from conducting social scoring and from using biometric identification systems in publicly accessible spaces, unless a specific law or court order expressly authorizes the use of such systems (e.g., for the prosecution of crimes).

(7) Rights of Individuals

The Draft AI Law grants persons affected by AI systems the following rights vis-à-vis "providers" and "users" of AI systems, regardless of the risk-classification of the AI system:

  • Right to information about their interactions with an AI system prior to using it - in particular, by making available information that discloses (among other things): the use of AI, including a description of its role, any human involvement, and the decision(s), recommendation(s), or prediction(s) it is used for (and the resulting consequences); the identity of the provider of the AI system and the governance measures adopted; the categories of personal data used; and the measures implemented to ensure security, non-discrimination, and reliability;
  • Right to an explanation of a decision, recommendation, or prediction made by an AI system within 15 days of the request - in particular, information about the criteria and procedures used and the main factors affecting the particular prediction or decision (e.g., the rationale and logic of the system and how much each factor affected the decision made);
  • Right to challenge decisions or predictions of AI systems that produce legal effects or significantly impact the interests of the affected party;
  • Right to human intervention in decisions made solely by AI systems, taking into account the context and the state of the art of technological development;
  • Right to non-discrimination and the correction of discriminatory bias, particularly where it results from the use of sensitive personal data leading to (a) a disproportionate impact arising from protected personal characteristics, or (b) disadvantages or vulnerabilities for people belonging to a specific group, even when apparently neutral criteria are used; and
  • Right to privacy and the protection of personal data, in accordance with the Brazilian General Data Protection Law ("LGPD").

(8) Governance Measures and Codes of Conduct

Providers and users of all AI systems must establish governance structures and internal processes capable of ensuring the security of such systems and facilitating the rights of affected individuals, including (among others) testing and privacy-by-design measures.

Providers and users of "high-risk" AI systems must implement additional measures, such as: conducting an algorithmic impact assessment that must be made publicly available, which may need to be periodically repeated; designating a team to ensure the AI system is informed by diverse viewpoints; and implementing technical measures to assist with explainability.

Further, providers and users of AI systems may also draw up codes of conduct and governance to support the practical implementation of the Draft AI Law's requirements.

(9) Serious Security Incidents

Providers and users of AI systems must notify the competent authority of serious security incidents, including where there is a risk to human life or the physical integrity of people, an interruption of critical infrastructure operations, serious damage to property or the environment, or any other serious violation of fundamental human rights.

(10) Civil Liability

Providers and users of AI systems are responsible for damage caused by the AI system, regardless of the system's degree of autonomy. Further, providers and users of "high-risk" AI systems are strictly liable to the extent of their participation in the damage, as their fault in causing the damage would be presumed.

(11) Copyright

The automated use of existing works by AI systems - such as their extraction, reproduction, storage, and transformation in text- and data-mining processes - for activities carried out by research organizations and institutions, journalists, museums, archives, and libraries will not necessarily constitute copyright infringement under certain scenarios listed in the Draft AI Law.

(12) Sandboxes

The Draft AI Law provides that the competent authority may regulate testing environments (regulatory sandboxes) to support the development of innovative AI systems.

(13) Enforcement

The Brazilian Government must designate a competent authority to oversee the implementation and enforcement of the Draft AI Law. Depending on the violation, administrative fines of up to 50 million Reais (approximately 9 million Euros) or 2% of a company's annual turnover may be imposed.

Next steps

The Brazilian Senate will use the Draft AI Law as a basis for drafting and approving a bill, which will then be discussed in the Chamber of Deputies.

* * *

Covington regularly advises the world's top technology companies on their most challenging regulatory and compliance issues in the U.S., Europe, and other major markets. If you have questions about the regulation of Artificial Intelligence, or other tech regulatory matters, please do not hesitate to contact us.