EU Artificial Intelligence Act

Privacy Research Team, Securiti
8 min read · Jan 5, 2023

--

1. Introduction

The European Commission tabled the proposal for the Artificial Intelligence Regulation on April 21, 2021. Since then, successive Council Presidencies have recommended revisions and amendments to the Proposal. On 3 May 2022, the European Parliament published its draft report on the Proposal.

The draft text is due to be voted on jointly by the Internal Market and Consumer Protection (IMCO) and the Civil Liberties, Justice and Home Affairs (LIBE) committees in late September. Once the final text is adopted by the European Parliament and the member states, it will become directly enforceable across the EU.

The EU Artificial Intelligence Act is the first law on Artificial Intelligence in the world that aims to facilitate a single market for AI applications. It lays out general rules for deploying AI-driven systems, products, and services within EU territory, with the aim of protecting the fundamental rights and interests of individuals. The AI Regulation deals with the protection of both personal and non-personal data.

2. Who Needs to Comply with the Law

2.1 Material Scope

The Proposed Regulation applies to providers placing AI systems on the market and to users of AI systems. A provider is an entity that develops an AI system and places it on the European Union market. The Regulation also applies to users who deploy AI systems in a professional, non-personal capacity.

2.2 Territorial Scope

This Regulation applies to:

  • Providers and users of AI systems in the European Union, regardless of whether they are based in the Union or a third country.
  • Providers and users of AI systems who are situated in a third country where the output produced by the system is used in the European Union.

2.3 Exemptions

The following are exempted from the EU Artificial Intelligence Act:

  • AI systems created or used solely for military purposes;
  • Public authorities in third countries and international organizations using AI systems under international agreements for law enforcement or judicial cooperation.

3. Regulatory Authority

The proposed legislation calls for establishing the European Artificial Intelligence Board (EAIB) as a new enforcement authority at the Union level. The EAIB will be responsible for establishing codes of conduct. Member states are also required to designate one or more national authorities to ensure compliance with the provisions of the Regulation.

4. Definitions of Key Terms

4.1 Artificial Intelligence System (AI system)

An Artificial Intelligence System (AI system) is software created using one or more of the techniques and approaches mentioned in Annex I that generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

Annex I provides an exhaustive list of the techniques and approaches that qualify software as an AI system, such as machine learning approaches, statistical approaches, and logic- and knowledge-based approaches.

4.2 Provider

The provider is defined as a “natural or legal person,” “public authority,” “agency,” or “other body” that creates or commissions the creation of an artificial intelligence (AI) system to commercialize it or deploy it in service under its name or trademark, whether in exchange for money or for free.

4.3 Importer

An importer is any natural or legal person established in the Union who places on the market or puts into service an AI system bearing the name or trademark of a natural or legal person established outside the Union.

4.4 User

User refers to any natural or legal person, public authority, agency, or other body using an AI system under its control, except when it is used during personal, non-professional activity.

4.5 Authorized Representative

An authorized representative is any natural or legal person established in the Union who has received a written mandate from the provider of an AI system to perform, on that provider’s behalf, the obligations and procedures established by this Regulation.

4.6 Law Enforcement

Law enforcement refers to activities carried out by law enforcement authorities to prevent, investigate, detect, or prosecute criminal offences, or to execute criminal penalties, including safeguarding against and preventing threats to public security.

4.7 National Supervisory Authority

National supervisory authority refers to the body to which a Member State delegates responsibility for carrying out and enforcing this Regulation, coordinating the tasks assigned to that Member State, serving as the Commission’s sole point of contact, and speaking on that Member State’s behalf at the European Artificial Intelligence Board.

5. Obligations for Organizations Under EU Artificial Intelligence Act

Under the Proposed Regulation, AI systems are divided into four risk categories. The obligations attached to an AI system vary with the level of risk of the category it falls into.

Risk Category #1 — Unacceptable Risk AI Systems

Systems in this category clearly threaten people’s safety, livelihoods, and fundamental rights. Such AI systems are prohibited.

The following artificial intelligence systems cannot be placed, operated, or used:

  1. AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort their behavior in a way that causes, or is likely to cause, physical or psychological harm;
  2. AI systems that exploit a vulnerability of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behavior of a member of that group in a way that causes, or is likely to cause, physical or psychological harm;
  3. ‘Real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes. These are permitted only under limited conditions, for example where they are considered strictly necessary to protect a substantial public interest that outweighs the risks to the rights and freedoms of individuals;
  4. AI systems used by or on behalf of public authorities to evaluate or classify the trustworthiness of natural persons over time based on their social behavior or known or predicted personal or personality characteristics, with the resulting social score leading to either or both of the following:
       • detrimental or unfavorable treatment of certain natural persons or whole groups of them in social contexts unrelated to the contexts in which the data was originally generated or collected;
       • detrimental or unfavorable treatment of certain natural persons or groups of them that is unjustified or disproportionate to their social behavior.

Risk Category #2 — High-Risk AI Systems

This category includes artificial intelligence systems that pose a high risk to the health, safety, or fundamental rights of individuals. Such AI systems are permitted to be used subject to certain conditions and an ex-ante conformity assessment.

High-risk AI systems must ensure documentation, data quality, traceability, human oversight, data accuracy, cybersecurity, and robustness to limit any risks to the fundamental rights of individuals.

In addition, high-risk AI systems are subject to transparency requirements: providers of those AI systems must inform end users that they are interacting with an AI system. Moreover, organizations must conduct ex-ante conformity assessments before placing such high-risk AI systems on the market.

The European Commission lists the following as high-risk AI systems:

  1. Critical infrastructures that could endanger citizens’ lives or health, such as transportation;
  2. Educational or vocational training that may determine a person’s access to education and professional course of life (e.g., scoring of exams);
  3. Safety components of regulated products, i.e., products covered in Annex II (such as the use of AI in robot-assisted surgery);
  4. Employment, management of employees, and access to self-employment (e.g., software for sorting CVs during recruitment);
  5. Essential private and public services, such as credit scoring that could deny citizens the opportunity to obtain a loan;
  6. Law enforcement uses that may interfere with people’s fundamental rights (such as evaluating the reliability of evidence);
  7. Management of migration, asylum, and border control (such as verifying the authenticity of travel documents);
  8. Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).

Risk Category #3 — Limited Risk AI Systems

AI systems in this category are subject to specific transparency obligations. Unless it is obvious from the context and circumstances of use, providers of such AI systems must ensure that natural persons are notified that they are interacting with an AI system.

This allows natural persons to make an informed decision about whether to use the AI system in a given situation. For instance, users of the following AI systems are required to be transparent:

  1. Users of biometric categorization systems or emotion recognition systems must inform the exposed natural persons of the system’s operation.
  2. Users of an AI system that generates or manipulates image, audio, or video content that appreciably resembles real persons, objects, places, or events and would falsely appear to be authentic or truthful (deep fakes) must disclose that the content has been artificially generated or manipulated.

The transparency obligations do not apply to AI systems that have been authorized by law for law enforcement purposes unless such systems are available for the public to report a criminal offense.

Risk Category #4 — Minimal Risk AI Systems

This category contains AI systems like spam filters or video games that use AI technology but pose little to no harm to citizens’ safety or rights. Most AI systems fit into this category, and the Regulation permits unrestricted use of these applications without imposing any new requirements.
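The four tiers above amount to a simple lookup from risk category to regulatory treatment. The sketch below models that mapping in Python; the tier names and the obligation summaries are paraphrases of this article, not terms defined in the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories of the Proposed Regulation."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping of each tier to its headline treatment under the Act.
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "permitted subject to ex-ante conformity assessment and ongoing requirements",
    RiskTier.LIMITED: "permitted subject to transparency and disclosure obligations",
    RiskTier.MINIMAL: "permitted without additional requirements",
}

def treatment(tier: RiskTier) -> str:
    """Return the headline regulatory treatment for a risk tier."""
    return TIER_OBLIGATIONS[tier]

print(treatment(RiskTier.HIGH))
```

In practice, classifying a concrete system into a tier is a legal assessment against Annexes I–III, not a table lookup; the table only captures what follows once the tier is known.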

6. Data Subject Rights

The AI Regulation Proposal is without prejudice to the GDPR, and it must be read together with the GDPR regarding the fulfillment of data subjects’ rights.

6.1 Right to be Informed

The AI Regulation requires high-risk and limited-risk AI systems to keep individuals informed that they are interacting with an AI system, unless this is clearly evident from the context and circumstances of use.

6.2 GDPR Rights

Organizations subject to the Proposed AI Regulation are required to facilitate the fulfillment of data subjects’ rights as per the provisions of the GDPR wherever personal data processing is involved. The AI Regulation complements Article 22 of the GDPR, which grants individuals the right not to be subject to a decision based solely on automated processing.

7. Penalties for Non-compliance

If the Regulation is violated, there could be serious penalties:

  1. Administrative fines of up to 30,000,000 EUR or 6% of the annual global turnover of the preceding financial year, whichever is higher, may be imposed for certain violations, such as breaches of the prohibition on certain AI practices or of the data governance requirements for high-risk AI systems.
  2. Other forms of non-compliance are subject to fines up to 20,000,000 EUR or 4% of the global annual turnover of the prior fiscal year, whichever is higher.
  3. A fine of up to 10,000,000 EUR or 2% of the annual global turnover of the prior financial year, whichever is higher, may be imposed for providing false or misleading information to regulatory bodies or national competent authorities in response to a request.
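Each fine tier caps the penalty at the higher of a fixed amount and a percentage of the preceding year’s worldwide annual turnover. A minimal sketch of that "whichever is higher" rule (the function name and the example turnover figures are illustrative, not from the Act):

```python
def max_fine_eur(annual_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_pct: float) -> float:
    """Upper bound of an administrative fine: the higher of a fixed cap
    and a percentage of the preceding year's global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Tier 1 (prohibited practices / data governance breaches): up to 30M EUR or 6%.
print(max_fine_eur(1_000_000_000, 30_000_000, 0.06))  # 6% of 1B = 60M, above the 30M floor
# Tier 3 (misleading information to authorities): up to 10M EUR or 2%.
print(max_fine_eur(100_000_000, 10_000_000, 0.02))    # 2% of 100M = 2M, so the 10M floor applies
```

The structure mirrors GDPR-style fines: large companies are bounded by the turnover percentage, while the fixed amount sets the ceiling for smaller ones.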

8. How an Organization Can Operationalize the Law

Organizations that process personal data through the use of AI systems must align their operations and ensure their practices comply with the EU Artificial Intelligence Act by:

  • Determining lawful purposes for the implementation of an AI system and ensuring that data is being processed for those determined purposes with the help of an effective data mapping exercise,
  • Ensuring data is limited to what is required for the purpose by addressing risks around data storage and complying with data retention policies,
  • Informing data subjects that they are interacting with an AI system and the rights they have with the help of effective and dynamic privacy notices and policies,
  • Ensuring data security by taking appropriate security measures,
  • Conducting conformity assessments to identify risks prior to the implementation of the AI system and taking mitigation steps accordingly, and
  • Facilitating data subjects’ rights fulfillment and protection of personal data as per the provisions of the GDPR.

How Can Securiti Help

As countries witness a profound transition in the digital landscape, automating privacy and security processes for quick action is essential. Organizations must become even more privacy-conscious in their operations and diligent custodians of their customers’ data.

Securiti uses the PrivacyOps architecture to provide end-to-end automation for businesses, combining reliability, intelligence, and simplicity. Securiti can assist you in complying with the EU Artificial Intelligence Act and other privacy and security regulations worldwide. See how it works and request a demo today.

Source: https://securiti.ai/eu-artificial-intelligence-act/
