EU AI Act: Spearheading the Effort to Regulate Artificial Intelligence


As AI becomes increasingly prominent in our lives, the European Union has introduced the first comprehensive regulation in this field: the EU AI Act. The widespread application of AI has created a growing demand for safeguards and transparency around AI-based systems. The EU AI Act therefore aims to ensure that fundamental rights, safety, and ethical principles are upheld.

The EU AI Act applies to providers of AI systems that are placed on the market or used in the EU, regardless of whether those providers are established in the EU, as well as to deployers, importers, and distributors of such systems.

Yet to be published in the Official Journal, the EU AI Act will enter into force 20 days after its publication. Most of the obligations set out in the EU AI Act will become applicable two years after entry into force, although certain provisions will take effect sooner.

Risk-based system

The AI Act categorizes AI systems according to the degree of risk they pose to people and society. This risk-based approach differentiates between AI practices that carry an unacceptable risk, high-risk AI systems that must adhere to strict requirements, and lower-risk applications that must fulfil general duties. Consequently, the first step for most businesses will be to evaluate the AI they use and how they use it, in order to determine which requirements apply.

The AI Act classifies AI systems into four main categories:

  1. AI systems that pose unacceptable risks include AI-based systems that:
  • Use manipulative, deceptive and/or subliminal techniques to influence a person to make a decision;
  • Implement social scoring, i.e. evaluating individuals based on their behavior, socio-economic background or personal traits;
  • Exploit vulnerabilities of persons due to their age, disabilities or specific social/economic situation in order to influence their behavior;
  • Use biometric data to categorize individuals based on their race, political opinion, religious or philosophical beliefs, sexual orientation etc.;
  • Create or expand facial recognition databases through the untargeted scraping of facial images from the internet or closed-circuit television footage.

Systems that pose an unacceptable risk are prohibited and attract the highest penalties: companies face fines of up to EUR 35 million or 7% of their total worldwide annual turnover, whichever is higher.

  2. High-risk AI systems are those deemed to have a potentially negative impact on safety or fundamental rights, including biometric identification systems, systems used for employment evaluation or recruitment, and systems used in education or insurance.

Once it is established that a system falls within the high-risk category, a set of minimum requirements must be complied with:

  • Registration with the centralized EU database;
  • Having a compliant quality management system;
  • Maintaining adequate documentation and logs;
  • Undergoing relevant conformity assessment;
  • Complying with restrictions on the use of high-risk AI;
  • Continuing to ensure regulatory compliance and being prepared to demonstrate such compliance upon request.
  3. Limited risk AI systems present lower risks and need to adhere to less stringent safeguards, the main requirement being transparency. For example, a chatbot used to provide customers with automated responses must be labelled as such.
  4. Minimal risk AI systems face lighter regulatory burdens because they pose minimal risks to individual rights, safety or societal values. While no specific requirements are mandated for this category, the adoption of voluntary codes of conduct is recommended.
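
The first step described above, evaluating where a given AI system falls among these four tiers, can be illustrated with a short sketch. The snippet below is a minimal, purely illustrative triage in Python: the keyword lists, the triage function and the example use cases are hypothetical assumptions rather than an official classification method, and the fine calculation simply encodes the "EUR 35 million or 7% of annual turnover, whichever is higher" rule mentioned above for prohibited practices.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical keyword lists for a first-pass internal inventory (illustrative only).
PROHIBITED_PRACTICES = {"social scoring", "subliminal manipulation",
                        "untargeted facial scraping", "biometric categorisation"}
HIGH_RISK_USES = {"biometric identification", "recruitment", "education", "insurance"}
TRANSPARENCY_USES = {"chatbot", "deepfake"}

def triage(use_case: str) -> RiskTier:
    """Roughly map a described use case to one of the four risk tiers."""
    description = use_case.lower()
    if any(term in description for term in PROHIBITED_PRACTICES):
        return RiskTier.UNACCEPTABLE
    if any(term in description for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    if any(term in description for term in TRANSPARENCY_USES):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

def max_fine_for_prohibited_practice(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited practices:
    EUR 35 million or 7% of annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

if __name__ == "__main__":
    print(triage("chatbot answering customer questions"))   # RiskTier.LIMITED
    print(triage("recruitment screening of CVs"))            # RiskTier.HIGH
    print(max_fine_for_prohibited_practice(1_000_000_000))   # 70000000.0
```

In practice, such a first-pass inventory would only flag systems for a proper legal assessment against the Act's definitions and annexes.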

General Purpose AI

General-purpose AI (GPAI) is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.

These AI systems must adhere to transparency obligations. For example, AI systems intended to interact directly with humans must be clearly identified as such, unless this is evident from the context. Additionally, certain GPAI models considered to pose systemic risk are subject to further obligations.

European Artificial Intelligence Board

A notable aspect of the EU AI Act is its emphasis on the protection of fundamental rights. It explicitly outlines safeguards against discrimination and bias, ensuring AI systems do not infringe upon human dignity, privacy, or autonomy. To facilitate compliance, the EU AI Act establishes a European Artificial Intelligence Board (EAIB), responsible for harmonizing standards and providing guidance to member states. This centralized oversight aims to streamline regulatory efforts and promote consistent implementation across the EU.

Legal framework in Romania

As far as Romania is concerned, a legislative proposal on AI is currently going through the legislative process. The proposal has been criticized for being too descriptive, which may hinder its approval. Moreover, it regulates the field only briefly, so further steps will be needed to align national legislation with the EU AI Act. The proposal addresses only the prohibition on using artificial intelligence to automate human resources processes and the prohibition on using individuals' biometric data for purposes other than crime prevention and detection, without referring to the other provisions, obligations and prohibitions laid down in the EU AI Act.

Conclusion

The EU AI Act marks a pivotal moment in AI regulation, reflecting a commitment to ethical AI development while balancing innovation and accountability. Its implementation is poised to reshape the future trajectory of AI within the European Union and beyond, setting a benchmark for responsible AI governance worldwide.
