Law on Artificial Intelligence: new rules adopted


On March 13th, 2024, around three years after the publication of the proposal for a regulation laying down harmonized rules on Artificial Intelligence, the European Parliament approved the final text of the law on Artificial Intelligence, the so-called Artificial Intelligence Act (2021/0106(COD)) – in short: AI Act – after a lengthy legislative procedure.

The use of Artificial Intelligence is on the rise and has become indispensable in many areas. Many companies are now wondering whether the AI Act also applies to them or the AI systems they use and, if so, what specific obligations they face. We provide an initial overview:

What is meant by an “Artificial Intelligence System”?

The AI Act regulates the placing on the market, putting into service and use of “Artificial Intelligence Systems” or simply “AI systems.” To fall within its scope of application, the system in question must therefore be covered by this broadly defined term. An AI system is a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

This definition includes not only software applications such as image analysis software, voice and facial recognition systems, spam filters and chatbots, but also “embedded” AI, such as robots used in medicine or autonomous vehicles.

The main difference between AI systems and simple, “traditional” software is that AI systems are capable of learning, adapting dynamically to new situations and making complex decisions by analyzing large amounts of data. Conventional software, in contrast, operates statically on the basis of the instructions defined during its programming: it always executes the same functions and delivers predictable output.
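
To make this distinction tangible, here is a deliberately simplified Python sketch of our own (not taken from the AI Act): a static filter whose behavior is fully fixed at programming time, contrasted with a toy component that derives its behavior from training examples:

```python
# Our own simplified illustration, not an example from the AI Act.
from collections import Counter

# Conventional software: the rule is fixed at programming time and always
# yields the same, predictable output for the same input.
def static_spam_filter(message: str) -> bool:
    return "lottery" in message.lower()

# A toy learning component: its behavior is derived from training data,
# so the same code makes different decisions after seeing different data.
class LearnedSpamFilter:
    def __init__(self) -> None:
        self.spam_words: Counter = Counter()
        self.ham_words: Counter = Counter()

    def train(self, message: str, is_spam: bool) -> None:
        target = self.spam_words if is_spam else self.ham_words
        target.update(message.lower().split())

    def predict(self, message: str) -> bool:
        words = message.lower().split()
        spam = sum(self.spam_words[w] for w in words)
        ham = sum(self.ham_words[w] for w in words)
        return spam > ham

f = LearnedSpamFilter()
f.train("win the lottery now", is_spam=True)
f.train("meeting agenda for monday", is_spam=False)
print(f.predict("free lottery win"))  # True - inferred from data, not hard-coded
```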

Who does the AI Act apply to?

Virtually all companies in the value chain are subject to the scope of the AI Act, including in particular providers (i.e. developers), importers, distributors and operators (“deployers”, i.e. users) of AI systems in the EU. It should be emphasized that users are subject to the provisions of the AI Act not only if they are based in the EU, but also if the output produced by the AI system is used in the EU. The AI Act thus also applies to market participants from third countries.

However, there are also exceptions to the scope of the AI Act. In general, it does not apply to areas that are not covered by EU law, such as national security. It also does not apply to AI systems used exclusively for military purposes, or whose sole purpose is scientific research and development, or those published under free and open-source licenses (“Open-Source AI”, provided they are not marketed or put into operation as high-risk AI systems or prohibited AI practices). Natural persons who use the AI system as part of a personal and non-professional activity are also exempt; consequently, no special precautions need to be taken when using AI systems such as ChatGPT privately.

Do the same rules apply to all AI systems under the AI Act?

The AI Act follows a risk-based approach: the higher the potential risks associated with an AI system, the more requirements must be met. Against this background, AI systems will in future be classified into four risk categories: unacceptable risk (prohibited AI practices), high risk (high-risk AI systems), limited risk (AI systems intended to interact with natural persons), and minimal and/or no risk (all other AI systems, which do not fall within the scope of the AI Act). The applicable rules depend on the risk category.

In addition, the AI Act contains specific provisions for general-purpose AI models.

Which AI systems are prohibited?

Certain AI practices interfere particularly intensively with fundamental rights, thereby posing an unacceptable risk. As a result, they are strictly prohibited. These include:

  • AI systems for the purpose of cognitive behavioral manipulation;
  • AI systems that exploit the weaknesses of individuals (e.g. their age, disability, social or economic situation);
  • Biometric categorization systems that individually categorize natural persons on the basis of their biometric data to derive sensitive information (e.g. sexual orientation or religion);
  • AI systems for social scoring;
  • Real-time remote biometric identification systems in publicly accessible areas (exceptions may be made for law enforcement purposes under strict conditions);
  • AI systems for assessing the criminal risk of natural persons, e.g. purely on the basis of personality traits;
  • AI systems that create or expand databases for facial recognition through the untargeted reading of facial images from the internet or from video surveillance footage; and
  • AI systems for recognizing emotions in the workplace and in educational institutions.

What are high-risk AI systems?

AI systems that pose a high risk to the health and safety or fundamental rights of natural persons are considered high-risk. The classification can be complex in individual cases, as the AI Act provides for a dynamic classification system.

On the one hand, this includes AI systems that are used as safety components for products that are subject to certain EU product safety regulations (e.g. in vehicles or medical devices, see Annex II) or are themselves such a product. On the other hand, this includes certain areas of application listed in Annex III to the AI Act, which primarily affect fundamental rights. These include:

  • Biometric identification and categorization of individuals;
  • Critical infrastructures (e.g. in road traffic or in the water, gas, heat and electricity supply);
  • General and vocational education (e.g. when making decisions about access to educational institutions or for grading purposes);
  • Employment, personnel management and access to self-employment (such as assessment/selection for recruitment, decisions on promotions or dismissals);
  • Access to and use of essential private services and public services/benefits (e.g. with regard to obtaining/withdrawing subsidies, credit checks, dispatching/prioritizing the use of emergency and rescue services, risk assessment and pricing of life and health insurance);
  • Law enforcement (e.g. to assess the risk potential of criminals, when using lie detectors or to assess the reliability of evidence);
  • Migration, asylum and border control (e.g. when using lie detectors or to check the eligibility of asylum and visa applications and residence permits); or
  • Administration of justice and democratic processes (e.g. in the identification, application and interpretation of legal provisions).

Despite their listing in Annex III, the AI systems mentioned are not considered high-risk if they neither pose a significant risk to the health, safety or fundamental rights of natural persons nor significantly influence the outcome of decision-making. Providers may make this assessment themselves and must document it accordingly. This harbors a potential risk for users: if the provider’s assessment is incorrect, they run the risk of using a high-risk AI system without complying with the relevant obligations.

What obligations exist for a high-risk AI system?

Insofar as companies use a “third-party” high-risk AI system as users, they must fulfill the following obligations in particular:

  • Implementing appropriate technical and organizational measures (TOMs) to ensure that the AI system is used in accordance with the instructions for use;
  • Ensuring human oversight by natural persons who have the necessary competence, training and authority as well as the necessary support;
  • Ensuring that input data are relevant and sufficiently representative of the intended purpose of the AI system;
  • Continuous monitoring in accordance with the instructions for use and ensuring that the use of the AI system is suspended if there is reason to believe that its application poses a disproportionate risk to the health, safety or protection of fundamental rights of individuals;
  • Reporting obligations when the use of the AI system is suspended and in the event of serious incidents;
  • Documentation obligations, particularly storing automatically generated logs for at least six months; and
  • Advance information of employees and employee representatives (e.g. works council) when a high-risk AI system is used in the workplace.

If a high-risk AI system is used by state institutions or private operators providing public services, a fundamental rights impact assessment must be carried out.

However, the majority of the many obligations specified in the AI Act regarding high-risk AI systems apply to providers of high-risk AI systems, namely those who have developed the AI system and placed it on the market or put it into operation under their own name or brand. The corresponding catalogue of duties can be found in Article 16 of the AI Act. In addition, numerous other obligations concerning high-risk AI systems are stipulated in the AI Act, which also affect other players, such as importers and distributors.

What applies if an AI system poses only a limited risk?

With regard to certain AI systems whose risk is considered limited, the AI Act “only” provides for transparency obligations. For instance, AI systems intended to interact with natural persons (e.g. chatbots) must be designed to inform individuals that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Of greatest practical relevance is the obligation to disclose content that has been artificially generated or manipulated by an AI system (so-called “deepfakes”). This covers images, videos and audio content as well as texts that are published to inform the public.

What does minimal and/or no risk mean?

AI systems with minimal or no risk, such as spam filters or video games, do not fall within the scope of the AI Act.

What are “general-purpose AI models”?

During the negotiations on the AI Act, the handling of general-purpose AI models (GPAIs) was particularly controversial, and the relevant provisions were only added in the latest draft of the AI Act. GPAIs are AI models that (i) display significant generality, (ii) are capable of competently performing a wide range of distinct tasks and (iii) can be integrated into a variety of downstream systems or applications. The best-known examples are generative AI models such as GPT-4 from the US company OpenAI, on which ChatGPT is based.

For GPAIs, the AI Act only provides for additional obligations for providers of such AI models, such as drawing up technical documentation as well as preparing information and documentation for providers of AI systems who intend to integrate the GPAI into their AI system. There are also additional requirements for GPAIs that harbor systemic risks. Such a systemic risk is presumed in particular where the cumulative amount of compute used to train the model exceeds 10^25 floating-point operations (FLOPs). In this case, providers must, for example, carry out model evaluations to identify and mitigate systemic risks, comply with reporting obligations in the event of serious incidents and ensure an appropriate level of cybersecurity.
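
To give a rough sense of the 10^25 FLOPs threshold, the following back-of-the-envelope sketch uses the widely cited approximation that training a transformer model costs roughly 6 × parameters × training tokens in compute (Kaplan et al., 2020); the model size and token count are hypothetical:

```python
# Rough illustration of the systemic-risk compute threshold in the AI Act.
# The 6 * N * D rule of thumb is an approximation, not part of the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Common rule of thumb: training compute ~ 6 * N * D for transformers."""
    return 6 * parameters * training_tokens

# Hypothetical model: 100 billion parameters, 2 trillion training tokens.
flops = estimated_training_flops(100e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")                   # 1.20e+24
print("Systemic risk presumed:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```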

For users of GPAIs, there are no obligations under the AI Act. However, users should exercise caution if a GPAI is used in an area listed in Annex III, for example, as the regulations for high-risk AI systems would apply in that case.

What sanctions are imposed for violations?

The AI Act stipulates hefty penalties for non-compliance. Violations involving prohibited AI practices can result in fines of up to EUR 35 million or up to 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher. Violations of other obligations under the AI Act, such as those relating to high-risk AI systems, are punishable by fines of up to EUR 15 million or up to 3% of the total worldwide annual turnover of the preceding financial year, whichever is higher. Supplying incorrect, incomplete or misleading information can be penalized with fines of up to EUR 7.5 million or up to 1.5% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
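
Translated into arithmetic, the “whichever is higher” rule is simply the maximum of a fixed cap and a percentage of turnover. A minimal sketch, using a hypothetical turnover figure and the percentages quoted above:

```python
# Minimal illustration of the "whichever is higher" fine mechanism.

def max_fine(turnover_eur: float, cap_eur: float, turnover_share: float) -> float:
    """Maximum possible fine: fixed cap or share of worldwide annual turnover."""
    return max(cap_eur, turnover_share * turnover_eur)

turnover = 2_000_000_000  # hypothetical: EUR 2 billion annual turnover

print(max_fine(turnover, 35_000_000, 0.07))    # prohibited practices   -> EUR 140 million
print(max_fine(turnover, 15_000_000, 0.03))    # other obligations      -> EUR 60 million
print(max_fine(turnover, 7_500_000, 0.015))    # misleading information -> EUR 30 million
```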

When will the new rules of the AI Act apply?

The AI Act will enter into force on the twentieth day following its publication in the Official Journal of the EU, which is expected to take place between May and July 2024. Most of its provisions will then apply two years after entry into force, with some provisions applying earlier or later. For example, the prohibitions on certain AI practices are expected to apply just six months after entry into force and the rules on GPAIs twelve months after entry into force. By contrast, the classification rules for certain high-risk AI systems and the corresponding obligations are expected to apply only three years after entry into force.
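
Since all deadlines run from the (still unknown) date of entry into force, they can be worked out with simple date arithmetic. The following sketch uses a purely hypothetical publication date for illustration:

```python
# Illustrative only: the Official Journal publication date below is hypothetical.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to avoid invalid dates."""
    year = d.year + (d.month - 1 + months) // 12
    month = (d.month - 1 + months) % 12 + 1
    return date(year, month, min(d.day, 28))

publication = date(2024, 6, 1)                       # hypothetical
entry_into_force = publication + timedelta(days=20)  # 20th day after publication

print("Prohibited practices apply from:", add_months(entry_into_force, 6))
print("GPAI rules apply from:          ", add_months(entry_into_force, 12))
print("Most provisions apply from:     ", add_months(entry_into_force, 24))
print("Extended deadlines end:         ", add_months(entry_into_force, 36))
```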

What needs to be done now?

Companies are well advised to familiarize themselves with the relevant provisions of the AI Act as soon as possible and to identify the obligations relevant to them. For users of third-party AI systems, the first step will regularly be to take stock of the AI systems in use or planned for use. It should then be determined which risk class these AI systems fall into and which requirements apply to their proper use. Even if the date on which the rules become applicable still seems far off, implementation should begin as early as possible, not least in view of the significant penalties for non-compliance.

Our expert Birgit Meisinger and our experts from the Technology & Digitalization team will be happy to answer any further questions you may have on this topic.

Disclaimer

This article is for general information only and does not replace legal advice. Haslinger / Nagele Rechtsanwälte GmbH assumes no liability for the content and correctness of this article.

April 3, 2024
