On March 13, 2024, around three years after the publication of the proposal for a regulation laying down harmonized rules on Artificial Intelligence, the European Parliament approved the final text of the law on Artificial Intelligence, the so-called Artificial Intelligence Act (2021/0106(COD)), or AI Act for short, after a lengthy legislative procedure.
The use of Artificial Intelligence is on the rise and has become indispensable in many areas. Many companies are now wondering whether the AI Act also applies to them or the AI systems they use and, if so, what specific obligations they face. We provide an initial overview:
The AI Act regulates the placing on the market, the putting into service and the use of "Artificial Intelligence Systems," or simply "AI systems." To fall within its scope of application, a system must therefore meet this broadly defined term. An AI system is a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives and for explicit or implicit objectives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
This definition includes not only software applications such as image analysis software, voice and facial recognition systems, spam filters and chatbots, but also “embedded” AI, such as robots used in medicine or autonomous vehicles.
The main difference between AI systems and simple, "traditional" software is that AI systems are capable of learning, adapting dynamically to new situations and making complex decisions by analyzing large amounts of data. Conventional software, in contrast, operates statically according to the instructions defined during its programming: it always executes the same functions and delivers predictable outputs.
Virtually all companies in the value chain are subject to the scope of the AI Act, including in particular providers (i.e. developers), importers, distributors and operators (i.e. users) of AI systems in the EU. It should be emphasized that users are subject to the provisions of the AI Act not only if they are based in the EU, but also if the result produced by the AI system is used in the EU. Thus, the AI Act also applies to market participants from third countries.
However, there are also exceptions to the scope of the AI Act. In general, it does not apply to areas that are not covered by EU law, such as national security. It also does not apply to AI systems used exclusively for military purposes, or whose sole purpose is scientific research and development, or those published under free and open-source licenses (“Open-Source AI”, provided they are not marketed or put into operation as high-risk AI systems or prohibited AI practices). Natural persons who use the AI system as part of a personal and non-professional activity are also exempt; consequently, no special precautions need to be taken when using AI systems such as ChatGPT privately.
The AI Act follows a risk-based approach: the higher the potential risks associated with an AI system, the more requirements must be met. To this end, AI systems are classified into four risk categories: unacceptable risk (prohibited AI practices), high risk (high-risk AI systems), limited risk (AI systems intended for interaction with natural persons), and minimal or no risk (all other AI systems, which do not fall within the scope of the AI Act). Depending on which risk category an AI system falls into, different rules apply.
In addition, the AI Act contains specific provisions for general-purpose AI models.
Certain AI practices interfere particularly intensively with fundamental rights and therefore pose an unacceptable risk. As a result, they are strictly prohibited. These include, among others:
- AI systems that use subliminal or purposefully manipulative techniques to materially distort a person's behavior;
- AI systems that exploit the vulnerabilities of persons due to their age, disability or social or economic situation;
- social scoring systems;
- emotion recognition systems in the workplace and in educational institutions;
- biometric categorization systems that infer sensitive attributes such as race, political opinions or sexual orientation;
- the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.
AI systems that pose a high risk to the health and safety or fundamental rights of natural persons are considered high-risk. The classification can be complex in individual cases, as a dynamic system is provided for the classification of high-risk AI systems.
On the one hand, this includes AI systems that are used as safety components for products that are subject to certain EU product safety regulations (e.g. in vehicles or medical devices, see Annex II) or are themselves such a product. On the other hand, this includes certain areas of application listed in Annex III to the AI Act, which primarily affect fundamental rights. These include, among others:
- biometrics;
- critical infrastructure;
- education and vocational training;
- employment, workers' management and access to self-employment;
- access to essential private and public services (e.g. creditworthiness assessments);
- law enforcement;
- migration, asylum and border control management;
- administration of justice and democratic processes.
The mentioned AI systems are not considered high-risk despite their classification in Annex III if they do not pose a significant risk to the health, safety, or fundamental rights of natural persons and also do not have a significant influence on the outcome of the decision-making process. Whether this is the case can be assessed by AI system providers themselves and must be documented accordingly. This harbors a potential risk for users, especially as they run the risk of using a high-risk AI system without complying with the relevant obligations if the provider makes an incorrect assessment.
Insofar as companies use a "third-party" high-risk AI system as users, they must in particular fulfill the following obligations:
- use the system in accordance with the provider's instructions for use;
- assign human oversight to persons who have the necessary competence, training and authority;
- ensure, to the extent they exercise control over it, that input data is relevant and sufficiently representative;
- monitor the operation of the system and suspend its use if risks arise;
- retain the automatically generated logs;
- inform affected workers and their representatives before putting a high-risk AI system into service in the workplace.
If a high-risk AI system is used by state institutions or private operators providing public services, a fundamental rights impact assessment must be carried out.
However, the majority of the many obligations specified in the AI Act regarding high-risk AI systems apply to providers of high-risk AI systems, namely those who have developed the AI system and placed it on the market or put it into operation under their own name or brand. The corresponding catalogue of duties can be found in Article 16 of the AI Act. In addition, numerous other obligations concerning high-risk AI systems are stipulated in the AI Act, which also affect other players, such as importers and distributors.
The AI Act “only” provides for transparency obligations with regard to certain AI systems whose risk is considered limited. For instance, AI systems designed for interaction with natural persons (e.g. chatbots) must be designed to inform individuals that they are interacting with an AI system, unless this is obvious due to the circumstances and the context of use. Of greatest practical relevance is that disclosure must be made if content has been artificially generated or modified by an AI system (so-called “Deep Fakes”). This includes images, videos and audio content as well as texts that are published for public information purposes.
AI systems with minimal or no risk, such as spam filters or video games, do not fall within the scope of the AI Act.
During the negotiations on the AI Act, the handling of general-purpose AI models (GPAIs) was particularly controversial, and regulations in this regard were only added in the latest draft of the AI Act. GPAIs are AI models that (i) display significant general applicability, (ii) are capable of competently performing a wide range of distinct tasks and (iii) can be integrated into a variety of downstream systems or applications. A prime example is generative AI models such as GPT-4 from the US company OpenAI, which is integrated into ChatGPT.
For GPAIs, the AI Act only provides for additional obligations for providers of such AI models, such as drawing up technical documentation as well as preparing information and documentation for providers of AI systems who intend to integrate the GPAI into their AI system. There are also additional requirements for GPAIs that harbor systemic risks. Such a systemic risk is presumed in particular when the cumulative amount of compute used for training the GPAI exceeds 10^25 FLOPs. In this case, providers must, for example, carry out model evaluations to identify and mitigate systemic risks, comply with reporting obligations in the event of serious incidents, and ensure an appropriate level of cybersecurity.
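The compute threshold described above can be sketched as a simple check. This is only an illustration of the numeric rule mentioned in the text; the function and constant names are our own, and the actual legal classification involves further criteria beyond raw compute.

```python
# Presumption of systemic risk for a GPAI based on cumulative training
# compute, per the 10^25 FLOPs threshold cited above. Illustrative only.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if cumulative training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True: above 10^25 FLOPs
print(presumed_systemic_risk(1e24))  # False: below the threshold
```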
For users of GPAIs, there are no obligations under the AI Act. However, users should exercise caution if a GPAI is used in an area listed in Annex III, for example, as the regulations for high-risk AI systems would apply in that case.
The AI Act stipulates hefty penalties for non-compliance. Violations involving prohibited AI practices can result in a fine of up to EUR 35 million or up to 7% of the total global annual turnover of the preceding financial year, whichever amount is higher. Violations of other obligations under the AI Act, such as those relating to high-risk AI systems, are punishable by fines of up to EUR 15 million or up to 3% of the total global annual turnover of the preceding financial year, whichever is higher. If a company provides incorrect, incomplete or misleading information, this can be penalized with fines of up to EUR 7.5 million or up to 1.5% of the total global annual turnover of the preceding financial year, whichever is higher.
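The "whichever is higher" rule across the three fine tiers can be expressed as a short calculation. This is a minimal sketch based only on the figures quoted above; the tier names and function are hypothetical, and it is of course no substitute for the actual sanction provisions.

```python
# Maximum fine caps per violation tier, as quoted in the text above:
# (fixed cap in EUR, share of total global annual turnover).
# Tier names are illustrative, not taken from the AI Act.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# For a company with EUR 2 billion global annual turnover, the
# turnover-based cap (7% = EUR 140 million) exceeds the fixed cap:
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
# For a smaller company, the fixed cap applies:
print(max_fine("incorrect_information", 100_000_000))   # 7500000.0
```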
The AI Act will enter into force on the twentieth day following its publication in the EU Official Journal, which is reportedly expected between May and July 2024. Most of its provisions will then apply two years after entry into force, with some provisions applying earlier or later. For example, the prohibited AI practices are expected to apply just six months after entry into force and the regulations regarding GPAIs twelve months after entry into force. In contrast, the classification rules for high-risk AI systems under Annex II and the corresponding obligations are expected to apply three years after entry into force.
Companies are well advised to familiarize themselves with the relevant provisions of the AI Act as soon as possible and to identify the obligations relevant to them. For users of third-party AI systems, the first step will therefore regularly be to take stock of the AI systems used or planned for use. Furthermore, it should be determined which risk class the AI systems used or planned for use fall into and which requirements exist for their proper use. Even if it still seems a long time until the regulations are applicable, implementation should be started as early as possible, especially considering the significant penalties for non-compliance.
Our expert Birgit Meisinger and our experts from the Technology & Digitalization team will be happy to answer any further questions you may have on this topic.
This article is for general information only and does not replace legal advice. Haslinger / Nagele Rechtsanwälte GmbH assumes no liability for the content and correctness of this article.
April 3, 2024