
Regulation of AI – legislative process at EU level reaches the final stage


Anyone who has already used the chatbot ChatGPT to formulate a text has likely been amazed. It would thus be quite conceivable to have ChatGPT do one's German homework. However, the deception may well come to light quickly, because ChatGPT is currently not sophisticated enough to imitate a student's usual writing style or occasional spelling errors. Fortunately, the potential of Artificial Intelligence (AI) is not limited to doing homework: the field of application for AI is virtually endless. At the same time, the use of AI raises substantial questions, and the call for regulation is growing ever louder.

The European Commission’s proposal to regulate Artificial Intelligence through the Artificial Intelligence Act (AI Act) (COM/2021/206) has been available since 2021. On June 14th, 2023, the European Parliament adopted its final position on the AI Act to much media acclaim. With the European Commission’s draft and the position of the Council of the European Union now both on the table, the negotiations on the final content of the European framework for Artificial Intelligence (the so-called “trilogue”) can begin.

What falls under AI?

The Commission’s draft of the AI Act defines the term “Artificial Intelligence system” (AI system) quite broadly, which has been the subject of criticism ever since the draft was presented. Critics argue that the definition of AI is too broad and can hardly be distinguished from ordinary software. According to numerous statements from literature and practice, Europe would primarily achieve one thing with it: legal uncertainty and a climate hostile to innovation, leading to competitive disadvantages. So far, however, the EU legislator seems little impressed by this: in the EU Parliament’s position, the definition of an AI system has been drawn even more broadly. According to it, an AI system is a machine-based system that operates with varying degrees of autonomy and can generate predictions, recommendations or decisions that influence the environment, whether physically or virtually.

Risk-based approach: the higher the risk, the stricter the regulations

The AI Act classifies AI into four risk groups, ranging from minimal to unacceptable risk. The extent of the restrictions is to be adapted to the respective risk potential: the higher the risk, the stricter the regulations. Certain practices, such as “social scoring” (the evaluation of people’s behavior within a society), are subject to a total ban.

The focus of the AI Act is on the regulation of high-risk AI systems. According to the opinions available to date, AI systems are considered high-risk AI if, among other things, they can have significant detrimental effects on health, safety, and fundamental rights. Annex III of the AI Act provides examples of such systems, including “fundamental private and public services”. For instance, creditworthiness assessments will be considered high-risk systems.

However, in an administrative procedure, it should also be possible to determine that an AI system is not considered high-risk.

Consequences for the provider / user of AI

Anyone who offers an AI system, meaning anyone who develops an AI system or has one developed in order to place it on the market or put it into operation under their own name (the provider), must ensure compliance with the requirements for high-risk AI and set up a risk management system that meets specific requirements (e.g., quality management and registration obligations, documentation obligations, CE marking). Special requirements apply to generative AI (such as ChatGPT), including transparency obligations and the public documentation of any copyrighted training data used.

However, users of high-risk AI have obligations as well, such as monitoring obligations based on the instructions for use and information obligations towards the provider in the event of serious incidents; in some cases, the market supervisory authority may even have to be notified. If users modify a high-risk AI system or change its intended purpose, they are subject to the same obligations as a provider of high-risk AI.

Failure to comply can result in severe penalties, which may even exceed those of the GDPR.

Who is liable if the AI makes a mistake?

The question of who is responsible for damage caused by an AI system also remains open. The proposed directive adapting the rules on non-contractual civil liability to AI (COM/2022/496, AI Liability Directive) aims to facilitate the enforcement of claims for non-contractual damages. Under certain circumstances, a court may order the disclosure of evidence relating to a high-risk AI system (the results can subsequently also be used to assert contractual claims for compensation). In addition, there is a rebuttable presumption of a causal link between the defendant’s fault and the output produced by the AI system.

There will also be an amendment to the Directive on Liability for Defective Products (COM/2022/495) and the Regulation on Machinery Products (Machinery Regulation) (COM/2021/0105).

When do the regulations take effect?

It is still uncertain whether the AI Act and the AI Liability Directive will be passed in 2023. Once adopted, the regulations are expected to apply after a further transition period of 24 months, with the AI Liability Directive additionally requiring prior implementation into national law.

Cooperation with 7lytix

In collaboration with the renowned Linz-based software development company 7lytix GmbH, under the direction of Franziskos Kyriakopoulos, we offer a newly developed due diligence examination specifically for the use of Artificial Intelligence. It supports companies in the acquisition of such systems by evaluating the technical capabilities and maturity of the AI in use on the one hand, and by showing, on the other, whether its use complies with current and, above all, future legal regulations.

Additionally, we are proud supporters of the Upper Austrian AI Venture Hub, where the AI community in Upper Austria has come together to jointly address the many exciting questions surrounding the use of AI. A review of the last Upper Austrian AI Venture Hub kick-off event in March can be found here. More information about AI from 7lytix and us, among others, will be available at the 2nd AI Ventures Event on September 28th, 2023, where, in addition to international guests, State Secretary Florian Tursky, MSc. MBA, will be present.

Disclaimer

This article is for general information only and does not replace legal advice. Haslinger / Nagele Rechtsanwälte GmbH assumes no liability for the content and correctness of this article.

 

June 22, 2023

 