Artificial intelligence: universal remedy or liability trap?
Artificial intelligence is more than just a buzzword: AI has arrived not only in industry, medicine and financial services, but also in private life. Alexa, robot vacuum cleaners and robotic lawn mowers have become indispensable helpers in many households.
In this way, AI takes over personal and professional tasks and solves complex problems using pattern and image recognition, speech recognition, robotics and machine learning. The appeal is obvious: AI systems are intelligent and capable of learning, analyze their environment (primarily data) and act autonomously, at least to a certain extent. Yet what is convenient in everyday use can become a liability trap, because who is liable for mistakes? There is no explicit legal basis for such claims, and case law has so far hardly dealt with the related legal issues.
Liability may fall on the manufacturer, or on those who use AI to fulfill their own due diligence obligations (users).
- The implementation of Directives (EU) 2019/770 and (EU) 2019/771 clarifies that, under general warranty law, a party is responsible for autonomous software programs or software agents, and thus for AI, regardless of fault. The contracting party is fully liable for the AI's conformity with the contract.
- For programming errors, the manufacturer can also be held liable (fault-based liability). The constant further development of AI systems raises difficult questions of liability law: where did the error that caused the damage originate, who caused it, and which rules on the burden of proof should apply?
- Whether AI falls under general strict liability, which would open the door to liability for damages regardless of fault, seems questionable. It is still unclear whether software counts as a product and is therefore subject to the provisions of the Product Liability Act. Even if it were, it would be questionable whether the AI manufacturer is liable as the final manufacturer when the AI is integrated into a physical object (e.g. a self-driving car). In addition, product liability generally attaches only to the condition the product was in at the time it was placed on the market. Since AI is constantly evolving, it is questionable whether liability should also extend to later developments.
But what applies to users who employ AI to fulfill their own contractual obligations? Can they discharge those obligations by using AI? In some areas, the permissibility of using AI to fulfill one's own due diligence obligations has already been legally defined; for example, following the amendment to the Financial Market Anti-Money Laundering Act ("FM-GwG"), banks will in future be permitted to use AI to monitor transaction flows. At the same time, however, the legislative materials on § 7a FM-GwG state that the use of AI does not exempt banks from liability for non-compliance with the provisions of the FM-GwG.
Further areas of application open up in connection with automated identification options (know-your-customer checks). The problem here, however, is that, unlike its counterparts in neighboring countries, the supervisory authority has so far not accepted customer identification by means of AI solutions. In any case, it is encouraging that the EU has taken up the topic and is currently working on a legal framework for AI. A Commission proposal is expected in spring.
31 March 2021