Digitalization | Haslinger / Nagele, Illustration: Karlheinz Wasserbacher

Watch your back – when AI has a say in employee evaluation


Authors: Markus Gaderer, Sabrina Hochradl

Employee evaluation is one of the most sensitive tasks within a company, as it can impact career advancement, development opportunities, or even continued employment. AI promises greater objectivity and efficiency: it analyzes data, identifies performance patterns, and delivers decision-making support at the push of a button. But what happens when AI decides who is considered high-performing – and who is not?

Is such a system even allowed to evaluate employees? And where do the legal boundaries lie between technological progress and impermissible control?

The use of AI is primarily regulated by the so-called AI Regulation (also known as the “AI Act”), which takes a risk-based approach. Two key regulatory pillars are particularly relevant to the area of employee evaluation: the prohibition of certain AI practices[1], and the classification as “high-risk AI”[2]. In particular, the use of AI for so-called “social scoring”, i.e., the evaluation of employees based on their social behavior or personal characteristics, is prohibited if this leads to discrimination. Likewise, the use of AI for emotion recognition in the workplace based on biometric data, such as facial expression or voice recognition, is prohibited.

AI systems used in decisions regarding promotions, dismissals, task allocation, or performance evaluations are considered “high-risk systems”.[3] Such systems may only be used under strict conditions. These include appropriate use, supervision by qualified personnel, continuous monitoring, informing employees about the use of the system, a data protection impact assessment[4], and a registration requirement[5].[6]

While the AI Act does contain exceptions to the high-risk classification, these do not apply when the system is used for “profiling.”[7] This refers to any form of automated processing of personal data that serves to evaluate or predict certain personal aspects, such as work performance.[8] Since employee evaluations are typically based on such mechanisms, a high-risk classification can be assumed in most cases.

Regardless of the AI Regulation, the GDPR also sets limits. According to Article 22 of the GDPR[9], data subjects may not be subjected to a decision based solely on automated processing that legally or similarly significantly affects them. However, this prohibition only applies to fully automated decisions. This means that if AI alone produces an assessment that leads, for example, to termination of employment, this is not permitted. The situation is different if a human being makes the decision, reviews the AI’s recommendation, takes additional information into account, and, if necessary, deviates from the AI’s assessment. In this case, there is no fully automated decision within the meaning of the GDPR.[10]

Austrian labor law requires the consent of the works council[11] for the implementation of technical systems for employee monitoring that affect human dignity. This includes, in particular, invasions of privacy and the right to data protection. Depending on the type, duration, scope, and intensity of the data processing by the AI system, such an interference may exist, in which case the system qualifies as a monitoring system requiring consent.[12] If no works council exists, an individual agreement with the affected employees[13] is required.

Conclusion:

The use of AI systems for employee evaluation is only permissible under strict legal conditions. Systems based on “social scoring” or emotion recognition are generally prohibited. In most cases, AI used for evaluating employees qualifies as a “high-risk system,” whose use is subject to strict requirements. In addition to the obligations under the AI Act, companies must also observe the provisions of the GDPR and national labor law. Before implementing such systems, companies should therefore conduct a thorough legal review, involve the works council or conclude individual agreements, and ensure that human decision-makers remain part of the evaluation process.

Disclaimer

This article is for general information only and does not replace legal advice. Haslinger / Nagele Rechtsanwälte GmbH assumes no liability for the content and correctness of this article.


[1] Art 5 KI-VO.

[2] Art 6 KI-VO iVm Anhang III.

[3] Anhang III Z 4 lit b KI-VO.

[4] Art 35 DSGVO.

[5] Art 49 KI-VO.

[6] Art 26 KI-VO.

[7] Art 6 Abs 3 KI-VO.

[8] Art 4 Z 4 DSGVO.

[9] Art 22 DSGVO.

[10] Wolfgang Goricnik/Jens Winter, Kohärente oder inkohärente Anwendung des datenschutzrechtlichen Verbots automatisierter Entscheidungen und der Risiko-Klassifizierung der KI-VO?, Dako 2025/13.

[11] §96 Abs 1 Z 3 ArbVG.

[12] Susanne Auer-Mayer, Dürfen Arbeitnehmer*innen im “Home-Office” überwacht werden?, CuRe 2020/88.

[13] §10 Abs 1 AVRAG.


July 1, 2025

 