Artificial intelligence in the workplace: which technologies are prohibited, what risks the employer bears, and where the limits of what is permissible lie

Companies are increasingly introducing AI tools into the field of human resources management: systems for the automatic screening of candidates, algorithms for assessing productivity, predicting staff turnover, analysing employee behaviour, and allocating tasks on the basis of individual characteristics.

Such solutions promise objectivity and efficiency; however, within the European legal framework, their use directly engages the requirements of the EU AI Act and the GDPR, which define the limits of permissible processing of personal data. In the employment context, where workers depend on their employer, these tools directly affect workers’ rights, which heightens the importance of regulation.

If a company uses tools that automatically analyse employees’ digital activity, task performance or communications, such processes are highly likely to be classified as profiling within the meaning of Article 4(4) GDPR. This means that special legal safeguards are triggered.

Prohibition on emotion recognition in the workplace

One of the strictest provisions of the EU AI Act is the prohibition on the use of emotion-recognition systems in the workplace and in educational institutions. Article 5(1)(f) expressly prohibits the placing on the market, putting into service, and use of AI systems intended to identify a person’s emotional state in those contexts. The only exception is for medical or safety-related purposes.

This prohibition rests on several fundamental considerations.

First, there is a high risk of manipulation and discrimination. If an employer obtains a tool that determines an employee’s “loyalty”, “stress resilience” or “engagement” on the basis of facial expressions or intonation, this creates scope for subjective and potentially biased decisions.

Second, the scientific reliability of such technologies remains disputed: emotion recognition is based on probabilistic models and does not guarantee an objective interpretation of a person’s internal state.

Third, the analysis of facial expressions and voice in the course of work constitutes a serious interference with private life, particularly given the hierarchical dependence of the employee on the employer.

For employers, this prohibition means that the use of technologies to analyse employees’ emotions (for example, during video calls or interviews) will in most cases be unlawful. In this instance, the employee’s consent or any other lawful basis under the GDPR is irrelevant: the prohibition applies regardless.

High-risk systems in the field of employment

In addition to absolute prohibitions, the EU AI Act introduces the category of high-risk systems. Under Article 6 and Annex III to the Regulation, this category includes AI tools used for recruitment and selection, for decisions on promotion or termination of the employment relationship, for the allocation of tasks on the basis of personal characteristics, and for the monitoring and assessment of workers’ behaviour and performance.

If a system carries out profiling — that is, automated processing of data for the purpose of evaluating various aspects relating to an individual, including reliability, performance or behaviour — it automatically falls within the high-risk regime.

This means that the provider of the system is required to implement a risk-management system, ensure data quality, prepare technical documentation, provide for transparency and human-oversight mechanisms, guarantee cybersecurity, and register the system in the EU database. The employer using such a system acts as its deployer and bears independent obligations in relation to its proper use and control.

In practice, most HR tools used for recruitment, performance assessment or employee monitoring may be recognised as high-risk systems under the EU AI Act.

Automated decisions in HR: the limits imposed by Article 22 GDPR

At the same time, Article 22 GDPR applies. It establishes the right of a person not to be subject to a decision based solely on automated processing if that decision produces legal effects concerning that person or similarly significantly affects them.

If AI automatically generates employee rankings, assesses their performance, or produces recommendations on dismissal or promotion, and a human merely formally approves the outcome without any genuine review, such a decision may be regarded as fully automated.

In that case, the system may be used only if either genuine human involvement in the decision-making process is ensured (so that the decision is no longer based solely on automated processing) or one of the specific grounds under Article 22(2) GDPR applies: necessity for entering into or performing a contract, authorisation by Union or Member State law, or the worker’s explicit consent.

However, in the employment context these exceptions are construed narrowly: in particular, an employee’s consent is rarely regarded as freely given because of the imbalance of power between employer and employee.

For example, let us consider three scenarios involving the recording of work video calls.

In the first case, the call is simply recorded, without any AI analysis. Subject to a proper balancing-of-interests assessment and transparency towards employees, such processing may be permissible; it is nevertheless advisable to carry out a data protection impact assessment (DPIA).

In the second case, the recording is used together with AI for performance analysis, but without emotion recognition. In that event, the system may be classified as high-risk under the EU AI Act, and the employer must ensure human oversight and compliance with Article 22 GDPR.

In the third case, the recording is supplemented by an analysis of employees’ emotions. If the purpose is employee assessment, such use falls within the direct prohibition in Article 5(1)(f) of the EU AI Act and cannot be legitimised either by legitimate interests or by consent.

If your company uses AI tools for recruitment, performance assessment or employee monitoring, REVERA recommends carrying out a legal review of their compliance with the EU AI Act and the GDPR before deployment or scaling.


Authors: Artem Handriko, Liudmila Yepikhava.

Contact a lawyer for further information