This article is part of our EU AI Act series, which explores the effect of the AI Act across various industries and sectors.
Artificial Intelligence (AI) has revolutionised industry after industry, garnering considerable hype and widespread discussion. As AI technologies continue to evolve, their integration across sectors has profound implications, particularly for the labour market. The use of AI in employment raises critical questions about possible job displacement, changing skill requirements and ethical issues surrounding bias, discrimination and privacy, even as it contributes to the creation of new employment opportunities. In response, the European Union (EU) has introduced the AI Act to ensure a competitive and inclusive labour market amid rapid technological advancement, thereby shaping the future of work across Europe.
The Use of AI in Employment
In the context of employment, AI systems are typically deployed at two key stages: pre-employment and during employment. At the pre-employment stage, AI systems feature in the recruitment and selection process, including placing targeted job advertisements, streamlining job applications and analysing prospective candidates. During employment, AI tools can influence internal decision-making and existing employee relations, appearing in performance analysis and surveillance, task allocation based on personal characteristics, and decisions related to promotions and termination.
Employers: Providers or Deployers?
Under the new framework, AI systems used in workplace settings are typically classified as high-risk, owing to their potential adverse effects on individuals’ health, safety and fundamental rights. This high-risk classification triggers stringent requirements designed to ensure transparency, fairness and non-discrimination. These requirements apply both to the ‘providers’ who develop such AI systems and to the ‘deployers’ who use them. Since providers are subject to the more onerous duties, determining whether an employer is a deployer or a provider is essential. For the purposes of the AI Act, most employers are considered deployers and, as a result, are obliged to, amongst other duties:
- implement suitable technical and organisational measures to ensure adherence to the provider’s instructions;
- assign trained and competent natural persons to monitor the use of AI;
- supervise the operation of such technology and report any issues to the provider and the national regulatory authorities;
- maintain records of logs produced by the AI system (if managed by the deployer);
- inform employees of the use of these systems, particularly where that use will ultimately affect their employment.
Introduction of the Fundamental Rights Impact Assessment
In striving to achieve a human-centric approach, and in addition to the aforementioned duties, deployers are obliged to conduct a Fundamental Rights Impact Assessment (FRIA) before using a high-risk AI system for the first time. This assessment aims to ensure that such systems respect employees’ rights by identifying potential risks to fundamental rights and enabling employers to take adequate measures to address them. The FRIA also gives employers the opportunity to reflect on the purpose of the high-risk AI system and the way in which it is to be deployed. The following entities are obliged to conduct a FRIA:
- bodies governed by public law;
- private entities providing public services;
- deployers of AI systems intended to evaluate the creditworthiness of natural persons or establish their credit score (with the exception of AI systems used to detect financial fraud);
- deployers of AI systems used for risk assessment and pricing in life and health insurance for natural persons.
The FRIA complements the Data Protection Impact Assessment (DPIA) required under the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679). The DPIA, however, is confined to mitigating risks to personal data, whilst the FRIA takes a broader fundamental rights approach that goes beyond data protection. Employers can therefore conduct both assessments in parallel with the aim of fulfilling their duties under both pieces of legislation.
Extraterritoriality
The AI Act’s scope extends beyond the EU, applying also to non-EU businesses that use AI systems in decisions affecting EU-based workforces. Furthermore, the Act applies to non-EU providers of AI systems that are placed on the market or put into service within the EU. This extraterritorial reach carries significant implications for employers and should guide the drafting of contracts with AI providers so that both parties are able to fulfil their obligations under the legislation.
Moving Forward – Steps Employers Should Take to Prepare
Employers who use high-risk AI systems as part of their operations, or intend to do so, should take advantage of the AI Act’s implementation period to prepare for the new rules. They should review their current and intended use of AI systems, evaluate the associated risks and carry out risk-based classifications. This process will assist in identifying the applicable obligations and establishing appropriate safeguards. Moreover, nurturing AI literacy within the organisation is key: this can be achieved by drafting and developing AI-related internal policies, providing employee training on AI, and supplying employees with transparent information about the AI tools being implemented. Taking these steps will help employers navigate the integration of AI into their operations effectively. Proactive compliance is crucial for harnessing AI’s potential whilst safeguarding employees’ rights and shaping more transparent work environments.
If you are an employer seeking to ensure that your current or intended use of AI systems in the workplace is in line with the new Act, please feel free to reach out.
This is the second in a series of articles exploring the effect of the AI Act across various sectors and industries. You may read the first one here.