Use of AI Systems by Employees
AI systems and chatbots are useful tools for employees. They can, for instance, generate ideas, serve as search and translation tools, and produce drafts of documents and emails.
However, there are also risks involved with employees' use of AI systems. An employee may, for instance, provide an AI system or chatbot with information that results in the unintended disclosure of trade secrets, personal data, or other confidential information.
Since employees will, in all probability, make use of AI systems and chatbots in connection with their work, employers should consider whether it would be useful to prepare guidelines for employees' use of such systems.
Guidelines and policies can regulate responsible use and help avoid violations of data protection rules, disclosure of confidential information, and similar risks.
The employer can, for instance, introduce a policy which, based on industry-specific considerations, allows employees to use AI systems and chatbots. Such use may be restricted to, for example, search and translation tools and, within reasonable limits, the drafting of presentations, emails and documents.
In such a policy, it would be natural to lay down specific "Do's and Don'ts", for instance that the employee may never provide an AI system with personal data or confidential information, and that the employee must always subject any output to subsequent human review.
Use of AI Systems by the Employer
For employers, AI systems can also be useful and efficient tools. AI systems are already frequently used as managerial tools, for instance for CV screening, online training of new employees, and optimisation of work processes.
The employer's use of AI systems must take place within the framework of employment law, including the employer's managerial right and working environment legislation. The predominant principle in this respect is that employment law rules applicable to the physical world are also applicable to the digital world.
The employer must be aware of the risk that the AI system applied may discriminate against employees on the basis of unlawful criteria. When an AI system is designed, it can be tested in advance that certain protected criteria are not included in the system's basis of evaluation. However, bearing in mind that the protection against indirect discrimination is not the same for all employees, it may be difficult to ensure in advance that the AI system does not attach importance to criteria which, in a specific case, would have an indirectly discriminating effect.
Furthermore, the employer must carefully observe the provisions of the GDPR as well as the pending AI-Act.
Pending AI-Act
On 8 December 2023, the European Parliament and the Council reached a provisional agreement on the AI-Act. The Act is expected to enter into force in its entirety during the first half of 2026.
The future AI-Act classifies AI systems used, inter alia, in connection with employment and the administration of labour as high-risk systems. This includes, for example, AI systems used for the recruitment and selection of persons, for decisions on promotion and dismissal, for the allocation of tasks based on individual behaviour, and for the monitoring and evaluation of employees.
The categorisation of an AI system as high-risk entails obligations for the employer. The AI-Act imposes responsibilities not only on the suppliers of such AI systems, but also on employers who are merely users of the system.
Do you want to learn more?
If you want to learn more about the use of AI systems in an employment law context, you are welcome to join Plesner's free webinar on the AI-Act and GDPR in HR administration. The webinar will be held on 3 April 2024 from 9.30 to 10.30 a.m.
You can read more about the webinar and sign up here.
You are also welcome to contact Plesner's team for Employment and Labour Law if you need any further information or advice on the significance of AI within employment law and the pending regulation thereof.