Implementing AI in the workplace: what are the risks? 

Integrating AI tools at work is not without danger. Three experts from the independent Belgian law firm ALTIUS explain how companies can best guard against ethical risks and protect personal data, intellectual property rights, and trade secrets. “It is essential that organisations teach their people to work with AI safely and responsibly.”

Automating tasks, generating strategic insights from data, increasing customer satisfaction with a chatbot… The benefits of smart AI tools are numerous, but many companies hardly consider the potential dangers and legal risks that AI brings.

The input data may contain personal data, in which case the GDPR’s rules and principles must be complied with. “Trade secrets and confidential information can also be unintentionally disclosed if an AI system is fed with such information,” warns Jan Clinck, an ICT & Data Expert at ALTIUS, where a team of 70 specialists takes a pragmatic, personal approach.

“As more employees use AI tools, it is crucial that they do so responsibly and are well-trained, especially when training their own AI models with correct data.”

Intellectual property risks

Generative AI is trained using existing information and works (“input”) to generate new results (“output”). This raises legal questions, such as who owns the rights to these new creations. Another question, at issue in the ongoing case brought by The New York Times against OpenAI, is whether permission is needed to use input (such as news articles from The New York Times) to train ChatGPT’s AI.

Moreover, it cannot be ruled out that AI output sometimes resembles the input, in which case the question of infringement of intellectual property rights may arise. It is also unclear whether AI output can itself be protected by intellectual property rights and, if so, who would hold those rights.

“All these questions highlight the challenges that AI brings in terms of intellectual property, and what companies need to watch out for if they want to use AI in their activities,” emphasizes Sophie Lens, IP Expert at ALTIUS.

HR risks

Integrating AI into HR processes, such as recruitment and performance management, also requires careful preparation. “AI tools can unintentionally discriminate if the training data or criteria are not neutral,” warns Emma Van Caenegem, Employment Law Expert at ALTIUS. “It is essential to integrate human judgment into AI-driven HR processes.”

According to Jan Clinck, companies must also understand how their AI tools work to prevent discriminatory results. “With the new AI Act, the European legislator is also focusing strongly on the responsibility and transparency of developers in order to prevent these risks.”

Using AI in the workplace can also unintentionally create psychosocial risks. “AI frees up time within jobs for other tasks, but not everyone is prepared for this. It is also possible that a job becomes more complex precisely because AI takes over the routine, executory tasks,” says Emma Van Caenegem. “This can lead to increased pressure, stress, and ultimately burnout.”

AI policy

Companies can manage these risks by, for example, adopting a clear AI policy and providing specific training to employees. Sophie Lens summarises it clearly: “Because AI legislation is still developing, companies must act proactively today to avoid legal and ethical issues.”
