The AI Act’s Impact on Businesses Operating Within the EU 

October 2024 - Andrea Theuma, Tessa Borg Bartolo and Sofia Said Salomone

 

This article is part of our EU AI Act series which explores the effect of the AI Act across various industries and sectors.

Introduction

The first article (see link below) in this EU AI Act series provided, inter alia, a breakdown of the scope, applicability, timeline and risk levels of the AI Act, Regulation (EU) 2024/1689 [1] (hereinafter referred to as the “Act”). The Act introduces significant obligations for all businesses developing or deploying AI that affects persons in the EU, with major fines for non-compliance. Crucially, the Act is extraterritorial in nature: it also applies to businesses established in third countries which place AI systems on the EU market or whose AI systems produce output that is used within the EU. In order to meet the requirements on time, businesses must quickly establish organisation-wide AI governance and capabilities, primarily by understanding the different AI categories, regulatory timelines and intricacies of the Act. This article provides local businesses with an overview of the salient obligations arising under the Act and its anticipated impact on their operations.

Brief synopsis of the scope of the Act

The AI Act introduces a comprehensive regulatory framework designed to ensure the safe and ethical use of AI technologies. The AI Act encompasses two main types of AI: “AI systems” [2] and “General Purpose AI models” (hereinafter referred to as “GPAI models”) [3].

Step plan for businesses operating within the EU

Step 1: Identify the role of your business in terms of the Act

The first port of call for a business operating within the EU is to understand which type of operator the business is classified under in terms of the Act. The Act distinguishes between the following operators:

  • Providers: A provider is any person or entity that develops an AI system or GPAI model (or has one developed) and places it on the EU market under its own name or trademark. Providers bear the brunt of the Act’s obligations, including obligations relating to technical documentation, risk management and conformity assessments prior to placing AI systems on the market.
  • Deployers: A deployer is any person or entity that uses an AI system under its authority in the course of a professional activity. Deployers are generally tasked with ensuring that the systems they use comply with the Act’s requirements, particularly when these systems are classified as high-risk. Deployers must collaborate with providers to maintain ongoing compliance and may be held to the standards of a provider if they materially modify or rebrand AI systems.
  • Importers of AI systems: An importer is any person or entity based in the EU who places on the EU market an AI system bearing the name or trademark of a person or entity established in a third country.
  • Distributors of AI systems: A distributor is any person or entity within the supply chain, excluding a provider or importer, who makes an AI system available on the EU market.

The distinction between providers and deployers is crucial, since the bulk of regulatory obligations are incumbent upon providers. However, the line between a provider and a deployer may blur in practice. For instance, if a deployer substantially modifies an AI system already placed on the market, it may be considered a provider under the Act and thereby inherit all the associated regulatory responsibilities of a provider.
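
By way of illustration only, this role-allocation exercise may be sketched in code. The following Python snippet uses hypothetical names chosen for this article; it models the four operator roles together with the rule of thumb, elaborated under Step 3 (B.5) below, that an operator who rebrands or substantially modifies an AI system may be treated as its provider. It is a mnemonic, not a legal test.

    from enum import Enum, auto

    class OperatorRole(Enum):
        """The four operator roles distinguished by the Act (illustrative only)."""
        PROVIDER = auto()     # develops the system/model and places it on the EU market under its own name
        DEPLOYER = auto()     # uses an AI system under its authority in the course of business
        IMPORTER = auto()     # EU-based entity placing a third-country provider's system on the EU market
        DISTRIBUTOR = auto()  # makes the system available on the EU market (other than a provider or importer)

    def effective_role(declared: OperatorRole,
                       rebrands_system: bool,
                       substantially_modifies: bool) -> OperatorRole:
        # A non-provider operator who rebrands or substantially modifies a high-risk
        # AI system may inherit the provider's obligations (see Step 3, B.5 below).
        if declared is not OperatorRole.PROVIDER and (rebrands_system or substantially_modifies):
            return OperatorRole.PROVIDER
        return declared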

Step 2: Identify the risk category of the AI system

The Act provides for different risk levels of AI systems, which may be categorised into four (4) risk categories: unacceptable risk, high risk, limited risk and minimal risk. For a full description of the four risk categories, please refer to our first article (see link below) in this series. The Act also provides an additional risk category specifically in relation to GPAI in the form of GPAI systemic risk. It is crucial for businesses to understand where an AI system lies on the risk spectrum, as this will, in turn, determine the specific obligations arising under the Act. In brief, the risk categories may be described as follows:

  1. Unacceptable risk AI systems [4]: AI systems which create a serious threat to fundamental rights and safety are explicitly banned by the Act. Examples of such unacceptable risk AI systems are those used for biometric categorisation based on sensitive characteristics, social scoring and the manipulation of cognitive behaviour.
  2. High-risk AI systems [5]: An AI system is classified as high-risk when it is intended to be used as a safety component of a product, or is a product, covered by Union harmonisation legislation as listed under Annex I of the Act (for example, Regulation (EU) 2017/745 on medical devices) and such product is required to undergo a third-party conformity assessment. Additionally, an AI system in the fields listed under Annex III of the Act is considered to be high-risk, including for example AI systems used in biometrics, critical infrastructure and education and vocational training.
  3. Limited risk AI systems: AI systems posing a lesser risk are required to comply with transparency and labelling requirements in order for users to be made aware that they are interacting with AI so as to make informed decisions. Some examples of limited risk AI systems include chatbots such as ChatGPT and image generators.
  4. Minimal risk AI systems: AI systems which pose negligible risk, such as AI video games and spam filters, are free to be used without restriction.

GPAI with systemic risk: The Act places importance on GPAI models and classifies GPAI with systemic risk as a distinct risk category [6]. Where GPAI models are considered to carry systemic risk in terms of the Act, their providers are required to comply with more stringent obligations, including reporting obligations, the carrying out of ongoing model assessments and the maintenance of technical documentation in order to implement governance and transparency measures.
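
Purely by way of illustration, the above triage may be captured in a short sketch. The parameter names below are of our own choosing; in practice the assessment turns on article 5 and Annexes I and III of the Act, and GPAI systemic risk is assessed separately under its own rules.

    def classify_risk(prohibited_practice: bool,
                      annex_i_safety_component: bool,
                      annex_iii_use_case: bool,
                      transparency_relevant: bool) -> str:
        """Illustrative triage into the Act's four risk categories (not a legal test)."""
        if prohibited_practice:                              # e.g. social scoring (article 5)
            return "unacceptable"                            # prohibited outright
        if annex_i_safety_component or annex_iii_use_case:   # article 6 classification rules
            return "high"                                    # Section 2 requirements apply
        if transparency_relevant:                            # e.g. chatbots, AI-generated content
            return "limited"                                 # transparency and labelling obligations
        return "minimal"                                     # e.g. spam filters, AI video games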

Step 3: Understand and implement applicable regulatory obligations

Once a business understands which operator category it falls under in terms of the AI supply chain and has identified the risk category of its respective AI system, it must then understand the ensuing obligations applicable to the scenario at hand. The Act provides for obligations arising out of the risk category of the AI system itself and also for obligations linked to each type of operator in respect of a given risk category, as follows:

A. Unacceptable AI systems

Unacceptable risk AI systems are prohibited from being placed on the market or put into service [7]. Some examples of prohibited AI practices are systems that purposefully manipulate human behaviour with the effect of impairing a person’s ability to make an informed decision, social scoring systems and biometric identification used for mass surveillance.

B.1 High-risk AI systems

High-risk AI systems must generally comply with the requirements set out under Section 2 [8] of the Act. Such requirements include:

  • Risk management: Implementing and maintaining a dedicated risk management system to constantly identify, evaluate and mitigate risks throughout the AI system’s lifecycle.
  • Data governance: Ensuring that the training, validation and testing data sets used in AI systems which involve the training of AI models meet appropriate quality criteria.
  • Technical documentation: Documentation must be drawn up prior to the placing on the market of the AI system in order to demonstrate regulatory compliance and allow for proper compliance assessments.
  • Event logging and record keeping: AI systems must allow for the automatic logging of events over the AI system’s lifetime to ensure traceability of operations, including records of usage, data and the identification of the persons involved.
  • Transparency and provision of information to deployers: AI systems must be developed in a manner that ensures that their operation is sufficiently transparent to enable deployers to interpret the system’s output and to use the system appropriately.
  • Human oversight: High-risk AI systems must be developed in such a way as to allow effective human oversight so as to monitor and manage the system’s performance.
  • Accuracy, robustness and cybersecurity: High-risk AI systems must meet stringent ongoing requirements for accuracy, robustness and cybersecurity in order to minimise risks associated therewith.

B.2 Obligations incumbent upon providers of high-risk AI systems

Providers of high-risk AI systems are obliged to comply with the following obligations [9]:

  • To ensure compliance with the obligations set out under Section 2 of the Act, as described hereinabove;
  • To indicate on the high-risk AI system the provider’s registered trade mark or trade name;
  • To keep a quality management system [10] in place;
  • To keep certain documentation [11] for a period of ten (10) years from placing the high-risk AI system on the market;
  • To keep logs automatically generated [12] by the high-risk AI system;
  • To ensure that the high-risk AI system undergoes conformity assessment [13] procedures prior to placing the same on the market;
  • To draw up an EU declaration of conformity [14];
  • To attach the CE marking [15] to the high-risk AI system to indicate its conformity with the Act;
  • To comply with registration [16] obligations of the high-risk AI system in a publicly accessible EU database;
  • To take any corrective actions as are necessary and provide information [17] as required by the Act;
  • To demonstrate the conformity of the high-risk AI system upon a reasoned request from a national competent authority [18];
  • To ensure that the high-risk AI system meets accessibility standards in accordance with Directives (EU) 2016/2102 and (EU) 2019/882;
  • Where providers are established in third countries, to appoint an authorised representative [19] who is established within the EU; and
  • To comply with post-market monitoring [20] ongoing obligations.

B.3 Obligations incumbent upon deployers of high-risk AI systems

Deployers of high-risk AI systems are obliged to comply with the following obligations [21]:

  • To take the necessary measures to ensure that the systems are used in accordance with their instructions for use;
  • To implement human oversight by qualified personnel;
  • To ensure that input data is relevant to the system’s intended purpose, insofar as the deployer exercises control over the input data;
  • To suspend the use of the system and inform the provider and the relevant authorities where the system presents a risk to the health, safety or fundamental rights of persons;
  • To keep logs automatically generated by the high-risk AI system;
  • To inform workers’ representatives of the use of high-risk AI systems where the deployers are employers;
  • To carry out a data protection impact assessment under article 35 of Regulation (EU) 2016/679 (GDPR), where applicable;
  • To submit annual reports to relevant authorities on the use of post-remote biometric identification systems;
  • To inform natural persons of the use of high-risk AI systems where the deployers deploy high-risk AI systems which make decisions or assist in making decisions related to such natural persons;
  • To cooperate with competent authorities in any action taken by those authorities in relation to the implementation of the Act;
  • To conduct a fundamental rights impact assessment [22] prior to deploying the high-risk AI system and to fulfil GDPR obligations in relation to data protection impact assessments.

B.4 Obligations incumbent upon importers and distributors of high-risk AI systems [23]

Importers and distributors of high-risk AI systems are obliged to verify that the high-risk AI system is in conformity with the provisions of the Act, including verification that the provider has carried out the conformity assessment procedure, has drawn up the technical documentation and has appointed an authorised representative, and that the high-risk AI system bears the required CE marking and EU declaration of conformity.

B.5 Scenarios where an operator may be considered to be a provider of a high-risk AI system [24]

Any distributor, importer, deployer or third party is considered to be a provider of a high-risk AI system and will be subject to the same obligations as the provider in the following circumstances:

(a) Should the operator place its name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements allocating the obligations otherwise;
(b) Should the operator make a substantial modification to a high-risk AI system already placed on the market or put into service in such a manner that the system remains a high-risk AI system; or
(c) Should the operator modify the intended purpose of an AI system, including a GPAI system not previously classified as high-risk, in such a way that the system qualifies as a high-risk AI system after it has been placed on the market or put into service.
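
Expressed purely as a mnemonic, the three triggers above amount to a simple disjunction. The parameter names in the following sketch are our own, and the sketch is no substitute for a legal assessment under article 25.

    def becomes_provider(puts_own_name_on_system: bool,
                         makes_substantial_modification: bool,
                         repurposes_system_as_high_risk: bool) -> bool:
        """True where a distributor, importer, deployer or third party would be
        treated as the provider of a high-risk AI system (illustrative only)."""
        return (puts_own_name_on_system
                or makes_substantial_modification
                or repurposes_system_as_high_risk)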

C. Limited risk AI systems

Limited risk AI systems are regulated by way of transparency obligations. Since they pose a lesser risk than high-risk AI systems, the focus of the obligations is on informing users, and ensuring their awareness, of the presence of an AI system and on labelling deepfake content for the purposes of transparency.

D. Minimal-risk AI systems

AI systems falling in this category have minimal regulatory requirements, which are limited to general product safety standards. This notwithstanding, the creation and use of voluntary codes of conduct are encouraged.

Concluding remarks: the Act’s projected effects on local businesses

The Act is applicable along the entire value chain and covers a very wide scope of stakeholders, meaning that most organisations using AI, including those located outside the EU, will fall within its scope. As discussed above, complying with the Act will require a great deal of preparation for businesses in scope, particularly those developing higher risk AI systems and GPAI.

It is highly recommended that local businesses create an inventory of all AI systems which they have – or plan to have – and determine whether any such AI system falls within the scope of the Act.

In turn, one should assess and categorise the in-scope AI systems to determine their risk classification and identify applicable compliance requirements. Once a business understands its role and position in the relevant AI value chain, the associated compliance obligations and how such obligations will be met, compliance should be embedded within all responsible functions along the value chain, throughout the AI system lifecycle. It is recommended that holders of key roles within businesses, such as boards of directors, committee members and persons generally responsible for the development, use and compliance of AI systems, become familiar with the obligations applicable to their respective business to ensure a smooth implementation of such compliance requirements. Codes of conduct, alignment of standards and enhanced transparency should be emphasised in the lead-up to the coming into force of the respective provisions in order to minimise and mitigate the risks associated with the use of AI systems.

Additionally, a local business should consider what other risks and opportunities the Act poses to its current operations and strategy, including (i) the Act’s interaction with other EU or non-EU regulations and (ii) access to AI research and development channels. The Act itself, in fact, aims to foster the proper use of AI systems across Member States and provides for the use of regulatory sandboxes [25], or controlled testing environments, by means of which the testing of AI systems prior to placing them on the market is encouraged.
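
A minimal sketch of one entry in such an inventory, with illustrative field names of our own choosing, might look as follows:

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One row of an AI inventory used to track AI Act compliance (illustrative)."""
        name: str
        operator_role: str                  # provider / deployer / importer / distributor
        risk_category: str                  # unacceptable / high / limited / minimal (plus GPAI systemic risk)
        in_scope: bool                      # does the system fall within the Act at all?
        obligations: list[str] = field(default_factory=list)  # e.g. "conformity assessment", "CE marking"
        responsible_function: str = ""      # internal owner of ongoing compliance

    # Hypothetical entry for an Annex III use case deployed in recruitment:
    example = AISystemRecord(
        name="CV screening assistant",
        operator_role="deployer",
        risk_category="high",
        in_scope=True,
        obligations=["human oversight", "log retention", "inform workers' representatives"],
        responsible_function="HR / Compliance",
    )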

Although some of the stringent obligations placed on high-risk AI systems may create obstacles for smaller companies seeking to break into the AI market, compliance with the provisions of the Act is likely to provide businesses with a competitive edge and to promote the ethical and responsible implementation of AI systems. This being said, compliance can be achieved by developing and executing a plan to ensure that appropriate accountability and governance frameworks, risk management and control systems, quality management, monitoring and documentation are in place when the tiered compliance obligations arising from the Act take effect.

For information on non-compliance with the Act and applicable fines, the timeline for the coming into force of certain provisions of the Act and AI systems falling outside the scope of the Act, please visit our first article in this series.

This is the third in a series of articles exploring the effect of the AI Act across various sectors and industries.

 

Click here for the first article in this series

 

Click here for the second article in this series

 

Footnotes:

 

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence.
[2] Defined under article 3 of the Act as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
[3] Defined under article 3 of the Act as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”.
[4] Regulated under article 5 of the Act.
[5] Classification rules for high-risk AI systems are regulated under article 6 of the Act.
[6] Regulated under article 51 of the Act.
[7] Regulated under article 5 of the Act.
[8] Section 2 – Requirements for high-risk AI systems, articles 8 – 15 of the Act.
[9] Regulated under article 16 of the Act.
[10] Regulated under article 17 of the Act.
[11] Regulated under article 18 of the Act.
[12] Regulated under article 19 of the Act.
[13] Regulated under article 43 of the Act.
[14] Regulated under article 47 of the Act.
[15] Regulated under article 48 of the Act.
[16] Regulated under article 49 of the Act.
[17] Regulated under article 20 of the Act.
[18] Regulated under article 21 of the Act.
[19] Regulated under article 22 of the Act.
[20] Regulated under Section 1 of Chapter IX of the Act.
[21] Regulated under article 26 of the Act.
[22] Regulated under article 27 of the Act.
[23] Regulated under articles 23 and 24 of the Act.
[24] Regulated under article 25 of the Act.
[25] Regulated under article 57 et seq. of the Act.

 

 

This document does not purport to give legal, financial or tax advice. Should you require further information or legal assistance, please do not hesitate to contact [email protected]
