The EU AI Act: A General Overview 

September 2024 – Claude Micallef Grimaud and Nicole Bonett

This article is part of our EU AI Act series, which explores the effect of the AI Act across various industries and sectors.

Overview & Applicability Timeline

The Artificial Intelligence Act (“AI Act” or “Regulation”), officially Regulation (EU) 2024/1689, is a groundbreaking legislative framework designed to address the benefits and risks of AI technologies. As AI becomes increasingly integrated into various industries, the AI Act aims to ensure that these technologies are deployed safely, ethically and transparently by establishing rules for AI Systems throughout the European Union (“EU”).

The AI Act entered into force across the EU, including Malta, on 1 August 2024. However, the AI Act’s provisions come into effect on different dates as follows:

2 February 2025 – Chapters I and II of the AI Act, comprising, respectively, the AI Act’s general provisions (scope, definitions, etc.) and the provisions on prohibited AI practices that present an unacceptable level of risk, will apply after a six-month implementation period.

2 August 2025 – The provisions of the AI Act addressing the establishment of notifying authorities and notified bodies by Member States, which are crucial for high-risk AI Systems, together with the chapters addressing general-purpose AI models, governance and penalties (except for the fines applicable to providers of general-purpose AI models), and Article 78 on confidentiality, will apply after a twelve-month implementation period.

2 August 2026 – Most of the provisions of the AI Act will apply after a two-year implementation period. These include the chapters covering high-risk AI Systems, the transparency obligations for providers and deployers of certain AI Systems, the measures in support of innovation, the EU database for high-risk AI Systems, post-market monitoring, information sharing and market surveillance, codes of conduct and guidelines, and delegation of power and committee procedure.

2 August 2027 – The classification rules for high-risk AI Systems that are safety components of products covered by the EU harmonisation legislation listed in Annex I (Article 6(1)), and the corresponding obligations, will apply after a three-year implementation period.

Scope

The scope outlined in Article 2 of the AI Act is particularly noteworthy as it specifies the entities to which the Regulation applies. The AI Act covers an extremely broad range of actors involved in the AI value chain, both within and outside the EU, including:

  • Providers of AI Systems;
  • Deployers of AI Systems;
  • Providers and deployers from third countries, when the AI System’s output is used within the EU;
  • Importers and distributors of AI Systems;
  • Manufacturers of products incorporating AI Systems;
  • Authorised representatives of AI System providers; and
  • Affected individuals located within the EU.

An ‘AI System’ is defined (in Article 3 of the AI Act) as a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

A Risk-Based Approach

The AI Act adopts a risk-based approach to AI regulation. Its primary focus is to ensure product safety and promote the uptake of trustworthy AI by categorising AI Systems according to the level of risk they pose. Risk is evaluated based on the probability of harm occurring and the severity of that harm. In this context, harm can be either material or immaterial, covering a wide range of potential impacts.

The risk levels contemplated by the AI Act fall into four groups, each of which determines the level of regulatory scrutiny the AI Systems within it will face:

(1) Unacceptable risk: This category comprises AI Systems that the AI Act explicitly bans because they pose a clear threat to human rights and safety. It includes AI used for social scoring, real-time remote biometric identification in publicly accessible spaces, and systems designed to manipulate human behaviour or exploit vulnerabilities in ways that cause, or are likely to cause, significant harm. These AI Systems are banned across the EU as of 2 February 2025, subject to narrow exceptions, notably for certain law enforcement purposes.

(2) High risk: AI Systems falling under this category pose a high level of risk, whether they serve as safety components in a product or function independently as standalone products. AI applications in critical areas such as healthcare, law enforcement, employment and education are considered high risk. AI Systems such as AI-enabled medical devices, hiring algorithms and systems used in law enforcement must adhere to strict regulatory requirements. These systems must undergo a conformity assessment, in certain cases carried out by a notified third party, before being placed on the market or put into service. Where the AI System is a safety component of a product covered by the EU harmonisation legislation listed in Annex I (such as the Medical Devices Regulation and the Toy Safety Directive, among others), the assessment also verifies compliance with that legislation. Additionally, high-risk systems must meet a set of horizontal requirements for trustworthy AI, and the AI Act imposes obligations on various stakeholders along the value chain, including providers, deployers, distributors and importers. As noted above, the chapter covering high-risk AI Systems will apply from 2 August 2026, although the section covering notifying authorities and notified bodies will apply from 2 August 2025.

(3) Limited risk: These AI Systems pose a lesser risk and do not significantly affect protected legal interests or decision-making processes, whether human or automated. Limited-risk systems generally fulfil one or more of the following conditions: (1) they are designed for narrow, specific tasks that pose minimal risk; and/or (2) they are intended to enhance a previously completed human activity. Examples include chatbots and emotion recognition systems. While not subject to the same rigorous checks as high-risk systems, these systems must still comply with transparency obligations: for instance, users must be informed that they are interacting with AI, and AI-generated content must be clearly labelled as such. As noted above, these transparency obligations will apply from 2 August 2026.

(4) Minimal risk: AI Systems in this category pose no substantial risk and are free to be used without restrictions. Examples include spam filters and AI-based video or audio enhancement tools.

General Purpose AI (GPAI) models

The AI Act places particular attention on General Purpose AI (‘GPAI’) models because of their versatility and potential broader impact across industries. These are defined as AI models, including those trained on large datasets using self-supervision at scale, which display significant generality and are capable of performing a wide range of tasks. This definition applies regardless of how the AI model is marketed. However, this definition does not apply to models specifically intended for research, development or prototyping before their placement on the market.

The AI Act includes a specific regulatory framework for GPAI providers, recognising their unique role and the wide range of applications to which their models might be adapted. As with AI Systems, the requirements applicable to a GPAI model depend on the degree of risk it poses. More rigorous requirements apply to GPAI models classified as posing systemic risk, mainly because of the large computational resources used to train them or their widespread societal impact. Such requirements include reporting, technical documentation and ongoing assessments.

AI Systems That Are Out of Scope

Most AI Systems likely fall outside the scope of the AI Act, as it does not aim to address all AI-related concerns. The majority of AI Systems are unlikely to pose significant risks to health, safety or other critical areas, and may be governed by other legislation. The AI Act is designed to regulate only the most intrusive AI services.

The AI Act focuses not on the quantity of AI Systems in use but on the nature and quality of the services they provide: the more invasive the service, the more likely it is to fall within the AI Act’s remit.

Penalties

The AI Act imposes significant penalties for non-compliance, with the most severe fines applying to violations of its rules regarding prohibited AI uses. Offenders could face an administrative fine of up to EUR 35 million or 7% of their global annual turnover, whichever is higher.

For breaches of other specified provisions, fines can reach EUR 15 million or 3% of worldwide turnover, whichever is higher.

Additionally, providing incorrect, incomplete or misleading information to notified bodies or national competent authorities could result in fines of up to EUR 7.5 million or 1% of annual worldwide turnover, whichever is higher. For small and medium-sized enterprises (SMEs) and start-ups, the same percentages and amounts apply, except that each fine is capped at whichever of the two figures is lower.

There is also a penalty framework specific to providers of GPAI models under Article 101 of the Regulation. These providers could face penalties of up to EUR 15 million or 3% of annual worldwide turnover, whichever is higher, for intentionally or negligently violating the AI Act’s provisions. Penalties can also be imposed for failing to comply with requests for information or documentation, or for providing misleading data, as well as failing to grant the Commission access to the GPAI model for evaluation.
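By way of illustration, the following minimal Python sketch shows how the fine ceilings described above combine a fixed amount with a percentage of turnover (the higher of the two for most undertakings, the lower for SMEs and start-ups). The function and tier names are our own hypothetical labels, not terms from the Regulation.

# Illustrative sketch of the AI Act's administrative fine ceilings as
# described in this section. Function and tier names are hypothetical.

def fine_ceiling(turnover_eur: float, fixed_eur: float, pct: float,
                 is_sme: bool = False) -> float:
    """Upper limit of the fine for a given worldwide annual turnover.

    For most undertakings the ceiling is whichever of the fixed amount
    or the percentage of turnover is higher; for SMEs and start-ups it
    is capped at whichever of the two is lower.
    """
    percentage_eur = pct * turnover_eur
    if is_sme:
        return min(fixed_eur, percentage_eur)
    return max(fixed_eur, percentage_eur)

# The tiers described above: (fixed amount in EUR, share of turnover).
TIERS = {
    "prohibited AI practices":        (35_000_000, 0.07),
    "other specified provisions":     (15_000_000, 0.03),
    "misleading info to authorities": (7_500_000, 0.01),
    "GPAI provider violations":       (15_000_000, 0.03),
}

if __name__ == "__main__":
    turnover = 2_000_000_000  # e.g. EUR 2 billion worldwide turnover
    for tier, (fixed, pct) in TIERS.items():
        print(f"{tier}: up to EUR {fine_ceiling(turnover, fixed, pct):,.0f}")
    # An SME with EUR 10 million turnover faces the lower of the two values:
    print(f"SME, prohibited practices: up to EUR "
          f"{fine_ceiling(10_000_000, 35_000_000, 0.07, is_sme=True):,.0f}")

For a company with EUR 2 billion in worldwide turnover, the percentage dominates each tier (e.g. up to EUR 140 million for prohibited practices), whereas an SME with EUR 10 million in turnover would face at most EUR 700,000 for the same breach.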

The AI Act also outlines the rights of individuals and organisations to submit complaints to market surveillance authorities, request explanations for automated decision-making and report non-compliance.

Concluding Remarks

The EU AI Act marks a pivotal moment in global AI governance, setting standards that could influence AI regulation elsewhere in the world. By classifying AI Systems based on risk, the AI Act strikes a balance between promoting innovation and ensuring that AI technologies are used responsibly. For industries and businesses operating within the AI space, understanding the obligations set out by the AI Act will be crucial both for compliance and for maintaining consumer trust.

At the time of writing, no Maltese implementing measures have yet been brought into effect.

This is the first in a series of articles exploring the effect of the AI Act across various sectors and industries. Please subscribe to our newsletter to receive important legal updates.

This document does not purport to give legal, financial or tax advice. Should you require further information or legal assistance, please do not hesitate to contact the authors and/or anyone from our Technology Law team at [email protected]
