Hi ChatGPT, how do you feel about the upcoming EU AI Act? 

June 2023 - Jan Clinck, ALTIUS/Tiberghien

On 14 June 2023, the European Parliament adopted its position on the draft AI Act, bringing an EU regulation for generative AI and other AI systems one step closer. This blog highlights the key points of the current draft text, taking into account the latest proposed amendments.

A (draft) AI Act?

In April 2021, the European Commission published a first draft text of the so-called “AI Act”, which aims to create a set of harmonised rules for the development, placing on the market, and use of AI in the European Union, to ensure AI systems are used in a safe, ethical, and trustworthy manner while protecting people’s rights.

Since then, the text has been modified and debated extensively. In particular, ChatGPT’s recent mainstream introduction and the attention paid to such generative AI systems have triggered considerable clarifications and additional obligations for the providers of such AI systems.

As the AI Act will be a regulation, it will be binding in its entirety and directly applicable in all the Member States.

What are the main components of the draft AI Act?

Central to the AI Act is a risk-based approach. It is based on six quality principles:

- human agency and oversight

- technical robustness and safety

- privacy and data governance

- transparency

- diversity, non-discrimination and fairness

- social and environmental well-being

These principles are then turned into specific obligations. Which obligations an entity must comply with will depend on (a) the nature of the entity and (b) the AI system concerned.

a. To whom does the draft AI Act apply?

The AI Act will apply in the first place to providers placing AI systems on the EU market. As with the GDPR, being established in the EU is not a requirement to fall within the AI Act’s scope, so US or Chinese companies may be subject to this legislation too. Furthermore, manufacturers, authorised representatives, distributors and importers will also have to meet certain obligations.

b. What are the different AI systems and obligations?

The following is a brief overview of the most important points for the different AI systems set out in the draft:

AI systems with unacceptable risks

There is a general prohibition on placing such AI systems on the EU market, as they are considered to contradict the Union values of respect for human dignity and fundamental rights. Examples include:

- social scoring (classifying people based on their social behaviour or personal characteristics);

- emotion recognition systems in law enforcement, the workplace or educational institutions;

- “real-time” remote biometric identification systems in publicly accessible spaces.

High-risk AI systems

This is the category to be most strictly regulated. It concerns AI systems posing a significant risk of harm to people’s health, safety, fundamental rights, or the environment.

The AI Act imposes very broad requirements with which such AI systems must comply, as well as far-reaching obligations for providers that want to market them. These include implementing a quality management system, conducting a conformity assessment (CE marking), conducting a “fundamental rights impact assessment”, and meeting data training, data governance and cybersecurity requirements. Examples of high-risk AI systems include:

- AI systems used for the operation of critical infrastructure (e.g. the supply of electricity);

- automatic recruitment tools;

- creditworthiness scores;

- AI recommender systems used by social media platforms that are designated as very large online platforms under the Digital Services Act.

General purpose AI systems

These are AI systems designed to perform a wide range of tasks that broadly apply across different domains. They are capable of performing functions such as recognising and understanding images and speech, generating audio and video content, detecting patterns, answering questions, translating languages, and more.

This category has only recently been introduced and also encompasses generative AI (e.g. ChatGPT) and “foundation models” (e.g. GPT-3.5). A tiered approach is proposed for these models, with a stricter regime for foundation models.

Obligations include: conducting risk assessments, obligations relating to the design of the foundation model (including from an environmental impact perspective), registering the foundation model in an EU database, notifying individuals that they are interacting with an AI system, and publishing a summary of the training data protected under copyright law. Examples of general purpose AI systems include:

- chatbots;

- translation tools;

- virtual assistants.

All other AI systems

In principle, all AI systems must provide certain information to users to ensure transparency.

What about enforcement?

Enforcement will mainly be conducted by the national supervisory authorities, which will have extensive competences, including imposing fines (with amounts similar to those under the GDPR). However, placing AI systems with unacceptable risks on the market may be punished by administrative fines of up to 40 000 000 EUR or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. For example, a company with a worldwide annual turnover of 1 000 000 000 EUR could thus face a fine of up to 70 000 000 EUR.

When do companies have to comply with the AI Act?

A final text is expected by the end of the year, and the AI Act will then apply two years after its entry into force.

Finally, how did ChatGPT feel about this AI Act?

“As an AI language model, I don't have personal feelings or opinions”.

If you would like more information or to discuss the possible implications for your organisation, please do not hesitate to contact our team at Altius.

 
