Heuking
  July 5, 2023 - Germany

Artificial Intelligence: The European Parliament clears the way for the new EU Regulation (AI Act)
  by Hans Markus Wulf, Johannes Rolfs

Following the European Commission's submission on April 21, 2021 of the first Draft of the Artificial Intelligence Act (“AI Act”) for regulating artificial intelligence (AI) (we reported here), the European Parliament published its final position on June 14, 2023. The next step is therefore the trilogue between the Parliament, the Council and the Commission. Since the European Parliament is to be re-elected in less than a year, the regulation is expected to be adopted quickly and with only minor changes. Now that the Parliament has reached agreement, companies can draw up more concrete plans and put their (planned) applications to the test against the new rules. This article is intended to assist you in this regard.

1. Concept of AI Systems 

The scope of application of the AI Act is very broad and can cover numerous applications which one would not expect to be AI-related at first glance. This is all the more the case after the European Parliament changed the definition of Artificial Intelligence Systems (AI Systems) once again. Nevertheless, not every piece of software qualifies as an AI System. The definition rather covers a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The European Parliament thus adopts the AI definition developed by the OECD in 2018.

What is decisive is that an application autonomously develops recommendations, such that decisions based on them can in some way be influenced by the AI. Examples of AI under this definition are varied and range from credit scoring systems and assistants for visually impaired persons to autonomous driving systems. The case of credit scoring systems shows that a sufficient indicator for the presence of AI is that a system outputs specific values based on probabilities. This can also be the case, for example, with applications that independently suggest texts or text blocks, specify or prioritize action steps, or provide information on the probability of certain events. “Genuine” intelligence comparable to the human brain is, however, not required. To this extent, almost any software used in everyday work to perform corporate tasks can fall under the AI Act. However, not every AI also implies a need for action.
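
For illustration only, the following minimal sketch shows the kind of probability-based output the definition is aimed at: a simple credit scoring routine that turns applicant data into a default probability and a recommendation. All feature names, weights and thresholds are hypothetical and chosen purely for demonstration.

```python
import math

# Hypothetical, hard-coded weights; in a real system they would be learned from historical data.
WEIGHTS = {"income": 0.00003, "late_payments": -0.9, "years_employed": 0.15}
BIAS = -1.0

def default_probability(applicant: dict) -> float:
    """Estimate a probability of credit default from a few applicant attributes."""
    score = BIAS + sum(WEIGHTS[key] * applicant[key] for key in WEIGHTS)
    return 1 / (1 + math.exp(score))  # logistic function: higher score means lower default probability

def recommend(applicant: dict) -> str:
    """Turn the probability into a recommendation that can influence the lending decision."""
    p = default_probability(applicant)
    return "approve" if p < 0.2 else "manual review" if p < 0.5 else "reject"

print(recommend({"income": 45000, "late_payments": 1, "years_employed": 4}))
```

Even such a simple scorer produces predictions and recommendations that influence a real environment and would therefore have to be assessed against the AI Act.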

2. Levels of risk 

Whether a company that uses an AI System must take action and implement comprehensive measures depends on the risk associated with the use of the respective application. To this end, the AI Act defines four levels of risk, which are distinguished based on the following criteria:

a) AI with unacceptable risk

Art. 5 AI Act defines practices in the area of artificial intelligence which are associated with such a high level of risk for persons, security and fundamental rights that the legislature considers them simply unacceptable. Here the European Parliament has recently made further changes, so that more AI Systems are now prohibited than under the previous draft. Accordingly, systems with the following functions are impermissible:

·         techniques for the subconscious influencing of persons in order to influence their behaviour in a way that can result in psychological or physical harm to these or other persons; this does not include approved therapeutic purposes carried out with the consent of the person concerned;

·         the exploitation of the weaknesses or vulnerabilities of persons, for example based on known or predicted personality traits, social or economic situation, age or physical or mental impairments, in a way that can result in psychological or physical harm to these or other persons. This also includes algorithmic discrimination based on gender;

·         biometric categorization based on sensitive or protected features or traits (e. g. gender, ethnic affiliation, nationality, religion, political orientation); here too, there is an exception for therapeutic purposes;

·         social scoring by authorities;

·         biometric real-time remote identification systems (the exceptions initially provided for will be deleted) as well as the retrospective biometric evaluation of surveillance footage of public spaces;

·         predictive policing;

·         creation of facial recognition databases via web-scraping, particularly from social media or surveillance cameras;

·         emotion recognition systems in law enforcement, in border protection, at the workplace and in educational institutions.

b) AI with high level of risk (high-risk AI)

High-risk AI may be used only if comprehensive requirements are met, such as transparency obligations, approval for the European market as well as a risk and quality management system (see Section 3 lit. a). Due to this high level of requirements, it is important to recognize high-risk AI and take appropriate measures. Otherwise, there is a risk of fines of up to EUR 20 million or up to 4 % of worldwide annual revenue, whichever is higher.
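
For illustration, the "whichever is higher" mechanism can be expressed as a simple calculation (the revenue figure below is purely hypothetical):

```python
def maximum_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Upper limit of the fine: EUR 20 million or 4 % of worldwide annual revenue, whichever is higher."""
    return max(20_000_000, 0.04 * worldwide_annual_revenue_eur)

# Hypothetical example: EUR 1 billion in worldwide annual revenue -> cap of EUR 40 million
print(maximum_fine_eur(1_000_000_000))
```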

Which applications are considered high-risk AI is specified in Annex III of the AI Act, which can be amended by the Commission in accordance with Art. 7 AI Act. It is therefore advisable to keep an eye on Annex III in the future. Currently, the applications with the following intended uses are considered high-risk AI:

·         applicant selection;

·         credit checks;

·         access to studies and training, assessment of examination results in studies and training, assessment of the appropriate level of education as well as detection of prohibited conduct during tests;

·         assessment of persons in connection with law enforcement and similar measures restricting freedom;

·         AI for supporting judges and authorities in the application of the law;

·         influencing elections and voting decisions (e. g. by controlling political campaigns);

·         asylum and visa checks;

·         biometric or biometric-based identification of natural persons that goes beyond the mere verification of personal information (mere admission control, for example, is not included);

·         use of AI in the operation of critical infrastructures such as water, gas and electricity supply;

·         access to basic public services;

·         recommendation systems of very large online platforms (VLOPs).

In addition, classification as high-risk AI, with the associated obligations under Art. 6(2) AI Act, requires that the system poses a significant risk to the health, safety or fundamental rights of natural persons. When assessing a significant risk to fundamental rights, particular attention must be paid to how the AI is used; a threat of curtailing the freedom of expression, for example, can justify classification as high-risk AI. AI applications for medical diagnosis and treatment are, in any case, usually included.

c) AI with low risk 

Where AI with low risk is used, certain transparency requirements must already be complied with. This concerns, for example, systems which interact with natural persons, such as chatbots, or deepfakes.

d) AI with minimal or no risk

The use of AI with minimal risk, by contrast, should be possible without restriction. This includes, for example, search algorithms, spam filters or AI-supported video games. In addition, many AI systems used in manufacturing companies fall under this category, e. g. systems for predictive maintenance or industrial applications which do not process any personal data or make predictions affecting natural persons.

e) Generative AI

For the first time, the current Draft also contains provisions expressly dealing with the foundation models on which so-called generative AI is based, including e. g. the ChatGPT and LaMDA language models (Art. 28b AI Act). These provisions apply above all to the providers of such systems and include transparency obligations, an obligation to register, appropriate safeguards against unlawful content as well as measures to protect the freedom of expression. Moreover, generative systems must disclose when content has been generated by AI.

There are no exceptions for open-source AI in the area of foundation models, even where high-risk AI is not concerned (the source code of LLaMA, Meta's large language model, for example, is publicly available). The mere use of generative AI such as ChatGPT can also be restricted or prohibited if it is used for the purposes mentioned above. In this case, the general rules of the AI Act apply in addition. In the area of generative AI, a few changes are still to be expected from the trilogue negotiations, as this section was only inserted by the European Parliament.

3. Obligations for companies

a) High-risk AI

In addition to the provisions that apply to AI with low or minimal risk, users of high-risk AI have comprehensive obligations under Art. 29 AI Act to ensure the safe use of the systems:

·         implementing appropriate technical and organizational measures to use the systems in accordance with the instructions for use;

·         monitoring of the decisions and results by an appropriately qualified natural person;

·         regular monitoring and adjustment of cyber security measures;

·         ensuring that only input data are used which are representative and consistent with the intended use of the high-risk AI system;

·         monitoring the operation of AI systems;

·         informing the operator of the AI system and the responsible authorities if its use can pose a risk to the health and safety of persons in general, health and safety at the workplace, consumer protection, the environment, public safety or other public interests, or in the event of a serious incident or malfunction;

·         consultation of employee representatives and employee information;

·         where required, a data protection impact assessment pursuant to Art. 35 GDPR and publication of a summary;

·         if decisions are made concerning natural persons, they must be informed that they are subject to the decision of a high-risk AI;

·         cooperation with the responsible national authorities;

·         prior to initial use, a risk impact assessment must be carried out containing the following elements: purpose of the use of the AI, planned geographic and temporal scope, categories of natural persons concerned, assessment of compatibility with Union and Member State legislation on fundamental rights, possible impairments of fundamental rights, risks for particularly vulnerable groups, a plan for avoiding risks to fundamental rights, and the planned control systems.

If the AI system is placed on the market or put into operation in the company's own name, the following obligations apply in addition:

·         setting up a risk management system as well as quality management system;

·         logging the operation of the high-risk AI;

·         provision of transparent information for the user;

·         ensuring supervision by a natural person;

·         provision of specifications for the input data or other relevant information regarding the data sets used, including their limitations and assumptions, taking into account the intended use and the reasonably foreseeable misuse of the AI system;

·         creation of technical documentation of the high-risk AI system;

·         retention of logs automatically created by the high-risk AI system;

·         conformity assessment procedure before the AI system is placed on the market or put into operation;

·         registration of the high-risk AI;

·         affixing the CE marking as well as the company's name, address and contact information;

·         responding to inquiries from competent national authorities.

b) AI with low risk & AI with minimal risk 

According to Art. 4a AI Act, the following minimum requirements, which should in principle be complied with by all AI providers, also apply to AI with low risk:

·         AI is to be developed so that it can be controlled and supervised by natural persons in an appropriate manner;

·         the risk of unintentional damage must be minimized when developing and using AI; this also applies to resilience against (cyber) attacks and unlawful use;

·         informing users that they are dealing with an AI, as well as of the risks associated with its use; traceability and explainability of results;

·         prevention of discrimination;

·         monitoring of the impact on individuals, society and democracy.

Compliance with these principles can also be ensured by adhering to harmonized standards, technical specifications and codes of conduct in accordance with Art. 69 AI Act. These codes of conduct can be drawn up not only by individual AI providers but also by interest groups.

In addition, AI systems intended for interaction with natural persons must be designed so that the user is informed that AI is being used, unless this is already obvious. For AI with minimal or no risk there are no further requirements, but it is recommended to draw up a code of conduct for such AI as well.

4.  Check steps

The following steps can be used to determine whether an application falls under the AI Act and, if so, which provisions apply (a simplified code sketch of this check follows after Step 3):

Step 1 – Is this an AI system?

·         machine-supported system (in particular software);

·         autonomous development of results based on specified (including implicit) objectives (in particular the output of values based on probabilities);

·         due to the broad scope of application, this should usually be assumed in case of doubt.

If no: End of the check; no measures according to the AI Act are required.

If yes:

Step 2 – Is this an AI with unacceptable risk?

This is the case if the AI has one of the following functions:

·         subconscious influencing of the behaviour of persons;

·         the exploitation of persons’ weaknesses and vulnerabilities;

·         biometric categorization based on sensitive or protected features or traits (e. g. gender, ethnicity, nationality, religion, political orientation);

·         social scoring;

·         biometric real-time remote identification;

·         predictive policing;

·         creation of facial recognition databases via web-scraping;

·         emotion recognition systems, in particular at the workplace and in educational institutions;

·         retrospective biometric evaluation of surveillance of public spaces.

If yes: End of the check; the AI system may not be used.

If no:

Step 3 – Is this a high-risk AI?

This is the case if the AI has one of the following functions:

·         applicant selection; access to studies and training, assessment of examination results in studies and training, assessment of the appropriate level of education as well as detection of prohibited conduct during tests;

·         credit checks;

·         asylum and visa checks;

·         supporting judges and authorities in the application of the law; assessment of persons in connection with law enforcement and similar measures restricting freedom;

·         influencing elections and voting decisions (e. g. by controlling political campaigns);

·         biometric identification of natural persons (not: mere identity verification);

·         operation of critical infrastructures such as water, gas and electricity supply and transport;

·         recommendation systems of very large online platforms (VLOPs);

and

·         the system poses a significant risk to the health, safety or fundamental rights of natural persons.

If yes: High-risk AI, which must comply with specifications presented under Section 3 lit. a) and b) (see above).

If no: This is an AI with low or no risk. Where applicable, the requirements under Section 3 lit. b) must be complied with. If the system is a chatbot or generates deepfakes, it must be labelled accordingly.
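
The decision logic of Steps 1 to 3 can be summarized in the following simplified sketch. It is illustrative only: the keyword lists are abbreviated paraphrases of Art. 5 and Annex III AI Act, the function and category names are our own, and the result of such a check cannot replace a legal assessment of the individual case.

```python
from enum import Enum

class RiskCategory(Enum):
    NOT_AI = "not an AI system - no measures under the AI Act"
    UNACCEPTABLE = "prohibited - the AI system may not be used"
    HIGH_RISK = "high-risk AI - obligations under Section 3 lit. a) and b)"
    LOW_OR_MINIMAL = "low or minimal risk - transparency and minimum requirements only"

# Abbreviated, illustrative keyword lists; the full catalogues are in Art. 5 and Annex III AI Act.
PROHIBITED_FUNCTIONS = {
    "subconscious influencing", "exploitation of vulnerabilities", "biometric categorization",
    "social scoring", "real-time biometric identification", "predictive policing",
    "facial recognition database via web-scraping", "emotion recognition",
    "retrospective biometric evaluation of public spaces",
}
HIGH_RISK_USES = {
    "applicant selection", "education and examinations", "credit checks",
    "asylum and visa checks", "law enforcement", "supporting judges",
    "influencing elections", "biometric identification", "critical infrastructure",
    "access to basic public services", "recommendation system of a VLOP",
}

def classify(is_ai_system: bool, functions: set[str], significant_risk: bool) -> RiskCategory:
    """Apply check steps 1-3 to a (manually prepared) description of the application."""
    if not is_ai_system:                                 # Step 1: AI system at all?
        return RiskCategory.NOT_AI
    if functions & PROHIBITED_FUNCTIONS:                 # Step 2: unacceptable risk?
        return RiskCategory.UNACCEPTABLE
    if functions & HIGH_RISK_USES and significant_risk:  # Step 3: high-risk use with significant risk?
        return RiskCategory.HIGH_RISK
    return RiskCategory.LOW_OR_MINIMAL

# Example: a probability-based credit scoring tool that significantly affects applicants
print(classify(True, {"credit checks"}, significant_risk=True).value)
```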

5. Conclusion

It is possible that the Draft of the AI Act now approved by the European Parliament will largely correspond to the final version. It is therefore advisable to carry out an internal assessment of company applications now on the basis of this Draft. The above checklist should be helpful in this regard. In this context, one should also take into account the other current legislative initiatives of the EU Commission relating to EU digital law, such as the Digital Services Act or the new IT security provisions under the recent DORA Regulation and the upcoming Cyber Resilience Act (see related article).