Following the submission of the first draft of the Artificial Intelligence Act (“AI Act”) for regulating artificial intelligence (AI) by the European Commission on April 21, 2021 (we reported here), the European Parliament published its final position on June 14, 2023. The next step is therefore the trilogue between the Parliament, the Council and the Commission. Since the European Parliament is due to be re-elected in less than a year, the regulation is expected to be adopted quickly and with only minor changes. Following the parliamentary agreement, companies can now draw up more concrete plans and put their (planned) applications to the test against the new rules. This article is intended to assist you in this regard.

1. Concept of AI Systems

The scope of application of the AI Act is very broad and can cover numerous applications which you would not expect to be AI-related at first glance. This is all the more the case after the European Parliament changed the definition of Artificial Intelligence Systems (AI Systems) once again. Nevertheless, not every piece of software meets the requirements of an AI System. The term rather covers a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The European Parliament thus adopts the AI definition developed by the OECD in 2018. What is decisive is that an application autonomously generates predictions or recommendations so that decisions based on them are in some way influenced by the AI. Examples of AI under this definition are varied and range from credit scoring systems to assistants for visually impaired persons to autonomous driving systems. As the example of credit scoring systems shows, a sufficient indicator for the presence of AI is that a system outputs specific values based on probabilities. This can also be the case, for example, with applications that independently suggest texts or text blocks, specify or prioritize action steps, or provide information on the probability of certain events. “Genuine” intelligence comparable to the human brain is, however, not required. To this extent, almost any software used in everyday work to perform corporate tasks can fall under the AI Act. However, not every AI System also implies a need for action.

2. Levels of risk

Whether a company that uses an AI System must take action and implement comprehensive measures depends on the risk associated with the use of the respective application. To this end, the AI Act defines four levels of risk, which are distinguished based on the following criteria:

a) AI with unacceptable risk

Art. 5 AI Act defines practices in the area of artificial intelligence which are associated with such a high level of risk to persons, safety and fundamental rights that the legislature considers them simply unacceptable. Here, the European Parliament recently made further changes, so that more AI Systems are prohibited than under the previous draft. Accordingly, systems with the following functions are impermissible:
· techniques for the subliminal influencing of persons in order to manipulate their behaviour in a manner that can result in psychological or physical harm to these or other persons; this does not include approved therapeutic purposes carried out with the consent of the person concerned;
· the exploitation of the weakness or vulnerability of persons, for example based on known or predicted personality traits, social or economic situation, age, or physical or mental impairments, in a manner that can result in psychological or physical harm to these or other persons; this also includes algorithmic discrimination based on gender;
· biometric categorization based on sensitive or protected features or traits (e. g. gender, ethnic origin, nationality, religion, political orientation); here too, there is an exception for therapeutic purposes;
· social scoring by authorities;
· real-time remote biometric identification systems (the exceptions initially provided for are to be deleted) as well as retrospective biometric evaluation of surveillance footage of public spaces;
· predictive policing;
· creation of facial recognition databases via web-scraping, particularly from social media or surveillance cameras;
· emotion recognition systems in law enforcement, in border protection, at the workplace and in educational institutions.
b) AI with high level of risk (high-risk AI)

High-risk AI may be used only if comprehensive requirements are met, such as transparency obligations, approval for the European market and a risk and quality management system (see Section 3 a) below). Due to this high level of requirements, it is important to recognize high-risk AI and take appropriate measures. Otherwise, there is a risk of fines of up to EUR 20 million or up to 4 % of worldwide annual turnover, whichever is higher. Which applications are considered high-risk AI is specified in Annex III of the AI Act, which can be amended by the Commission in accordance with Art. 7 AI Act. It is therefore advisable to keep an eye on Annex III in the future. Currently, applications with the following intended uses are considered high-risk AI:
· applicant selection;
· credit checks;
· access to studies and training, assessment of examination results in studies and training, assessment of the appropriate level of education, as well as detection of prohibited conduct during tests;
· assessment of persons in connection with law enforcement and similar measures restricting freedom;
· AI for supporting judges and authorities in the application of the law;
· influencing elections and voting decisions (e. g. by controlling political campaigns);
· asylum and visa checks;
· biometric or biometrics-based identification of natural persons that goes beyond the mere verification of a person's identity (e. g. mere admission control is therefore not included);
· use of AI in the operation of critical infrastructures such as water, gas and electricity supply;
· access to basic public services;
· recommendation systems of very large online platforms (VLOPs).
In addition, the classification as high-risk AI with the associated obligations under Art. 6(2) AI Act requires that the system poses a significant risk to the health, safety or fundamental rights of natural persons. When assessing the significant risk to fundamental rights, particular attention must be paid to how the AI is used; threats of curtailing the freedom of expression, for example, can justify the classification as high-risk AI. AI applications for medical diagnosis and treatment are, in any case, usually included.

c) AI with low risk

When using AI with low risk, certain transparency requirements must already be complied with. This concerns, for example, systems which interact with natural persons, such as chatbots, or which generate deepfakes.

d) AI with minimal or no risk

The use of AI with minimal risk should, however, be possible without restriction. This includes, for example, search algorithms, spam filters or AI-supported video games. In addition, many AI systems used in manufacturing companies fall under this category, e. g. systems for predictive maintenance or industrial applications which neither process personal data nor make predictions affecting natural persons.

e) Generative AI

For the first time, the current draft also contains regulations expressly dealing with the foundation models on which so-called generative AI is based, including, for example, the language models behind ChatGPT and LaMDA (Art. 28b AI Act). The regulations apply above all to the providers of these systems and include transparency obligations, a registration obligation, appropriate safeguards against unlawful content as well as measures to protect the freedom of expression. Moreover, generative systems must disclose when content has been generated by AI. There are no exceptions for open-source AI in the area of foundation models, even where high-risk AI is not concerned (for instance, the source code of LLaMA, the language model from Meta, is publicly available). The mere use of generative AI such as ChatGPT can also be restricted or prohibited if it is used for the above-mentioned purposes. In this case, the general rules of the AI Act apply additionally. In the area of generative AI, a few changes are still to be expected from the trilogue negotiations, as this section was only inserted by the European Parliament.

3. Obligations for companies

a) High-risk AI

In addition to the provisions that apply to AI with low or minimal risk, the users of high-risk AI have comprehensive obligations under Art. 29 AI Act to ensure the safe use of the systems:
· implementing appropriate technical and organizational measures to ensure that the systems are used in accordance with the instructions for use;
· monitoring of the decisions and results by an appropriately qualified natural person;
· regular monitoring and adjustment of cyber security measures;
· ensuring that only input data are used that are representative and consistent with the intended purpose of the high-risk AI system;
· monitoring the operation of AI systems;
· informing the operator of the AI system and the responsible authorities if its use can pose a risk to the health and safety of persons in general, health and safety at the workplace, consumer protection, the environment, public safety or other public interests, or in the case of a serious incident or malfunction;
· consultation of employee representatives and employee information;
· if needed, data protection impact assessment according to Art. 35 GDPR and publication of a summary;
· if decisions are made regarding natural persons, these persons must be informed that they are subject to the decisions of a high-risk AI;
· cooperation with the responsible national authorities;
· prior to the initial use, a risk impact assessment must be carried out which contains the following elements: the purpose of the use of the AI, the planned geographic and temporal scope, the categories of natural persons concerned, an assessment of compatibility with Union and Member State legislation regarding fundamental rights, possible impairments of fundamental rights, risks for particularly vulnerable groups, a plan for avoiding risks to fundamental rights, and the planned control systems.