In the ‘Proposal for a Regulation laying down harmonised rules on artificial intelligence’ (the “draft EU AI Act”), transparency is regulated by Articles 13 and 52. The former applies to Artificial Intelligence (“AI”) systems classified as high-risk, whilst the latter applies to limited-risk AI systems.
As explained in the previous briefing in this series of insights on AI and investment funds, the use of AI in investment services will generally fall under limited-risk AI systems, because high-risk AI systems as defined under the draft EU AI Act do not include AI systems utilised in the asset management industry. In practice, therefore, only Article 52 could have ramifications for EU investment services.
However, a closer analysis shows that even Article 52 will largely leave the investment services industry outside the scope of the transparency obligation of the draft EU AI Act. The transparency obligation of Article 52 applies principally to the three types of limited-risk AI systems outlined in Articles 52(1), 52(2) and 52(3) of the draft EU AI Act.
Article 52(1) of the draft EU AI Act provides that “Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.”1 With the exception of the fund’s administrator, who collects KYC documents from investors who may be either legal or natural persons, the asset management industry is mostly built on contractual agreements between legal persons rather than natural persons. This renders Article 52(1) irrelevant for investment managers and depositaries, whose contractual relationships are solely with the corporate vehicle, such as a SICAV or an LLP structure.
Article 52(2) imposes an obligation of transparency where AI systems for biometric categorisation are used on natural persons. Accordingly, this only affects the fund’s administrator, who is the only officer of the fund that interacts with investors.
Article 52(3) of the draft EU AI Act is essentially irrelevant for financial services as it relates to an AI system which “manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.”2
The fourth paragraph of Article 52 provides that Title IV of the draft EU AI Act does not derogate from Article 13 of the proposed regulation. Moreover, a recurring proviso in Articles 52(1), 52(2) and 52(3) of the draft EU AI Act is that Article 52 does not apply to “AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences.”3
The transparency obligation in Article 52 places an emphasis on human interaction with AI: the explanatory memorandum to the proposed regulation explains that the purpose of the draft EU AI Act is to strike a balance between AI and citizens’ rights as enshrined in the Charter of Fundamental Rights of the EU.
The omission of any reference to legal persons in Article 52 also reflects the principle of proportionality. In fact, the explanatory memorandum devotes an entire section to proportionality and states that “for other, non-high-risk AI systems, only very limited transparency obligations are imposed, for example in terms of the provision of information to flag the use of an AI system when interacting with humans.”4
The application of AI rules to financial services is mentioned in Recital 80 of the preamble of the draft EU AI Act which says that:
“[EU] laws on financial services include internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems….[T]he authorities responsible for the supervision and enforcement of the financial services legislation…should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems.”5
Directive 2014/91/EU of the European Parliament and of the Council of 23 July 2014 (“UCITS V”), which amends Directive 2009/65/EC on the coordination of laws, regulations and administrative provisions relating to UCITS (the “UCITS Directive”) governing UCITS funds, and Directive 2011/61/EU of the European Parliament and of the Council of 8 June 2011 on Alternative Investment Fund Managers (“AIFMD”), which regulates Alternative Investment Funds (“AIFs”) and AIF Managers (“AIFMs”), are the principal frameworks in EU asset management law.
The transparency requirements for UCITS funds are more rigorous, reflecting their nature as low-risk, retail-oriented financial products. The transparency requirements for riskier AIFs represent the bare-minimum disclosure obligations, which cannot be diluted further whilst retaining passporting rights in the EU’s internal market. Consequently, any future proposed legislation regulating AI in EU asset management would likely be drafted to meet the AIFMD transparency thresholds as a minimum requirement.
Chapter IV of the AIFMD is dedicated to transparency obligations, covering the annual report (Article 22 AIFMD), disclosure to investors (Article 23 AIFMD) and reporting to competent authorities (Article 24 AIFMD). These transparency obligations may be further expanded by EU Member States.
Taking the Maltese jurisdiction as an example, the Malta Financial Services Authority (“MFSA”) imposes further transparency requirements in Appendix 13 of the Investment Services Rules for Investment Services Providers Part BIII: Standard Licence Conditions Applicable to Investment Services Licence Holders which Qualify as Alternative Investment Fund Managers.
Appendix 13 contains various transparency specifications. Rule 3.03 of Appendix 13 requires that the licence holder “shall also inform investors of any changes with respect to custodian liability without delay”6, and Rule 3.06 requires disclosure of “the total amount of leverage employed.”7 If these two transparency criteria were hypothetically applied to AI utilisation in a fund, the AIFM would need to disclose who is liable for the AI utilisation and what percentage of the Net Asset Value is being managed by AI.
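By way of illustration only, the minimal sketch below shows how such a hypothetical disclosure figure could be computed. The strategy labels, field names and amounts are assumptions made for the example; they are not drawn from the MFSA rules, the AIFMD or any actual fund documentation.

```python
# Hypothetical illustration only: labels, names and figures are assumptions,
# not taken from the MFSA rules or any fund's actual disclosures.

def ai_disclosure_figures(positions, total_nav, gross_exposure):
    """Return the two hypothetical disclosure figures discussed above:
    the share of NAV managed by AI-driven strategies and total leverage."""
    ai_nav = sum(value for strategy, value in positions if strategy == "ai")
    ai_share = ai_nav / total_nav            # fraction of NAV managed by AI
    leverage = gross_exposure / total_nav    # leverage as gross exposure over NAV
    return ai_share, leverage

# Example: EUR 40m of a EUR 100m NAV managed by AI, EUR 150m gross exposure
positions = [("ai", 25_000_000), ("human", 60_000_000), ("ai", 15_000_000)]
share, lev = ai_disclosure_figures(positions, 100_000_000, 150_000_000)
print(f"NAV managed by AI: {share:.0%}; total leverage: {lev:.2f}x")
```

On these assumed figures, the AIFM would disclose that 40% of the Net Asset Value is managed by AI and that total leverage stands at 1.50x.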
Although the transparency obligation of Article 52 of the draft EU AI Act and the transparency obligations of the AIFMD regime share similarities, the two frameworks also differ in important respects.
A particular legal contrast is that whilst the AIFMD entails ongoing disclosure, the draft EU AI Act requires the provision of information at the point of deployment. Under the AIFMD, AIFMs must communicate and report to investors and regulators on the AIF on an ongoing basis, whereas providers of limited-risk AI systems need only provide information about the system when it is deployed. This difference stems from the fact that an AIF operates continuously, whereas an AI system is generally developed and deployed once. In other words, the same goal of transparency is reached through two different methods: the draft EU AI Act creates a one-time obligation whilst the AIFMD creates an ongoing one.
Perhaps the EU legislator limited the inclusion of investment services to the preamble of the draft EU AI Act to ensure that the principle of copulatio verborum indicat acceptionem in eodem sensu is not used out of context to create misunderstandings between the legal principles governing transparency in investment services and those governing AI in general.
Appropriately, the draft EU AI Act refrains from delving into specific industries, thereby ensuring that the transparency principles it sets are not subject to diverging interpretations generated by the specific rules of specialised regulated industries within the EU’s internal market.