The Alan Turing Institute has published a new report, "AI in Financial Services", looking at the use of artificial intelligence in the FS sector. We will watch with interest how the report’s findings evolve into more specific regulatory rules or guidance.
As part of its ongoing public policy collaboration with the FCA, the Alan Turing Institute has just published a new report on “AI in Financial Services”, looking at the use of artificial intelligence in the financial services sector. Wide-ranging in scope, it explores a number of the recurrent themes in recent industry body, regulator and government-led reports into the development and use of AI, particularly around the guiding ethical principles that sit behind AI solutions and their deployment. So far so familiar then for those already working with AI in a financial services context. The report does, though, highlight some important principles that signpost the direction of travel that the FCA is likely to pursue in future supervisory (and potentially regulatory) regimes, reflecting the anticipated sector-led approach to AI oversight that the UK Government has so far indicated will form the basis of future AI regulation in the UK.
Here Alex Kirkhope, John Hartley and Sam Tyfield, partners at Shoosmiths, set out some key takeaways from the report that are likely to feed into firms’ (and their supply chain’s) AI governance and deployment strategies.
Technology and Outsourcing
From a technology and outsourcing perspective, customers and suppliers alike will be interested in the report’s take on issues affecting the planning, design and oversight of AI solutions throughout their lifecycle, plugging into FS firms’ existing duties and accountability to protect consumers and avoid undue exclusion or discrimination in how products are offered to market. We anticipate these issues are likely to flow through not only to future regulatory frameworks but also to the contracting models that sit around the deployment of AI technology in the FS sector. Below are some examples the report picks out on this theme:
- Data quality – in addition to the overarching question of compliance with data protection legislation, there is a need to understand and continually assess the sources and quality of data underpinning the AI, both at the outset and through its development, and on an ongoing basis to assess real world outputs. This would include continual analysis of the accuracy and completeness of the datasets involved, but also their ‘conceptual validity’ (does the data measure what it is meant to measure) and ‘representativeness’ (does it provide an adequate representation of real world scenarios for the intended purpose); a simplified example of such checks is sketched after this list. If systems extend to more sensitive information, such as biometric data or facial recognition, these risks will only be magnified.
- Supply chain impacts – the inherent reliance of AI solutions on external (often third party) data sources, the use of off-the-shelf tools or software not originally designed to support AI use cases, and the tendency for the delivery of complex or bespoke AI solutions to be outsourced all contribute to enhanced supply chain, governance and liability risks, particularly where AI solutions drive direct consumer outcomes. Firms will need to build this into their oversight models and ensure their contract terms adequately address roles, responsibilities and allocation of liability for failures through the supply chain. This is likely to involve even more stringent contractual levers to access and interrogate supplier systems and information than apply in existing traditional material outsourcing arrangements. Arguably, an adapted form of the typical ‘systems integrator / management’ role (whether delivered internally or by part of the third party AI supply chain) may also be helpful in providing a clear and robust control mechanism for overall supply chain governance and accountability.
- System Performance – the complexity of performance measurement regimes will only be enhanced in the context of AI deployments, because performance is unlikely to be measured against a static solution. By their nature, AI systems rely on dynamic inputs over time, and their outputs are correspondingly prone to change. Having in place meaningful and adaptive performance regimes that measure not just the typical technical metrics (availability, response times and so on) but also the system’s overall stability in delivering intended outcomes will be crucial in allowing FS providers to demonstrate they have oversight of the system and, as importantly, can take action where that stability is not maintained.
- Human oversight – responsible use of AI will always involve a degree of human oversight, whether at the design and development stage or in ongoing operation. Ensuring that human oversight and understanding are both possible and actively embedded in performance and governance processes will be a crucial element of AI system design.
- Social and economic impacts – issues around the potential for unfair, unethical or simply discriminatory outcomes to arise from the design and use of AI are well rehearsed, but those impacts are magnified (or at least subject to greater scrutiny) when paired with existing regulatory concerns in the FS sector around fair and equal access to financial products and services, including the mooted ‘duty of care’ principles we explore below. Potential economic impacts cover everything from individual consumer product decisions to outcomes which affect overall market stability.
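By way of illustration, the kind of recurring data quality and representativeness checks described above could, in very simplified form, look something like the following Python sketch. The file name, column names, age bands, reference distribution and thresholds are all hypothetical rather than drawn from the report, and a real deployment would sit within a firm’s own data governance tooling rather than a standalone script.

```python
# Minimal sketch of recurring data quality checks for an AI training dataset.
# The file name, columns, thresholds and reference population are hypothetical.
import pandas as pd

REQUIRED_COLUMNS = ["age", "income", "postcode_region", "outcome"]
MAX_MISSING_RATE = 0.02          # completeness: tolerate at most 2% missing values
REFERENCE_AGE_MIX = {            # 'representativeness': expected share of each age band
    "18-29": 0.20, "30-49": 0.40, "50-64": 0.25, "65+": 0.15,
}

def completeness_report(df: pd.DataFrame) -> dict:
    """Flag columns whose missing-value rate exceeds the agreed tolerance."""
    rates = df[REQUIRED_COLUMNS].isna().mean()
    return {col: round(rate, 3) for col, rate in rates.items() if rate > MAX_MISSING_RATE}

def representativeness_report(df: pd.DataFrame) -> dict:
    """Compare the age-band mix in the data against a reference population."""
    bands = pd.cut(df["age"], bins=[17, 29, 49, 64, 200],
                   labels=list(REFERENCE_AGE_MIX))
    observed = bands.value_counts(normalize=True)
    return {band: round(float(observed.get(band, 0.0)) - expected, 3)
            for band, expected in REFERENCE_AGE_MIX.items()}

if __name__ == "__main__":
    df = pd.read_csv("training_data.csv")   # hypothetical dataset
    print("Completeness issues:", completeness_report(df))
    print("Representation gaps vs reference:", representativeness_report(df))
```

Checks of this kind would typically run both before a model is trained and periodically thereafter, feeding into the oversight and escalation processes discussed above.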
All of the above feed into the report’s recurrent theme of Transparency / Opacity, a shorthand for how well the design and operation of the AI system can be understood by internal and external stakeholders. This affects the full product lifecycle, from initial decision making around the design and deployment of solutions through to ongoing accountability for the outcomes they produce for consumers. Building these principles into governance regimes for the approval and operation of AI by FS firms is clearly desirable, but how easy it will be in practice for firms to translate relatively complex and dynamic concepts into operational policies and procedures remains to be seen.
AI and the Regulated Sector
Following on from the themes above are what many see as the challenges of incorporating AI into a regulated sector where there is very little room for error. Firms will also be keeping a close eye on any new regulation of AI itself and, of course, on the current hot topic of Operational Resilience. The report highlights some of the key concerns as follows:
- Performance – how well an AI system performs is going to be integral to its deployment and trustworthiness. Understanding the performance of an AI system can also be difficult where the system is provided by an outsourced third party.
- Explanations – explaining how an AI system arrived at a decision is another difficult topic to grapple with. As AI advances, its outputs will inevitably underpin decision-making processes, so being able to explain those decisions, often to affected third parties or even a regulator, will be crucial; a simple, generic illustration follows this list.
- Compliance – an issue that will be very high on any regulated firm’s list of priorities is ensuring that any tasks delegated to AI remain compliant with the current (and future) regulatory and legal framework. The EU is currently developing a dedicated framework for regulating the use of AI and the UK is expected to follow suit. As referenced above, the impacts of AI can be significantly amplified, so the exposure from a compliance violation could be equally significant.
- Responsiveness – responding to queries in a customer-facing environment within the requisite time frame can be challenging and, as with the explanations discussed above, these interactions need to be user friendly. The report gives scenarios, such as online chatbots and automated sales processes, which may prevent customers from submitting their desired request or query.
- Accountability – with mechanisms in place such as the FCA’s Senior Managers & Certification Regime (SMCR) and the revised code of conduct around Operational Resilience, there will need to be a specific individual who is accountable for the performance, compliance and responsiveness of any implemented AI system. We have worked with a number of industry working groups run by JWG, the “RegTech intelligence provider and community”, to put together job specifications and KPIs for senior managers who deal with AI/ML projects, products and services. The aim was to provide a framework for firms to identify which individual senior manager should oversee and bear responsibility for AI projects within the firm, to define the matters for which they should be responsible, and to set out how success should be measured.
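To make the "Explanations" point above a little more concrete, the sketch below shows one generic, model-agnostic way a firm might start to interrogate which inputs drive a model’s decisions, using permutation importance from scikit-learn. The model, feature names and data are placeholders rather than anything taken from the report, and real deployments would typically rely on a firm’s own, more sophisticated explainability tooling.

```python
# Hypothetical sketch: model-agnostic insight into which inputs drive a model's
# decisions, using permutation importance. Model, features and data are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for a credit-decision model trained on historical application data.
feature_names = ["income", "existing_debt", "tenure_months", "missed_payments", "age"]
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much predictive accuracy drops:
# the bigger the drop, the more the model's decisions depend on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

Output of this kind speaks to the overall behaviour of a model rather than any single decision; explaining an individual customer outcome would require per-decision techniques and, crucially, a translation of the numbers into language a customer or regulator can act on.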
What about Duty of Care?
From a separate regulatory perspective, that of the nascent formal duty of care, the report touches on all the topics that are high on the FCA’s (and PRA’s) current supervisory agenda, namely culture, conduct, governance, data privacy, staff and customer behaviours, customer relationships, and product development, promotion and sale. The same may be said of the recent FCA consultation on a new Duty of Care, the new rules on financial promotion, and the rules on operational resilience and outsourcing.
The potential “harms” identified by the report in using AI to provide financial services and products may be boiled down to one succinct notion: the use of models and data to define or identify one individual or group of individuals is anathema to the FCA’s priority that every consumer is treated as an individual, and that every individual’s desired outcomes, behavioural biases, informational asymmetry and negotiating position must demonstrably be taken into account.
Who within firms is responsible for this? The FCA believes that all senior managers should be responsible. This is confusing in the context of AI (which clearly plays a vital role in demonstrably fulfilling a firm’s duty of care to its customers).
Is the FCA telling the industry (and its advisers) that all senior managers should bear responsibility for AI because the “duty of care” cuts across all business lines? Our view is that this might well be the outcome, and that it defeats the purpose of the SMCR (to promote a culture of individual responsibility), perhaps intentionally and without regard to the tension between its demands in relation to a duty of care and those in relation to AI. It is more than possible that the FCA regards “good conduct and culture” as the responsibility of all staff and as encompassing the whole firm holistically. That word has certainly been used regularly of late by the judiciary and regulators alike.
However, that is not how firms are managed in practice, particularly large firms, which cannot realistically be run in that way.
On the flipside, the report identifies some benefits, including the ability to match customers to their needs and prevent misdirected marketing or mis-selling.
Given the FCA’s focus on firms’ duty of care, perhaps it would be better to consider the use of AI in these contexts as (i) a starting point for firms to match customers to products and services, and (ii) a means of helping firms fulfil their obligations to demonstrate compliance.
Yes, that is a benefit to firms, but it is hardly the benefit for which one may have hoped. In conclusion, it is safe to say that firms should be wary of using AI and that, before doing so, each senior manager should receive training on the (many) techniques available to understand the “how”, “what”, “why” and “where”.
What is next?
Whilst we can take some guidance from the European Commission’s approach to regulating AI, the roadmap in the UK is still to be determined. Of course, the regulations governing what firms must and must not do in their sectors are well known; the open question is whether and how AI plays a part in meeting them. Which brings us to ‘what’s next’.
Two key issues that the report explicitly acknowledges it does not address are:
(i) how the principles it outlines can be converted into “the concrete forms that AI Transparency should take”, and (ii) “the possible need for changes to regulatory requirements or modifications to the risk and control frameworks used by firms”.
We will continue to watch with interest how the report’s findings evolve from here into more specific regulatory rules or guidance.
Ostmann, Florian; Dorobantu, Cosmina. (2021, June 11). AI in Financial Services. Zenodo.