The Bletchley Declaration - the enigma of global AI standards
by Shoosmiths LLP
Given the pace of change over recent months, it seems a cliché to say we’ve had an eventful couple of weeks for AI. But just a week on from the UK's AI Safety Summit at Bletchley Park, which followed hot on the heels of a sweeping executive order on AI from the Biden administration, it is worth reflecting on where the events of the last 10 days or so leave us in terms of prospective regulation of AI.
Starting in the US, the White House issued its Executive Order (EO) on safe, secure, and trustworthy artificial intelligence on 30 October. Leveraging the vast resources of US federal executive bodies, it spans many of the key risks associated with advanced AI systems, from requiring developers of powerful AI systems to share safety testing results with the US Government, through to initiatives to create standards, tools and tests for the development and deployment of AI systems. The scope of the order touches on everything from healthcare to privacy, education, employment and consumer rights.
The timing of the EO couldn't have been more pointed, coming as it did on the eve of the UK's AI Safety Summit, through which the UK government appeared to be positioning itself as a global hub for AI safety and standards. Whilst the wording of the EO referenced the gathering at Bletchley Park, it did so only in the context of broader US international collaboration.
Which takes us to the outcomes of the UK summit. The 'Bletchley Declaration' issued at the conclusion of the summit sets out a comprehensive summary of the key risks, challenges and opportunities presented by AI, particularly so-called 'frontier AI' (meaning advanced AI models, beyond existing known capabilities, which pose particular ethical, technical or even physical risks). Whether the declaration says anything particularly new, however, is another question. There are, though, some specific principles worth picking out:
- international approach - the declaration includes an explicit acceptance that the essence of AI systems means their risks are "best addressed through international cooperation" - acknowledging the ultimately borderless nature of the technology and the limited benefit of countries taking an entirely independent approach;
- focus areas - while largely generic, the declaration picks out specific concerns around frontier applications impacting cybersecurity, biotechnology and misinformation, reflecting broader concerns around how AI technology may be used by 'bad actors' to cause harm to citizens or subvert civil and democratic processes;
- room for divergence - within the overall framework of international collaboration, it should be no surprise to observers of the UK government's approach to AI to date to see an overt acknowledgement that "approaches may differ based on national circumstances and applicable legal frameworks" - no doubt reflecting nervousness in many countries of following too closely the EU's detailed regulatory approach through its impending AI Act;
- reliance on tech providers - following on from the theme of divergence, the declaration remains largely dependent on cooperation, transparency and accountability from private actors, without primary fall-back on 'hard' regulatory controls; and
- common standards - a stated intent to develop commonly accepted principles of transparency, standards and testing, as well as use of public sector capabilities to manage and monitor AI deployment.
Given the detailed initiatives coming out of the US, separate rules on AI recently developed by China, the EU’s AI Act, as well as recent suggestions that the G7 intends to develop a separate code of conduct for advanced AI systems, it remains unclear where that leaves the UK’s ambitions to be a global leader on the oversight of AI. The baton will now be handed on, first to South Korea and then to France, to host the next two AI Safety Summits at six-month intervals. It seems highly likely that the broader global regulatory landscape for AI will have continued its rapid evolution by the time those meetings conclude.