Biometrics and Emotion Analysis: facing up to the risks?
The ICO’s warning
Because of the range of personal data involved in emotion analysis, from facial movements to sentiment analysis, deploying such technology carries a greater risk of inaccuracy and systemic bias than the processing of traditional biometric data to verify an individual’s identity.
The ICO currently regards the technology involved in emotion analysis as immature: systems may fail to detect emotional cues reliably, increasing the risk of inaccuracy.
Voice biomarker technology – is emotion analysis already becoming a new standard?
A particularly high-risk example of underdeveloped biometrics is voice biomarker technology, which proponents claim is capable of analysing speech patterns in order to identify health problems.
This technology is being used in hospitals and care homes both to detect mental health conditions, such as anxiety and depression, and to identify physical health conditions, including asthma.
Whilst empirical support exists for the use of voice biomarkers as a screening tool, in 2021 The Lancet medical journal suggested that the clinical community would be unable to endorse the technology until data protection was at the heart of the development process.
The potential rewards of biometric AI
Consumer-facing industries, including the banking and retail sectors, have also been using emotion analysis and other biometric AI as part of the consumer experience.
Retailers are exploring biometric technology, including behavioural tracking and facial and voice recognition, for advertising and promotional targeting. These systems can identify and track shoppers in brick-and-mortar stores and learn their preferences.
Biometric technology is well suited to the banking industry, offering fraud prevention advantages for both the bank and the end user. Given how easily passwords can be compromised and the rise in identity theft, voice biometrics strengthens the authentication process and helps prevent credentials obtained from data breaches from being reused.
The EU Artificial Intelligence Act (“the AI Act”) – Europe leading the way in regulation
Companies developing and deploying artificial intelligence (“AI”) technologies for the European market will soon be required to navigate an additional regulatory framework for AI.
The proposed AI Act would ban AI systems posing an “unacceptable risk” and impose specific legal and governance obligations on organisations creating and deploying “high-risk” AI applications.
The Act is likely to impose transparency requirements on organisations implementing emotion recognition systems or biometric categorisation systems.
Regulatory enforcement powers are modelled on those under the GDPR but set to exceed them: the proposal provides for fines of up to €30m or 6% of global turnover for the most serious infringements, compared with the GDPR’s maximum of €20m or 4%.
The UK is also considering an AI legislative framework, although its plans are less well developed. The government aims to use existing sector-specific regulatory frameworks to deal with issues as they emerge, rather than legislating pre-emptively. Regulation of specific applications of AI in medical settings, for example, might develop from existing rules on medical devices.
Biometrics enforcement around the world
Problematic use of facial recognition technology has already attracted much controversy in various jurisdictions. In May 2022, the ICO fined US company Clearview AI £7.5m for breach of UK data protection laws after it created a global database of twenty billion facial images from publicly available facial data scraped from the internet and social media platforms.
The Greek and French data protection authorities followed suit in July and October 2022 respectively, each issuing the maximum permissible fine of €20m under the EU General Data Protection Regulation. These fines remain unpaid.
Examples of new legislation to address emerging technologies abound. In China, new regulations promoting AI development recently came into force in Shanghai. China’s State Council Information Office announced that the local rules will “explore grading management and sandbox supervision and a flexible supervision system”, with the aim of stimulating the innovation capacity of various entities. The regulations also establish an expert committee on AI-related ethics.
In the US there is no federal regulation as yet, but an increasing number of actions are being brought under new state laws. In October 2022 the Texas Attorney General’s Office filed a lawsuit against Google for collecting Texans’ biometric data, including voiceprints and facial geometry, without consent, in contravention of the Texas Capture or Use of Biometric Identifier Act.
However, in light of the FBI’s concerns about Chinese and Russian use of AI technology such as deepfakes for political ends, broad regulation of AI at a federal level may not be far away.
A joined-up approach?
International consensus on regulation may be emerging. At last month’s Global Privacy Assembly, which brings together data protection regulators from around the world, authorities from more than 120 countries set out principles for the use of facial recognition technology, including transparency and the protection of human rights.
What businesses developing and using biometrics should think about now
- carry out data protection impact assessments: these will be central to any processing of biometric data using AI
- take into account consent requirements and mechanisms for using biometric data
- assess the adequacy of technical and organisational security measures protecting biometric data
- consider how data protection principles such as fairness, transparency and data minimisation apply to the implementation of technologies using biometric data and machine learning
- check your privacy notices
- if you are proposing to use emotion analysis, be aware that you are at high risk of breaching data protection rules
- if you are developing new technologies, seek advice on the possible impact of AI legislation in European and UK markets