Online safety and data protection: where the two meet 

April 2024 - Shoosmiths LLP

The Online Safety Act 2023 (the "OSA") became law on 26 October 2023 and will affect over 100,000 organisations. Here, we focus specifically on the overlaps between the OSA and data protection legislation, outlining the synergies (and differences) in key areas, together with some practical tips.

This firm has been a consistent source of commentary and practical guidance on the OSA. In previous articles, we have introduced the Act and its key implications, and outlined the steps organisations should take now to comply.

Who’s in charge?

As the new online safety regulator, Ofcom is responsible for publishing codes of practice and for overseeing compliance with the OSA’s online safety duties of care. The penalties it can impose are similar in scale to those the Information Commissioner’s Office (‘ICO’) can impose for breaches of data protection laws.

Although the regulators have different remits, their jurisdictions already overlap in areas such as services likely to be accessed by children (the ICO’s Children’s Code having come fully into force in September 2021) and the specific regulation of video-sharing platforms (with the ICO’s updated plan and approach published in January 2024).

Cooperation between the regulators predates the OSA. The Digital Regulation Cooperation Forum, which brings together the UK’s key digital regulators (Ofcom, the ICO, the Competition and Markets Authority and the Financial Conduct Authority), has existed since 2020 to help them share ‘war stories’ and establish common expectations. Now, however, we see potential tensions between the regimes, particularly around content moderation, responses to different types of potential harm, and the treatment of specific user groups, such as vulnerable adults and children.

To consider how to navigate these tensions, the starting point is the joint statement issued by Ofcom and the ICO in late November 2022. The regulators stated that they were working together to ensure coherence, asking organisations to focus on safety and privacy simultaneously. They were also clear as to the limitation of any defence for those who get it wrong: “There can be no space for services to argue that they could not comply with new online safety requirements, because of data protection rules, or vice versa.”

The regulators have committed to the lofty objective of ensuring users feel safe online with their privacy protected, whilst ensuring providers are not unduly burdened and can continue to innovate. Unfortunately, realising this is challenging given the complexity of the two regimes. A few examples demonstrate the point.

Synergies vs difficulties

Synergies between the regimes arise when one considers that online services which fail to protect users’ personal data could leave users vulnerable to being identified or targeted, having their location tracked, or being sent harmful communications, all of which are regulated by the OSA. Protecting against one risk, such as personal data compromise, therefore also helps to mitigate others, such as online safety harms.

However, in meeting their OSA obligations, services will need to collect more information about their users and their behaviours, and to handle it differently from their existing data protection-compliant practices. Under the UK GDPR, services must only collect and process personal data which is proportionate and necessary, leading to a difficult balancing act between potential harms. The OSA often requires age assurance measures, for example to enable organisations to classify a user accurately as a child. In meeting this requirement, organisations will collect more personal (and possibly special category) data than previously, and they must ensure this data is held separately from data held for other features of the service, such as location or behavioural tracking. The temptation to make further use of the data for business purposes must be resisted.

Organisations will need to be rigorous in how they consider each risk at any one time, and in how their processes deliver what is required to an acceptable standard. Section 22(3) of the OSA states that: “When deciding on, and implementing, safety measures and policies, [there is] a duty to have particular regard to the importance of protecting users from a breach of any statutory provision or rule of law concerning privacy” (see also section 33(3)). Meeting this requirement will demand a close understanding of the law.

Analysis of the overlaps in the regimes at each stage of a user’s online journey may help organisations to identify the necessary action, particularly in the following areas.

Sign-in

Users are often asked to provide personal data to use an online service or parts of it. Indeed, under section 64(1) of the OSA, the largest regulated organisations must offer adult users the option to verify their identity. The identification data requested may range from simple personal data to, for some services, special category data such as biometric or health data.

Providers must take a strict view of the personal data they genuinely require for a user to use a service, challenging themselves on anything unnecessary. The more data collected, the more data there is to process compliantly.

Age verification

Many services also impose age restrictions on use and access. Most social media sites, for example, require users to be over 13. Where children are likely to access user-generated content, the OSA requires age verification measures to prevent them from encountering harmful content (section 12(4)). These measures must be “highly effective at correctly determining whether or not a particular user is a child” (section 12(6)).

It is initially for Ofcom, through its codes of practice (expected in the first quarter of 2025), to provide examples of suitable age verification measures, and then for providers (and their technical advisers) to come up with ways of clearing the respective hurdles. These may involve collecting identification data for access, since self-declarations are not sufficient (section 230(4) OSA), or using facial recognition technology.

The data protection regime governs how that identification data may be obtained. Under the UK GDPR, the data must be accurate (Article 5(1)(d)), gathered using the least intrusive means (Article 5(1)(a) and (c)), kept for no longer than is necessary (Article 5(1)(e)) and protected by appropriate technical and organisational measures (Article 5(1)(f)). Collecting only the data that is necessary, retaining it for no longer than necessary and ensuring it is suitably encrypted will all help.
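By way of illustration only, the sketch below shows how those principles might translate into practice. It is written in Python with hypothetical field names, a hypothetical retention period and an in-memory store, none of which appear in the OSA, the UK GDPR or the regulators’ guidance; it simply keeps the minimum fields needed for age assurance, encrypts them at rest and deletes anything held beyond the retention period.

```python
# Illustrative sketch only: hypothetical field names and retention period,
# not a statement of what the OSA or UK GDPR requires in any given case.
import json
from datetime import datetime, timedelta, timezone

from cryptography.fernet import Fernet  # pip install cryptography

RETENTION = timedelta(days=30)  # assumption: set by the organisation's own retention policy
ALLOWED_FIELDS = {"date_of_birth", "verification_method"}  # data minimisation: nothing else is kept


def store_verification_record(raw_submission: dict, key: bytes, store: dict, user_ref: str) -> None:
    """Keep only the fields needed for age assurance, encrypted at rest (Article 5(1)(c) and (f))."""
    minimised = {k: v for k, v in raw_submission.items() if k in ALLOWED_FIELDS}
    record = {"data": minimised, "collected_at": datetime.now(timezone.utc).isoformat()}
    store[user_ref] = Fernet(key).encrypt(json.dumps(record).encode())


def purge_expired(store: dict, key: bytes) -> None:
    """Delete records held longer than the retention period (storage limitation, Article 5(1)(e))."""
    now = datetime.now(timezone.utc)
    for user_ref, blob in list(store.items()):
        record = json.loads(Fernet(key).decrypt(blob))
        if now - datetime.fromisoformat(record["collected_at"]) > RETENTION:
            del store[user_ref]


# Usage: the unrequested "full_address" field is discarded on collection.
key = Fernet.generate_key()
store: dict = {}
store_verification_record(
    {"date_of_birth": "2008-04-01", "verification_method": "id_document", "full_address": "..."},
    key, store, user_ref="u-123",
)
purge_expired(store, key)
```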

Organisations should only use personal data for unconnected purposes where that use is compatible with the original purpose (Article 5(1)(b)). Again, the temptation to use data for other purposes must be resisted: “You must ask yourself whether you are likely to use any special category information (including inferences) to influence or support your activities in any way” (per the impact assessment accompanying the ICO’s content moderation and data protection guidance, discussed further below).

Content moderation

Under the OSA, service providers must meet broad duties of care, in particular to prevent users from encountering illegal content (see, for example, sections 10(2) and (3) of the OSA) and to protect children’s online safety, including managing risks of harm to children (for example, section 12(2)) and preventing children from encountering content that is harmful to them (for example, section 12(3)). They may, but are not (yet) required to, use proactive technology to do so, such as technology which tracks users’ content and behaviour (section 231).

Ofcom and the ICO have committed to updating existing guidance on content moderation “to reflect technological developments and Ofcom’s finalised online safety codes of practice” (ICO statement of 16 February 2024). This is because the ICO considers content moderation to result in a high risk to data subjects’ rights and freedoms. Pending the updated guidance, organisations have the ICO’s current guidance on content moderation and data protection published in February 2024 (the ‘Guidance’).
 
The risks of content moderation on which the ICO focuses in the Guidance primarily concern the potential for loss of personal data. The Guidance also notes that content moderation could result in personal data being processed or shared unexpectedly and unfairly. The harms cited by the ICO are similar to those with which we are now familiar: distress or psychological harm, and direct financial loss (such as lost income where a person relies on their content to generate income). But it also mentions the prospect of discrimination, an area as yet untouched by the senior courts in a data protection context and on which little guidance is available.

The Guidance highlights that the definition of ‘personal data’ is broad and includes online identifiers such as IP addresses. Organisations must think broadly when implementing content moderation measures, ensuring they are aware of all the types of personal data being processed. As we know, all processing requires a lawful basis, and processing of special category data also requires a specific condition to be met (Article 9 UK GDPR).

Other frequently used processes

Technical and organisational measures must also cover pseudonymised data (if used). Pseudonymisation occurs when, for example, an organisation uses a reference point (such as a customer reference number) rather than a name to monitor its customers. The measures the organisation must have in place include, for example, ensuring that the lists pairing reference numbers with customer names are kept separate from the monitoring data.
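A minimal sketch of that separation might look like the following (hypothetical Python structures; in practice the two stores would sit in separate systems with their own access controls, not two dictionaries in one process). Monitoring works only on the reference number, and the pairing of reference numbers with names is held apart.

```python
# Illustrative sketch only: names and structures are hypothetical.
import secrets

identity_store: dict[str, str] = {}          # reference number -> customer name (held separately, restricted access)
monitoring_store: dict[str, list[str]] = {}  # reference number -> activity log (pseudonymised working data)


def register_customer(name: str) -> str:
    """Issue a random reference number; the pairing with the name lives only in the identity store."""
    ref = f"CUST-{secrets.token_hex(4)}"
    identity_store[ref] = name
    monitoring_store[ref] = []
    return ref


def log_activity(ref: str, event: str) -> None:
    """Monitoring uses the reference number alone; no name is needed at this stage."""
    monitoring_store[ref].append(event)


def reidentify(ref: str) -> str:
    """Re-identification requires access to the separately held identity store."""
    return identity_store[ref]


# Usage
ref = register_customer("Jane Example")
log_activity(ref, "reported_content")
```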

Likewise, existing content moderation technologies rely partly on automation (and AI) and partly on human review of flagged issues (whether escalated by the automated system or by a user complaint). The automated functions typically check content against a database of known prohibited content and/or seek to classify content against an organisation’s policies.
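That two-stage pattern might be sketched as follows (illustrative Python only; the hash list, the scoring function and the thresholds are placeholders rather than any real moderation system). Content is first checked against hashes of known prohibited items, then scored against the organisation’s own policies, with uncertain cases queued for human review.

```python
# Illustrative sketch only: placeholder hash database, classifier and thresholds.
import hashlib
from collections import deque

KNOWN_PROHIBITED_HASHES = {"<sha256 of known prohibited item>"}  # assumption: supplied hash database
REVIEW_THRESHOLD = 0.5  # assumption: scores above this go to human review
BLOCK_THRESHOLD = 0.9   # assumption: scores above this are blocked automatically

human_review_queue: deque[tuple[str, float]] = deque()


def policy_score(text: str) -> float:
    """Placeholder for a classifier scoring content against the organisation's policies."""
    flagged_terms = {"example_prohibited_term"}
    return 1.0 if any(term in text.lower() for term in flagged_terms) else 0.0


def moderate(text: str) -> str:
    """Return 'block', 'review' or 'allow'; uncertain cases are escalated to a human."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    if digest in KNOWN_PROHIBITED_HASHES:
        return "block"                             # exact match against known prohibited content
    score = policy_score(text)
    if score >= BLOCK_THRESHOLD:
        return "block"
    if score >= REVIEW_THRESHOLD:
        human_review_queue.append((text, score))   # human review of flagged issues
        return "review"
    return "allow"


print(moderate("an ordinary post"))  # -> "allow"
```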

Automated content moderation and removal systems present their own problems. They must produce unbiased, consistent outputs. Organisations must determine whether their automated systems “make decisions which have legal or similarly significant effects on people” (Article 22 UK GDPR). If they do, organisations relying on those systems to make such decisions must ensure that an exception applies (for example, performance of a contract, safeguarding or consent (Article 22 UK GDPR)).

The familiar processing thresholds must also be met, and organisations must carry out data protection impact assessments prior to processing in order to mitigate risks. These assessments should account for matters such as a user volunteering more heavily regulated categories of personal data, such as special category data, without being asked or simply through their use of the service. Organisations should also consider content relating to criminal convictions or offences, which is likewise afforded extra protection under the UK GDPR.

Services must also think about the role of third parties. Many organisations, for example, outsource content moderation. If the purpose for processing the data changes from the original purpose, the processing may become unlawful, or the entity acting as controller could change (and with it the entity facing primary liability for non-compliance).

Sometimes, the regimes themselves require sharing of data with third parties, such as law enforcement agencies (see, for example, section 66 OSA). The ICO and Ofcom are consulting on what information such reports must include and how to ensure they are both OSA and data protection compliant.

Transparency, often linked to fairness, is another important principle in both data protection law and the OSA. Users must be told what personal information is used, how it is used, why, and with what effects. Certain services must also explain in their terms of service any proactive technology used to comply with the illegal content safety duties (for example, section 10(7) OSA).

Broader implications

OSA and data protection issues cannot be seen in isolation. Organisations will need processes in place to support employees charged with human review of potentially harmful material, from both a health and safety and an employment perspective. Those processes should be developed with data protection officers and with the managers responsible for OSA compliance. Third party legal advice can provide invaluable input and mitigate potential liabilities.

Whilst the sweeping changes are to be enforced through a host of new offences and (possibly) rights for users to bring civil proceedings, analogies already arise with other recent themes in data law. Monitoring users’ activity without consent, for example, has already attracted many consumer claims under the regime regulating the use of cookies, including some show-stopper class actions such as Lloyd v Google ([2021] UKSC 50).

The courts have recently taken a narrow view of these matters, clearly establishing the boundary between regulation and litigation. In Farley & Ors v Paymaster ([2024] EWHC 383 (KB)) (“Paymaster”), the High Court held that a ‘near miss’, where compromised data was not in fact seen by a third party, was not actionable unlawful processing. It was, rather, a regulatory matter for the ICO.

As highlighted by Ofcom’s first direction to regulated services to update their terms of service to inform users of their right to claim for breach of contract (section 72(1) OSA), the OSA and data protection regimes both rely on effective judicial remedies for individuals. Where such claims will fit within the courts’ recent approach, highlighted by Paymaster, remains unclear, and the regulators have provided almost no guidance on the point.

Next steps

Regulated organisations can take some actions now, like updating their terms of service to reflect users’ rights to bring civil claims for breach of contract. However, the practical implementation of the OSA largely depends on Ofcom’s upcoming codes of practice.

Many organisations are consulting advisers now and conducting broad reviews of existing products and services to establish whether they could be regulated and, if so, how. As part of this, advisers need a firm grip not only on the new OSA but also on existing data protection laws, so that organisations do not clear one hurdle only to fall at the next.


A version of this article was first published by Matthew MacLachlan in the Privacy & Data Protection Journal, Volume 24, Issue 5.