Protecting children from harmful content online – a global perspective
Recent years have seen concerted action to control the collection and use of children’s personal data online. Well-known laws such as the GDPR in Europe and the UK, and COPPA in the US, mean we are saying goodbye to the bad old days when mainstream platforms would freely share children’s contact details with third parties.
But what has happened to attempts to regulate what children can view online and on their devices?
UK – The Online Safety Bill
This proposed law, hugely popular with many, is currently making its way through the UK Parliament. It seeks to introduce new rules for organisations that host user-generated content, such as social media platforms, search engines, online forums, and some online games. As well as being responsible for removing illegal material such as child sexual exploitation and terrorism content, regulated services must assess the risks of children accessing content that is legal but harmful, such as material promoting suicide and self-harm, and design their services to prevent such access. Platforms that fail to comply once the bill is implemented could face fines from Ofcom, the designated regulator, of up to 10% of their global annual revenue, and risk being blocked in the UK.
US – The Kids Online Safety Act
In 2022, the US Congress introduced the Kids Online Safety Act, which aims to impose new measures and requirements protecting children under the age of 17. The latest Senate draft, published in early December, focuses on limiting the harmful activities and negative content that children are exposed to online. It has to cover more ground than the UK and EU legislation (for example, controls on geolocation data), as the US lacks comprehensive federal privacy law as a backdrop. The central proposition is that in-scope providers “shall act in the best interests of a user that the platform knows or should know is a minor”. If passed, enforcement will fall to the Federal Trade Commission or state attorneys general.
EU – The Digital Services Act
Unlike the UK’s Online Safety Bill and the proposed US legislation, this is already law. The Digital Services Act requires very large online platforms and search engines to “take measures to protect minors from content that may impair their physical, mental or moral development and provide tools that enable conditional access to such information”. In addition, they must take “targeted measures to protect the rights of the child, including […] tools aimed at helping minors signal abuse or obtain support”. The Act starts applying in practice from February 2024, with providers of online platforms having to publish user numbers a year before that. Its duties are more vaguely defined than the UK’s, but fines are still substantial: up to 6% of global turnover.
So why hasn’t this happened already?
Given the unquestionable harm caused by children’s exposure to unfiltered online content over at least the past two decades, why is it taking so long for laws to catch up?
Partly this is due to the practical difficulty of establishing the real age of internet users without requiring the disclosure of personal data in ways that may do more harm than good.
These developments also raise difficulties common to all efforts to regulate adult content. Most obviously, there is the ever-contentious balancing act between personal freedom and state control. In addition, content moderation is expensive, and smaller platforms are likely to find it harder to comply with the rules. Some fear this will drive further consolidation among regulated players: exactly the outcome that parallel pro-competition legislation, such as the EU Digital Markets Act, is designed to prevent.
Thirdly, the earlier platforms have to catch offending content, the greater the risk of a challenge to end-to-end encryption, widely seen as a cornerstone of free speech online. The fear is that once “backdoors” into private content are established to enable platforms to comply with new rules, those same backdoors become available to governments and/or bad actors.
Finally, such regulation can put disproportionate power to make judgement calls on the harmfulness of content into the hands of regulators, who are usually not democratically elected.
Given the concerns raised by voters everywhere, attention is likely to turn increasingly to online content, and laws like these will become an ever more common feature of the online landscape.