Regulation of Online Falsehoods: ‘Fake News’ – the UK, Singapore and Europe
‘Fake news’ – information or news that is verifiably false or misleading – has become a major global concern.
As news and opinion pieces are increasingly pushed to readers via online and social media channels, the speed of their dissemination has accelerated exponentially, as have the challenges around regulating news and opinion.
These challenges, coupled with increasingly polarised electorates, have led to documented instances of manipulation of electoral processes, both here in the UK and in the United States, and to content being linked directly to the incitement of violence in countries such as India and Myanmar.
The demand for tougher regulatory intervention on fake news is compelling, whether such content is used as part of a deliberate attempt to influence people’s opinion (political or otherwise) or merely to increase profitability through reader clicks.
As a result, fake news has become a major threat to democracy and political stability and to user confidence in the wider internet eco-system. The rate at which false information can be spread globally is now a major challenge for governments and their regulatory frameworks, and regulators worldwide are scrambling to implement measures to address the problem. However, the size and reach of such online platforms pose a challenge to policymakers and regulators.
What is clear is that we are at the start of a long journey, and regulators will come under increasing pressure from government to move more quickly to address these issues. However, the major challenges lie in resolving the material issues around censorship; the levelling of regulation in an increasingly converged digital world; policing the process fairly; and protecting the regulatory process from undue political influence by using effective agencies that can secure the trust of all stakeholders in the process.
This article addresses the attempts by three prominent jurisdictions in this space – the United Kingdom (UK), Singapore, and the European Union (EU) – to tackle such issues and provide an outline for what happens next.
A slew of recent examples has been widely reported in the UK media, underlining how widely fake news or false information has spread and the impact it has had. One prominent example arose around the EU referendum, when online channels were used to serve up targeted, fake information designed specifically to undermine institutions and incite anti-EU sentiment. This closely echoed the allegations against Facebook over its role in spreading fake news during Donald Trump’s successful US Presidential campaign.
What is being done?
While UK regulators like Ofcom regulate broadcast media and the Independent Press Standards Organisation (IPSO) and Impress regulate newspapers (online and offline), these regulators have not taken any significant action to tackle fake news.
The Cairncross Review Report (2019) investigated the UK news market and the role played in it by digital intermediaries such as search engines and online platforms. Amongst other things, it recommended that the online platforms’ efforts to improve their users’ news experience should be placed under regulatory supervision: the aim being to ensure oversight of the steps online platforms are taking to improve both the quality and the accuracy of the news content they serve. It also recommended that:
In addition to the above, Parliament’s Digital, Culture, Media and Sport (DCMS) Committee has also been investigating issues around fake news, particularly, what it is, its impact and the responsibilities of social media platforms and search engines in curbing it.
The Committee’s initial report, published in July 2018, recommended that the term ‘fake news’ should not be used and that a separate working group should be created to target issues around the spreading of misinformation. It is worth noting that, as a result of this report, the term ‘fake news’ has now been banned from use in policy documents or official papers as it is “a poorly-defined and misleading term that conflates a variety of false information, from genuine error through to foreign interference in democratic processes”.
On 18 February 2019, the DCMS Committee issued its final report, calling for:
The report also notes that Facebook had intentionally and knowingly violated both data privacy and anti-competition laws. It suggested that the Competition and Markets Authority (CMA) should investigate whether “Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”.
The UK Government has also established the National Security Communications Unit, which is tasked with “combating disinformation by state actors and others”. The decision to set up this unit came at the time an investigation was being conducted into Russia’s reported practice of setting up multiple social media accounts to spread falsehoods about the EU referendum.
The Digital Competition Expert Panel has also recently (March 2019) published a report on ‘Unlocking digital competition’, making recommendations for the changes required to the UK’s competition law framework in light of current economic challenges posed by digital markets. The report discusses the emergence of some dominant tech firms, which has raised concerns about higher prices for consumers, reduction in choice, impact on quality, impeding innovation and preventing new entrants. Its recommendations include, amongst other things:
Facebook has now launched a fact-checking initiative and employed Full Fact, a UK-based charity, to review stories, images and videos and rate them based on accuracy. Full Fact explained that users can flag up content they think may be false, and its team will rate the stories as true, false or a mixture of accurate and inaccurate content. It has also launched a toolkit to help users identify false information.
As is clear from the above, while there is growing public recognition that the big tech companies are failing in their obligations towards their users in dealing with the spread of misinformation and other illegal content, it remains to be seen whether new laws and regulations to combat the problem will be introduced by the current UK Government. It recently (April 2019) published its Online Harms White Paper, setting out its plans for increasing online safety and proposals to establish in law a ‘new duty of care’ for large social media companies, which would hold them to account for tackling online harms, ranging from illegal activity to content that is deemed harmful but is not necessarily illegal. A consultation has been launched to gather views on these plans. This consultation closes on 1 July 2019.
Singapore is at the forefront of developing legislation to curb the excesses presented around external online influence. According to a 2018 online survey carried out by Ipsos, a marketing research entity in Singapore, 8 in 10 Singaporean respondents (79%) said that they were confident in their ability to distinguish between fake and accurate news. But when put to the test, around 91% mistakenly identified at least one out of five fake headlines as genuine. In September 2018, the Select Committee on Deliberate Online Falsehoods offered 22 recommendations aimed at helping to prevent the spread of “deliberate online falsehoods”, noting that there is no single “silver bullet” and that a “multi-pronged” approach would be necessary to tackle these issues. The Select Committee’s report found “four overlapping dimensions of society” affected by online falsehoods:
With elections looming in Singapore, it is likely the government will take steps to combat any risks from external influence.
Measures suggested by the committee included:
In light of the above recommendations, the government has now tabled a new anti-fake news bill. The bill gives powers to authorities to force the correction of online content when there is a “false statement of fact” that jeopardises the “public interest”. Any minister could order the issuance of a “correction direction”. It also authorises the minister to order internet service providers to post statements indicating that content is false, or to disable access to certain content in Singapore. It would also be a crime to publish someone else’s personal information with the intention to harass, threaten or facilitate violence against them. According to Bloomberg, Facebook, Google and other tech giants have already raised concerns about the new bill. Amongst others, the Asian Forum for Human Rights and Development and the global civil society alliance CIVICUS have also raised serious concerns about the bill, including curbs on freedom of speech, silencing of dissent against government and potential inconsistencies with human rights conventions.
It will be interesting to see how the government reacts to these criticisms, and what powers it ultimately grants under the legislation. The present trajectory suggests the imposition of a wide-ranging law that grants the authorities material discretion on interpretation.
The European Union (EU) has introduced a number of initiatives to tackle the spread of online disinformation and fake news. In 2017, the European Commission (EC) established a High-Level Expert Group (HLEG) on fake news, which published its first report in March 2018. Disinformation, as defined in this report, includes all forms of false, inaccurate, or misleading information designed, presented and promoted intentionally to cause public harm or for profit.
The report made a number of important recommendations, including:
In April 2018, the EC outlined an approach and self-regulatory tools to tackle online disinformation, including an EU-wide Code of Practice against disinformation.
Last October, the Code of Practice was signed by Facebook, Google, Twitter and Mozilla, trade associations representing online platforms, and the advertising industry. The purpose of the Code, which will be reviewed annually, is to identify actions that could be put in place to address the challenges relating to fake news.
The Code includes an annex identifying best practices that signatories will have to adopt to implement its commitments. These commitments include:
Also in 2018, the European Council invited the EC, in cooperation with member states, to produce an action plan with specific proposals to deal with the challenges around disinformation. In December 2018, the EC issued a Joint Communication/Action Plan against Disinformation, focusing on four key areas:
Across Europe, different countries are taking action at a domestic level.
France enacted legislation in November 2018 to combat the dissemination of false information on the internet during election periods. The legislation gives the relevant authorities the power to remove false content spread through social media and allows them to block the websites that publish such information. It is considered Western Europe’s first official attempt to ban false material. Authorities may also demand greater transparency on the financing behind sponsored content on websites in the three months prior to election periods. It is worth noting that the opposition party is challenging the law before the Constitutional Court on the basis that the measure is disproportionate.
In Germany, the Network Enforcement Act came into force on 1 January 2018, requiring social network providers, such as Facebook, Twitter and YouTube, to remove “manifestly” unlawful posts within 24 hours or risk fines of up to €50 million. The legislation on this has since been revised following criticism that too much content was being censored. The changes allow incorrectly deleted content to be restored, and require social media companies to set up independent bodies to review posts.
It is clear that the problems caused by the spreading of online falsehoods are only set to increase, that there will be growing calls for regulation in this area, and that action is undoubtedly going to be taken. However, critics of increased regulation warn that governments could use such laws to suppress free speech and to increase their ability to monitor their citizens.
In addition, there can be other pressures on governments in terms of legislating, given the size of many online platforms, the economic power they wield, and their ability to direct and withdraw inward investment. Like discerning misinformation itself, the choices around how to regulate and legislate are not easy, but measures will need to be taken. Critical will be balancing the need for fair and effective review and oversight of the discretionary powers required against the need for speed and effective intervention. While the models and the problems may be new, the legal challenges around due process and effective redress are not.