Shepherd and Wedderburn LLP
  May 9, 2019 - Scotland

Regulation of Online Falsehoods: ‘Fake News’ – the UK, Singapore and Europe

‘Fake news’ – information or news that is verifiably false or misleading – has become a major global concern.

As news and opinion pieces are increasingly pushed to readers via online and social media channels, the speed of their dissemination has accelerated exponentially, and with it the challenge of regulating such content.

These challenges, coupled with increasingly polarised electorates, have led to documented instances of manipulation of electoral processes, both here in the UK and in the United States, and to content being linked directly to the incitement of violence in countries such as India and Myanmar.

The demand for tougher regulatory intervention on fake news is compelling, whether such content is used as part of a deliberate attempt to influence people’s opinion (political or otherwise) or merely to increase profitability through reader clicks.

As a result, fake news has become a major threat to democracy, to political stability and to user confidence in the wider internet ecosystem. The rate at which false information can spread globally is now a major challenge for governments and their regulatory frameworks, and regulators worldwide are scrambling to implement measures to address the problem. However, the size and reach of the online platforms involved pose a challenge to policymakers and regulators.

What is clear is that we are at the start of a long journey, and regulators will come under increasing pressure from government to move more quickly to address these issues. However, the major challenges lie in resolving the material issues around censorship; the levelling of regulation in an increasingly converged digital world; policing the process fairly; and protecting the regulatory process from undue political influence by using effective agencies that can secure the trust of all stakeholders in the process.

This article addresses the attempts by three prominent jurisdictions in this space – the United Kingdom (UK), Singapore and the European Union (EU) – to tackle these issues, and outlines what may happen next.

United Kingdom

A slew of recent examples has been widely reported in the UK media, underlining how widely fake news or false information has spread, and the impact it has had. One prominent example arose around the EU referendum, when online channels were used to serve up targeted, fake information designed specifically to undermine institutions and incite anti-EU sentiment. This was very similar to the allegations against Facebook with respect to its role in spreading fake news during Donald Trump’s successful US Presidential campaign.

What is being done?

While UK regulators exist in this space – Ofcom regulates broadcast media, while the Independent Press Standards Organisation (IPSO) and Impress regulate newspapers (online and offline) – none has yet taken significant action to tackle fake news.

The Cairncross Review Report (2019) investigated the UK news market and the role of digital search engines and online platforms within it. Amongst other things, it recommended that online platforms’ efforts to improve their users’ news experience be placed under regulatory supervision, the aim being to ensure oversight of the steps online platforms are taking to improve both the quality and the accuracy of the news content they serve. It also recommended that:

  • Ofcom should explore the market impact of BBC News, and whether it has inappropriately stepped into areas better served by commercial news providers;
  • a new independent institute be created to ensure the future provision of public interest news;
  • a new Innovation Fund be launched with the aim of improving the supply of public interest news; and
  • a media literacy strategy be developed alongside Ofcom, the media industries and stakeholders.

In addition to the above, Parliament’s Digital, Culture, Media and Sport (DCMS) Committee has also been investigating issues around fake news – in particular, what it is, its impact, and the responsibilities of social media platforms and search engines in curbing it.

The Committee’s initial report, published in July 2018, recommended that the term ‘fake news’ should not be used and that a separate working group be created to target issues around the spread of misinformation. It is worth noting that, as a result of this report, the term ‘fake news’ has now been banned from use in policy documents or official papers as it is “a poorly-defined and misleading term that conflates a variety of false information, from genuine error through to foreign interference in democratic processes”.

On 18 February 2019, the DCMS Committee issued its final report, calling for:

  • a compulsory Code of Ethics for tech companies, overseen by an independent regulator – with clear legal liabilities to be established for tech companies if they do not act against harmful or illegal content on their sites;
  • the regulator to be given powers to launch legal action against companies breaching the compulsory Code of Ethics;
  • government reform of the current electoral communications laws and rules on overseas involvement in UK elections; and
  • social media companies to be obliged to take down known sources of harmful content, including proven sources of disinformation.

The report also noted that Facebook had intentionally and knowingly violated both data privacy and anti-competition laws. It suggested that the Competition and Markets Authority (CMA) should investigate whether “Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”.

The UK Government has also established the National Security Communications Unit, which is tasked with “combating disinformation by state actors and others”. The decision to set up this unit came at a time when an investigation was being conducted into Russia’s reported practice of setting up multiple social media accounts to spread falsehoods about the EU referendum.

The Digital Competition Expert Panel also recently (March 2019) published a report, ‘Unlocking digital competition’, making recommendations for the changes required to the UK’s competition law framework in light of the economic challenges posed by digital markets. The report discusses the emergence of some dominant tech firms, which has raised concerns about higher prices for consumers, reduced choice, lower quality, impeded innovation and barriers to new entrants. Its recommendations include, amongst other things:

  • establishing a new ‘digital markets unit’ tasked with giving people more control over their data by using open standards;
  • requiring large companies to share key data sets with start-ups (while safeguarding personal information); and
  • drawing up a code of conduct, which would set out acceptable behaviour for tech companies in their relationships with users.

Facebook has now launched a fact-checking initiative and employed Full Fact, a UK-based charity, to review stories, images and videos and rate them based on accuracy. Full Fact explained that users can flag up content they think may be false, and its team will rate the stories as true, false or a mixture of accurate and inaccurate content. It has also launched a toolkit to help users identify false information.

As is clear from the above, there is growing public recognition that the big tech companies are failing in their obligations to users when it comes to dealing with the spread of misinformation and other illegal content, but it remains to be seen whether the current UK Government will introduce new laws and regulations to combat the problem. It recently (April 2019) published its Online Harms White Paper, setting out its plans for increasing online safety, including proposals to establish in law a ‘new duty of care’ for large social media companies that would hold them to account for tackling online harms, ranging from illegal activity to content that is deemed harmful but is not necessarily illegal. A consultation has been launched to gather views on these plans; it closes on 1 July 2019.

Singapore

Singapore is at the forefront of developing legislation to curb the excesses of external online influence. According to a 2018 online survey carried out by Ipsos, a market research firm in Singapore, around 8 in 10 Singaporean respondents (79%) said they were confident in their ability to distinguish between fake and accurate news. But when put to the test, around 91% mistakenly identified at least one out of five fake headlines as genuine. In September 2018, the Select Committee on Deliberate Online Falsehoods offered 22 recommendations aimed at helping to prevent the spread of “deliberate online falsehoods”, noting that there is no single “silver bullet” and that a “multi-pronged” approach would be necessary to tackle these issues. The Select Committee’s report found “four overlapping dimensions of society” affected by online falsehoods:

  1. national security;
  2. democratic institutions and free speech;
  3. individuals; and
  4. businesses.

With elections looming in Singapore, it is likely the government will take steps to combat any risks from external influence.

Measures suggested by the committee included:

  • new legislation giving the government the power to swiftly disrupt the spread and influence of online falsehoods (by limiting or blocking their exposure);
  • new powers to cut off funding to those behind the creation and dissemination of online falsehoods;
  • introducing appropriate criminal sanctions for violations;
  • government regulation of tech companies, with powers to legally compel them to adopt appropriate measures;
  • public institutions providing timely information to the public in response to misinformation; and
  • media organisations considering forming a coalition that debunks falsehoods swiftly and credibly.

In light of the above recommendations, the government has now tabled a new anti-fake news bill. The bill gives powers to the authorities to force the correction of online content when there is a “false statement of fact” that jeopardises the “public interest”. Any minister could order the issuance of a “correction direction”. It also authorises the minister to order internet service providers to post statements indicating that content is false, or to disable access to certain content in Singapore. It would also become a crime to publish someone else’s personal information with the intention to harass, threaten or facilitate violence against them. According to Bloomberg, Facebook, Google and other tech giants have already raised concerns about the new bill. Amongst others, the Asian Forum for Human Rights and Development and the global civil society alliance CIVICUS have also raised serious concerns about the bill, including curbs on freedom of speech, the silencing of dissent against the government, and potential inconsistencies with human rights conventions.

It will be interesting to see how the government reacts to these criticisms, and what powers it ultimately grants under the legislation. The present trajectory suggests a wide-ranging law that grants the authorities material discretion in its interpretation.

European Union

The European Union (EU) has introduced a number of initiatives to tackle the spread of online disinformation and fake news. In 2017, the European Commission (EC) established a High-Level Expert Group (HLEG) on fake news, which published its first report in March 2018. Disinformation, as defined in this report, includes all forms of false, inaccurate, or misleading information designed, presented and promoted intentionally to cause public harm or for profit.

The report made a number of important recommendations, including:

  • abandoning the term “fake news”, which was deemed both inadequate to capture the complexity of the situation and a source of confusion in the way researchers discuss these issues, the way they are reported on in the media, and the way they are discussed by policy-makers;
  • public authorities at all EU levels sharing data promptly and efficiently;
  • efforts to counter interference in elections;
  • a commitment by tech platforms to share data;
  • investment in media and information literacy and comprehensive evaluations of these efforts; and
  • cross-border research into the scale and impact of disinformation.

In April 2018, the EC outlined an approach and self-regulatory tools to tackle online disinformation, including an EU-wide Code of Practice against disinformation.

In October 2018, the Code of Practice was signed by Facebook, Google, Twitter and Mozilla, by trade associations representing online platforms, and by the advertising industry. The purpose of the Code, which will be reviewed annually, is to identify actions that could be put in place to address the challenges relating to fake news.

The Code includes an annex identifying best practices that signatories will have to adopt to implement its commitments. These commitments include:

  • removing accounts and websites that consistently spread misinformation;
  • increasing the use of fact-checking organisations;
  • policing the placement of adverts; and
  • establishing a clear policy on the misuse of automated systems, such as ‘bots’, used to spread false and misleading information.

Also in 2018, the European Council invited the EC, in cooperation with member states, to produce an action plan with specific proposals to deal with the challenges around disinformation. In December 2018, the EC issued a Joint Communication/Action Plan against Disinformation, focusing on four key areas:

  1. Improved detection: appointing strategic communication task forces and reinforcing the EU hybrid fusion cell with significant additional specialised staff and data analysis tools.
  2. Coordinated response: a dedicated Rapid Alert System, to facilitate the sharing of data and assessments of disinformation campaigns and to provide alerts on disinformation threats in real time.
  3. Online platforms and industry: the signatories to the Code of Practice should swiftly and effectively implement the commitments made under it, focusing on actions that are urgent for the European elections in 2019. This includes, in particular, ensuring transparency of political advertising, stepping up efforts to close active fake accounts, labelling non-human interactions (messages spread automatically by ‘bots’) and cooperating with fact-checkers and academic researchers to detect disinformation campaigns and make fact-checked content more visible and widespread.
  4. Raising awareness and empowering citizens through the promotion of media literacy.

Across Europe, different countries are taking action at a domestic level.

France enacted legislation in November 2018 to combat the online dissemination of false information during election periods. The legislation gives the relevant authorities the power to remove false content spread through social media and allows them to block the websites that publish such information. It is considered Western Europe’s first official attempt to ban false material. Authorities may also demand greater transparency on the financing behind sponsored content on websites in the three months prior to election periods. It is worth noting that the opposition has challenged the law before the Constitutional Council on the basis that the measure is disproportionate.

In Germany, the Network Enforcement Act came into force on 1 January 2018, requiring social network providers, such as Facebook, Twitter and YouTube, to remove “manifestly” unlawful posts within 24 hours or risk fines of up to €50 million. The legislation has since been revised following criticism that too much content was being censored. The changes allow incorrectly deleted content to be restored and require social media companies to set up independent bodies to review posts.

Conclusion

It is clear that the problems caused by the spread of online falsehoods are only set to increase, that there will be growing calls for regulation in this area, and that action will undoubtedly be taken. However, critics of increased regulation warn of the risk that governments could use such laws to suppress free speech and increase their ability to monitor their citizens.

In addition, governments face other pressures when legislating, given the size of many online platforms, the economic power they wield, and their ability to direct and withdraw inward investment. Like discerning misinformation itself, the choices around how to regulate and legislate are not easy, but measures will need to be taken. Critical will be balancing the need for fair and effective review and oversight of the discretionary powers required against the need for speed and effective intervention. While the models and the problems may be new, the legal challenges around due process and effective redress are not.