Corporations and governments are ferreting out and squirreling away voluminous, detailed and private information about each and every one of us – and they are not afraid to use it. Should we be worried?
Singapore’s Health Minister recently revealed that confidential details of 14,200 HIV-positive people, stolen from a government public health database, had been leaked online by a disgruntled US citizen who had been in a romantic relationship with a local doctor.
The ministry has come under fire for the breach and for only belatedly informing those affected, having discovered the theft almost three years ago in 2016. Such sensitive information, in the wrong hands, might affect individuals’ employment, insurance and social standing.
This comes hard on the heels of “the most serious breach of personal data” in Singapore’s history, when hackers infiltrated SingHealth’s servers. The “deliberate, targeted and well-planned cyber-attack” on the nation’s largest group of public healthcare institutions reaped a haul of 1.5 million patient records, including those of Prime Minister Lee Hsien Loong.
Yet these highly visible hacks are merely the tip of the iceberg, highlighting just how vulnerable we are in an age where so much personal information is stored online. A disruptive technological force, Big Data often works stealthily. Do we really know how it affects us?
Facebook has over 2 billion monthly active users: people who trustingly fill in their birthdays, upload and tag their pictures, like and comment on news articles and click “Shop Now” when a sponsored link showing an item on sale pops onto the feed.
This behaviour applies across social media platforms, which harvest the information from users’ Internet activity to send them targeted advertising or posts. Click on an article about Brexit and similar content will show up on your feed. Post about your upcoming holiday to New York and sites recommending Broadway musicals appear on your browser. No, they’re not reading your mind; that’s Big Data mining what you read. And write. And share. And consume.
These examples are relatively benign – to the security-conscious user, they will be a source of irritation; to the online shopper, a boon of convenience.
Social media and search engines profit from mining our data, making money from advertisers eager to push out commercials and content customised for each user’s preferences. The higher probability that you will buy the recommended items in turn maximises advertisers’ revenue. That’s why Google lets you use its search engine for free and Facebook can promise that it will never charge you for the use of its services. If you are not paying, you become the product on sale.
But the uses of Big Data have morphed into something more sinister.
In a series of admissions between March and April 2018, Facebook revealed that a quiz called “thisisyourdigitallife” published on its platform was used to harvest the personal data from 87 million profiles without users’ consent.
Analysts say that consulting firm Cambridge Analytica bought this large trove of data as a tool to influence political outcomes such as the 2018 Mexican general election and the 2016 Brexit referendum. Voters were identified based on demographic data such as location, age and gender, which could be used to predict voting patterns. They were then fed tailored campaign messages in a bid to shape their voting behaviour.
The data analytics firm had also used data harvested for psychological profiling of US voters, creating a powerful database that reportedly helped carry Mr Trump to victory in the 2016 presidential election.
Armed with petabytes of historical user data, social media platforms are able to push out more content onto readers’ newsfeeds based on their past preferences. This creates an echo chamber, giving users the often mistaken impression that many others in the cyberworld share their beliefs – and prejudices – and hence reinforcing those views. Troll farms based in Russia and Iran have reportedly been co-opting this mechanism, releasing millions of tweets and posts to influence voters’ opinions and their intention to vote, in order to disrupt elections.
Perhaps the most horrifying potential use of accumulated personal data is in a nation-wide (or, someday, global) social credit system.
China has been at the forefront of constructing such an apparatus in which its citizens are tracked through a wide network of surveillance tools. Wrongdoers get their scores docked for “bad behaviour” such as lighting up in non-smoking areas, sharing fake news or violating traffic laws. Do-gooders are rewarded for participating in “good activities” such as donating blood or volunteering for charity work.
The consequences of having a poor credit score can be frightening. These range from curtailed travel and reduced Internet speeds to bans from certain jobs and public shaming. The latter involves having your photograph, name and offence displayed on cinema screens and on billboard-size displays at airports and shopping malls.
If you default on your debt in some cities, friends and family who ring you may be redirected to a message calling you out as a trust-breaker wanted by the courts. By the end of 2018, Chinese courts had banned would-be travellers from buying flights 17.5 million times, and blacklisted citizens were prevented from buying train tickets 5.5 million times.
There are two immediate causes for concern.
First, governments can procure existing personal profiles from data mining companies. Such casual information – including Internet browsing history, sexuality or smoking habits – was disclosed naively, yet would suddenly constitute official data, forming the basis of individuals’ default credit scores.
Second, such personal data is no less susceptible to data breaches and hacking. What is stopping a hacker from altering or faking the data to manipulate the social credit scores of targeted individuals?
It may be too late to put the data genie back in the bottle, but we need to chain it.
Singapore has begun to take small steps to ensure that personal data is protected. For example, starting this September, businesses will no longer be allowed to collect National Registration Identity Card (NRIC) numbers or make copies of NRICs and passports.
But such measures are limited. Businesses need to be further regulated in how they use personal data and individuals should have the right to demand that any information they have provided to businesses and social media platforms be permanently wiped from databases.
Data collectors should also be put to a high standard of security – whether by imposing high monetary penalties or subjecting them to regular mandatory technology audits. The ability of governments to mine and make use of their citizens’ data should also be clearly prescribed, and the accuracy of such data ensured.
We’ve shared our Big Data freely with Big Brother. Time to let him know that it comes with strings attached.