What was the Facebook Scandal Really All About?

You don’t have to be a Facebook user to know about the data protection scandal that finally broke in 2018; as the biggest data scandal in Facebook’s history, it certainly made the news.

The sheer amount of data held digitally around the world makes it more attractive to hackers than ever, and the size of Facebook’s user base meant that any weakness in its security systems was a disaster waiting to happen.

But what went wrong? What’s been done to prevent a repeat, and how can social media users protect their data?

How Did It Happen?

So… what happened with Cambridge Analytica? To put it simply, users gave consent for their data to be used without realising how it would be used. Data protection legislation at the time permitted an opt-out model: to deny Facebook, or any other app or organisation, permission to access their data, users had to take action themselves. This often took the form of a check box that users had to tick to opt out of having their data shared with interested third parties.
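As a rough sketch of how that opt-out logic behaves (hypothetical form fields, not Facebook’s actual code), note that doing nothing counts as consent:

```typescript
// Hypothetical sketch of the old opt-out consent model.
// The user must actively find and tick a box to *refuse* sharing;
// silence is treated as permission.
interface OptOutForm {
  denySharingTicked: boolean; // unticked by default
}

function mayShareWithThirdParties(form: OptOutForm): boolean {
  // Consent is assumed unless the user ticked the opt-out box.
  return !form.denySharingTicked;
}

// A user who never touched the form counts as consenting:
console.log(mayShareWithThirdParties({ denySharingTicked: false })); // true
```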

This is a key feature of Facebook’s business model: the platform is free, so what’s the product? Actually, it’s you. Facebook has the right to monetise any information you consent to give it, such as your age, location, contacts, workplace and gender.

People don’t tend to pay much heed to check boxes denying permissions, and Facebook knows this, so it has always been adept at making the opt-out seem difficult or unimportant. It also knows that people tend to overshare on social media.

With enough data, it’s very easy to paint a picture of an individual: their spending habits, their interests, their socio-economic status and their political leanings.

It’s the latter that was of interest to the SCL Group, an organisation specialising in behavioural analytics. In 2013, an offshoot of this organisation called Cambridge Analytica engaged a Cambridge academic, Dr Aleksandr Kogan, to develop an app. This app, This Is Your Digital Life, launched a paid survey on Facebook.

So far, so ethical. However, the app also relied on consent that was far from informed to get users to agree to share their personal details. Not only did Cambridge Analytica access this data, but Facebook also allowed it to access the personal data of all the participants’ Facebook friends.

This created a massive web of data mining, with Cambridge Analytica obtaining the details of some 87 million Facebook users. 
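The arithmetic behind that fan-out is worth spelling out. A back-of-the-envelope sketch (the figure of roughly 270,000 survey participants is the widely reported one, not stated above):

```typescript
// Back-of-the-envelope fan-out: each survey taker also exposed their friends.
const participants = 270_000;        // widely reported app user count (assumption)
const profilesObtained = 87_000_000; // the figure quoted above

// Implied average number of friends harvested per participant:
const impliedFriendsEach = profilesObtained / participants;
console.log(Math.round(impliedFriendsEach)); // ≈ 322
```

In other words, a few hundred thousand paid survey takers were enough to expose tens of millions of people who never used the app at all.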

The Media Fallout

This happened in 2014, but the news only started to break in 2017, when articles in the Observer by Carole Cadwalladr began reporting claims from Cambridge Analytica insiders. The story fully broke on 17 March 2018, when the Guardian and the Observer published the account of Christopher Wylie, an ex-Cambridge Analytica employee turned whistleblower.

The cat was finally out of the bag, but far too late for damage limitation. Political lobbies had already used the ill-gotten data to analyse users’ political leanings and manipulate their news feeds, very possibly influencing both the 2016 US presidential election and the UK’s Brexit vote.

In 2019, Facebook was slapped with a record-breaking $5 billion fine in the US by the Federal Trade Commission. However, many critics considered this inadequate, and probably with good reason: Facebook’s global turnover for the previous year alone was some $56 billion.

In the UK, the 2018 fine was even smaller: a mere £500,000, the maximum the Information Commissioner could hand out under the Data Protection Act 1998, which still applied because the breach predated newer legislation. For a company the size of Facebook, it was no deterrent.

The Rise of the GDPR

However, in Europe, the wheels were already in motion to give consumers better protection for their personal data: the 1995 European Data Protection Directive was superseded by the General Data Protection Regulation (GDPR), which came into force in May 2018.

Much of the legislation carried over from the previous directive: data still needs to be kept accurate and up to date; it needs to be minimal; it needs to be secure; its use needs to be transparent; it needs to be kept only as long as necessary; it needs to be used only for the stated purposes; and the data controller remains accountable for its use.
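Those carried-over principles read naturally as a checklist. A minimal sketch (field names are my own informal shorthand, not statutory language):

```typescript
// Sketch of the carried-over data protection principles as a checklist.
// Field names are informal shorthand, not legal terms of art.
interface ProcessingRecord {
  accurateAndUpToDate: boolean; // data is kept accurate and current
  minimal: boolean;             // no more data than the purpose requires
  secure: boolean;              // appropriate safeguards are in place
  transparentUse: boolean;      // the subject knows how the data is used
  retentionLimited: boolean;    // kept only as long as necessary
  purposeLimited: boolean;      // used only for the stated purposes
}

// Accountability: the controller must be able to show every box is ticked.
function compliant(record: ProcessingRecord): boolean {
  return Object.values(record).every(Boolean);
}
```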

The major changes are twofold:

Firstly, the fine for a data breach can be anything up to €20 million or, in the case of a business, up to 4 percent of global annual turnover, whichever is the larger (a worked example follows below).

Secondly, data processors have to seek the explicit permission of the data subject before obtaining their data; a customer or user merely failing to tick a box that denies permission is no longer good enough (again, sketched below).
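To put the first change in numbers, here is a minimal sketch of the ‘whichever is the larger’ rule, using the $56 billion turnover figure quoted earlier purely for scale (currencies are mixed here for illustration only):

```typescript
// GDPR maximum fine: the larger of €20 million or 4% of global annual turnover.
function maxGdprFine(globalAnnualTurnover: number): number {
  const FLAT_CAP = 20_000_000;                      // €20 million
  const TURNOVER_CAP = 0.04 * globalAnnualTurnover; // 4 percent of turnover
  return Math.max(FLAT_CAP, TURNOVER_CAP);
}

// At Facebook-scale turnover (~56 billion), the 4 percent branch dominates:
console.log(maxGdprFine(56_000_000_000)); // 2240000000, i.e. about 2.24 billion
```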
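And for the second change, a minimal sketch of explicit, opt-in consent (hypothetical field names, not any real API). Compared with the opt-out sketch earlier, the default has flipped from ‘yes’ to ‘no’:

```typescript
// Hypothetical GDPR-style opt-in: consent must be an explicit, affirmative
// action, given for a stated purpose; an unticked box means "no".
interface OptInForm {
  consentTicked: boolean; // unticked by default
  purposes: string[];     // the stated purposes the user agreed to
}

function mayProcessData(form: OptInForm, purpose: string): boolean {
  // No affirmative tick, or a purpose the user never agreed to: no processing.
  return form.consentTicked && form.purposes.includes(purpose);
}

// A user who never touched the form can no longer be counted as consenting:
console.log(mayProcessData({ consentTicked: false, purposes: [] }, "ads")); // false
```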

A New Leaf for Facebook?

So has this made any difference to Facebook and how they operate? Well, some, but possibly not enough. It is true that they agreed to apply the principles of the GDPR to all their operations, even those outside the EU, but some things haven’t changed.

Even as late as April 2018, after the Cambridge Analytica scandal broke, Facebook was still falling short on genuinely informed consent.

To set up an account, Facebook asks for freely given consent to collect users’ data, but the format of the process railroads users towards granting it: ‘accept’ buttons are more prominent than the controls for declining permission, and it’s difficult to go back and amend permissions.

It isn’t just sharp practice, either: Facebook’s security is questionable too. In 2019, another breach saw the phone numbers of 419 million users exposed in online databases that lacked password protection.

Facebook may be fun, but it’s in the business of making money out of its users’ data. It evidently regards fines for data breaches as an occupational hazard, and it has the size to shrug them off. It’s down to you to decide what happens to your data.