With great power comes great regulation

By Areeq Chowdhury.

The Cambridge Analytica debacle appears to be the straw that will break the camel’s back. The idea of data being unwittingly taken from users for the purpose of targeted advertising is nothing new, but the idea that it’s been done to contribute to the victory of Donald Trump, or perhaps Brexit, has riled up the establishment. The allegedly lax attitude of social media giants towards data protection has now led to serious conversations about regulation and accountability.

Having come under fire for corporate tax avoidance, platforming extremist content, and algorithms that boost the spread of disinformation, the likes of Facebook, Twitter, and Google are set for a bout of government regulation as politicians and the public catch up to what has been going on in Silicon Valley. So what has been happening, and is it time for regulation here in the UK and elsewhere across the globe?

In exchange for the ‘free’ use of social media networks, consumers accept advertising from third parties on the platform, whether it’s Facebook, Twitter, YouTube, or Google. Instead of obtaining financial revenue directly from users, the platforms make money by selling advertising space. Unlike traditional outlets, some of these social media platforms have rich data, willingly and publicly provided by users, that can be used to perfectly target products or campaigns. The controversy is about the protections and permissions over that data, who is using it, and how it’s being used.

Mark Zuckerberg provided testimony to the US Congress following concerns and allegations around data misuse by Cambridge Analytica.

In the recent high-profile case, an academic who created a personality quiz app used by 800,000 users sold the data he had collected, with those users' permission, to a third party: Cambridge Analytica. The quiz app also collected the public information of the 800,000 users' friends, which is said to total 87 million accounts. It is alleged that Cambridge Analytica used this information to build tools that could target messages at voters in the 2016 US Presidential election, appealing to specific emotions to encourage them to vote, or not vote, for a candidate. Data originally provided for a quiz, therefore, would have been used to elect candidates.

Facebook has come under particular scrutiny, with Mark Zuckerberg recently testifying in front of the US Congress. The company has committed to putting in place new controls to prevent a breach of trust like this happening again, is funding independent research, and is backing the implementation of the ‘right regulation’. Other existing platforms have similar problems, and future platforms will come across the same challenges. Should it be left purely to the social media industry to self-regulate, or should there be more robust state regulation, not just within a particular nation, but globally?

Extremist content online has been another area of contention for social media platforms. Last year, YouTube lost millions of dollars in revenue as major brands pulled advertising from the platform once it was revealed their adverts were appearing alongside terrorist content. Facebook, Twitter, WhatsApp, and Telegram face similar threats as extremist actors use these platforms to recruit and organise online. Of course, this is a huge challenge. The volume of content posted online is effectively limitless, and there is little philosophical agreement on where the line between hate speech and free speech is drawn.

Should this be policed by the platforms themselves, or should governments play a greater role in ensuring that companies keep their platforms free from dangerous activity? As these platforms operate globally across borders, can the actions of a single state make a difference, or does there need to be a global effort? Are free market forces alone, or the idea that ‘money talks’, enough to regulate these platforms, or is the withdrawal of revenue too reactionary?

One area that hits hardest with the public is the idea that social media giants are not paying their fair share of taxes. For example, in 2014, Facebook paid a total of £4,237 in corporation tax despite paying out £35 million in staff bonuses. Whilst this was legally compliant due to an accounting loss the company had made, the relatively minuscule amount enraged politicians and the public. Similar stories often appear about companies such as Google and Uber. This, again, is an area which is difficult to regulate, because unlike with traditional businesses, it is hard to determine where revenue is being made when a company operates over the internet.

On the back of these concerns, and many others, it does appear that the tide is turning against self-regulation, with a general consensus forming in the UK and abroad that it is insufficient as a control. But what is the ‘right regulation’, and how do we avoid veering into the realms of severe internet censorship? Do law-makers have the necessary understanding to regulate highly complex, global platforms? What category should social media platforms fall under for regulation? Are they media outlets or are they publishers?

It is, in many respects, irrelevant whether the data obtained by Cambridge Analytica swung the election in favour of Trump, or whether similar tactics were used in the Brexit referendum. The arguments for regulation were strong long before these votes. The power that companies like Google and Facebook hold is, and has long been, immense. Algorithms used on these platforms can influence public debate, play to certain emotions, and provide tools for malicious actors. The unprecedented level of data, and the lack of transparency around who is targeting whom online, should be reason enough to consider the need for strong laws. Whilst strong regulation is coming in the European Union in the form of the GDPR (General Data Protection Regulation), it is clear that the areas of concern are far broader than this.

Offline, we have laws in place to prevent the concentration of too much power in the hands of the few, whether in politics, in the media, or in business. Will self-regulation by existing companies today be enough to ensure protection when the future Facebook comes along tomorrow? The USA is now beginning to have a serious conversation about social media regulation; is it time we discussed it here in the UK, too?

Areeq Chowdhury is the Chief Executive of WebRoots Democracy.
