Disinformation and the 2019 UK General Election

By Maria Benlaiter.

On January 21st, WebRoots Democracy hosted its first event of the new year, discussing the recent general election, fittingly at the Houses of Parliament. We were joined by Darren Jones MP (Labour Party), Charlotte Jee (MIT Technology Review), Carl Miller (Centre for the Analysis of Social Media), Andrew Lewis (University of Oxford), and the writer, Yassmin Abdel-Magied. Areeq Chowdhury from WebRoots Democracy chaired the discussion, which centred on the disinformation strategies deployed during the election, their implications for democracy, and the political reforms required. Each speaker highlighted issues they saw as central to the dialogue, then took questions from the audience.

Areeq began by contextualising the extent to which disinformation could affect elections, as well as general political views. One example he gave was the Future Advocacy deepfake videos of Boris Johnson and Jeremy Corbyn endorsing one another for Prime Minister, released during the election campaign. In the videos, the two leaders identified themselves as deepfakes; however, this would not be the case if the creator wanted to deliberately deceive their audience.

Disinformation vs Misinformation

Charlotte usefully opened her statement by distinguishing between ‘disinformation’ and ‘misinformation’, terms often used interchangeably. ‘Misinformation’, she explained, is simply incorrect information, while ‘disinformation’ is loaded with more malicious intent: it is not only incorrect, but is spread by people who are aware of its inaccuracy and intend to mislead their audience. The problem of ‘fake news’ is not a new phenomenon, despite the specific term’s more recent coinage. However, it has been exacerbated by the speed at which information can be shared, and supercharged by algorithms which target specific online users with ‘fake news’ that reinforces their biases and ‘evidences’ their existing views.


Andrew also highlighted confirmation bias and linked it to the deepfakes. As some particularly attentive viewers commented, there were inconsistencies between the fake footage of the two party leaders and their real mannerisms. But, Andrew explained, someone who agrees with the content is more likely to overlook the incongruities of a deepfake.

The use of deepfake technology could escalate existing societal discord if it is further developed and used to propel disinformation, especially as people are more likely to believe something if there is a video of it. As the age-old saying goes: if you didn’t snap[chat] it, it didn’t happen.

‘Dark influence’

Dark influence is the idea that there are forces which intentionally make some things more visible than others. While military warfare developed considerably during the two world wars, the doctrine of targeting the home front through influence became more ambitious during the Cold War, where it was known as ‘active measures’. From the 1990s onwards, the internet has made it much easier. Militaries made an epochal pivot at that moment: they went from thinking of information as a tactic of war to thinking of it as a theatre within which conflict happens – air, sea, land, and information are the four modern theatres of war.

Carl argued that campaigns which appear to propel false information are not actually trying to lie or to make truth claims. The end goal is influence; fake news and lying are tactics in the fight for influence. These campaigns are not trying to change your mind, but to make you angrier about something you already think, by making some information more visible and prompting you to target a particular person through the beliefs you already hold. Much of it comes down to cherry-picking facts.

Do tech companies benefit from ‘fake news’?

Companies like Facebook don’t remove content because they are worried about creating too many false positives, Carl contended. On every platform the priority is growth, not revenue: they are easy to join, easy to use, and their content creation systems are optimised for engagement. The platforms themselves are not trying to disseminate fake news or mould their users into extremists; their aim is to keep users on the app for as long as possible, which in turn generates advertising revenue. However, fake news merchants will engineer false information, and those views will be amplified to people who have already shown some interest.

Arguably, if Facebook pages or Twitter accounts which share fake information are the main source of news for a large portion of the population, especially around election time, users are likely to spend more time on the respective social media platforms. The companies may not intentionally want to spread disinformation, but it’s their algorithms which peddle the content to the most susceptible users.

Platforms engineered to grow really quickly are not good for our political health.

Preserving the truth from the relentless pair: ‘fake news merchants’ and algorithms

Carl provided a strong response, along with some practical rules for engaging on social media. He argued that we, the general population, are not the victims of these campaigns; we are willing participants. Forces pushing ‘fake news’ conduct a great deal of research into dopamine studies and behavioural economics. The ultimate aim is to activate our base psychological proclivities – advertisers, for example, look for immediate responses to their content. ‘These cognitive biases that are being exploited is who we are: we’re lazy and we’re habitual and we look to confirm our view of the world.’

He outlined ‘7 rules of engagement’ to protect ourselves from online manipulation, which have nothing to do with source-checking and everything to do with behaving consciously.

Yassmin advocated for greater awareness while using social media platforms, where much of the disinformation is disseminated. ‘We don’t actually have to be on any of these platforms, we’re there for whatever benefit we think we get out of them. But there are people who want to create chaos and a lack of trust, so we need to be more intentional with how we engage’.

Charlotte added: ‘Don’t quote tweet the thing you don’t like, telling everyone how much you don’t like it. All you achieve is increasing the exposure of that tweet.’

We are not the victims of these campaigns, we are the willing participants.

How should social media be regulated? What should the government be doing to address the issue of disinformation?

Responding to this raises a host of further questions, and a lot of thought needs to be given to how regulation could work in practice. As Charlotte put it: ‘Who appoints? Who sits on that body? Who decides what politicians are allowed to say? Why can’t Facebook recognise and stop coordinated behaviour if it has such advanced moderation systems?’

There are some impartial institutions that exist to settle empirical questions, but if we get to a point where the public is suspicious of their statistics because of the proliferation of fake news, we have a big problem, she added.

The current discourse around the regulation of tech companies feeds into the longstanding debate on how much the government should govern private companies, and the actions of citizens. With many nations’ legal systems having struggled to keep up with the rapidly changing digital landscape, tech giants have become almost self-governing, while having a previously inconceivable impact on civil society.

When prompted to suggest solutions, Charlotte explained that tech companies will tell you the answer is ‘more tech’. Facebook-owned Instagram is looking at removing likes, which is said to have many benefits, one of which is that users will not know how popular a post, or the messaging behind it, is. There has also been a suggestion to develop artificial intelligence that can detect deepfakes and make viewers aware of a video’s nature. Politicians and regulators should also look into ‘breaking up’ companies to decrease the concentration of power, she added. Currently, Facebook owns the widely popular Instagram and WhatsApp, as well as a virtual reality company, among others.

Darren, who is the co-chair of the Parliamentary Information Communications and Technology Forum, suggested that the powers of the enforcers responsible for curbing fake news should be reviewed. Currently enforcement is limited, with the maximum fine for missteps being £20,000. Andrew agreed, advocating for punitive measures against people spreading disinformation and a crackdown on advertisers and other parties who benefit financially from fake stories.

‘Trust’

A recent Edelman report revealed a trust paradox which could apply to the recent election. With economic growth comes greater trust in institutions; however, with greater income inequality, there is no trust in anything – as Yassmin poetically put it: ‘The sources of information, the rivers are poisoned. It’s so hard to purify that, it’s not an individual problem when the streams of information quality are toxic.’


Having more conversations offline with people who have different views is one way to address the growing polarisation that has seeped from our online bubbles into reality, she recommended. ‘We need to slowly regain trust in society and we need to have these small conversations with people we don’t agree with.’

We’d like to thank all of our wonderful guests and attendees! Keep an eye out on our website and social media channels for upcoming events and other ways to get involved in our work.

Maria Benlaiter is a volunteer with WebRoots Democracy.
