By Areeq Chowdhury.
Social media relays, in black and white text, the best and worst messages about the world. These messages are delivered directly to us in our homes, on devices we are conditioned to become addicted to. The messages which generate the most division generate the most engagement, and therefore the greatest reach. Reach matters to social media companies because it is the service which creates profits for shareholders: a greater reach means a greater number of potential customers, and the platform with the greatest reach can charge the highest prices for its advertising space. This, fundamentally, is how social media works.
We, as users, follow our human instincts to connect with others by conversing on the topics which interest us most: sports, politics, celebrity culture, or whatever else it may be. What better way to do so than on platforms which allow us to connect seamlessly with others around the globe? Social barriers we may face in offline conversations, such as the way we look, the way we dress, and the way we speak, are irrelevant on these online platforms. It is easier to be confident online than offline because these barriers need not exist. This confidence emboldens us to make friends, own our identities, and speak out against injustices.
Platforms are unable, however, to make judgement calls on which emboldened confidence is good and which is bad. After all, they are just websites and mobile phone applications, and good versus bad has been an ongoing philosophical debate since the dawn of time. Wars have been fought over this question and continue to be fought today.
As we should expect, those we deem to be bad, as well as those we deem to be good, are active users of social media. The emboldened confidence induced by social media platforms applies as much to them as it does to us. Those who espouse and promote messages which degrade others based on race, religion, sexuality, gender, and other peaceful aspects of victims’ humanity equally find confidence through social media. They, too, are emboldened to speak out and connect with others. This, in turn, strengthens their resolve, their networks, and their organisation. There exist, today, countless online forums, chatrooms, and groups dedicated to discussing the best ways to degrade others, even if it is not phrased in such overt terms.
Individuals who espouse such views are now better equipped than at any point in history to do so. So, in theory, are the victims of those views. However, given the uphill struggle and the lack of collective, real-world power experienced by victims, social media networks are likely to advantage members of the empowered majority who seek to entrench that power further. Hence, social media platforms should not necessarily be viewed as entities which equalise power imbalances.
This raises big questions for social media giants and for society at large. The social media giants are unlikely ever to have intended to contribute towards negativity in society. Their aims were likely financial and based in positive ideals (e.g. connecting people or enabling access to information). How, then, can they prevent their platforms from being used to promote negative outcomes related to discrimination, extremism, disinformation, or child abuse? Society at large (governments, media, and the public) has rightly been demanding answers to this question. The answer is often centred on actions which can be taken by social media companies themselves rather than actions which can be taken by society. Proposed solutions have therefore involved adjustments and tweaks to social media networks: creating the ability to ‘mute’, removing the ability to ‘like’, or promoting ‘fact-checking’ labels. None of these solutions is sufficient to combat the problems we see playing out online. They are not bold enough and do not begin to reach the heart of the issues we face.
Throughout history, big challenges have required big responses. The National Health Service was the response to a lack of universal healthcare. The minimum wage was the response to low pay. The state pension was the response to old age poverty. These responses, whilst intricate and imperfect, were bold and ambitious. In addition, they involved significant expenditure.
The challenges involved with social media are different, of course, but they demand a similarly bold response, rooted in significant expenditure. This response, I believe, should be the establishment of a ‘Civil Internet Tax’ levied on large social media companies. In this essay, I provide an overview of the tax, including the moral, business, and political case for it, as well as further discussion of the challenges involved in combating bad things on social media. My aims are threefold. First, I wish to provide clarity to myself on how this idea could work. Second, I wish to convince you, the reader, that this is an idea worth pursuing. Finally, I wish to move the debate beyond online interventions and regulation, towards a focus on taxation and offline responses.
Origins and overview of the Civil Internet Tax
The Civil Internet Tax would be a new tax levied on large social media companies to raise money for offline anti-discrimination and digital literacy initiatives. In order to protect startups and smaller companies, the tax would apply only to platforms with at least a certain number of users in a country’s jurisdiction (e.g. one million). The tax would be akin to a simple tariff, levied on the number of users within a jurisdiction rather than on profit or revenue. This would make it difficult for tax avoidance strategies to be deployed, whilst at the same time recognising the value of each individual user. Important considerations include whether the platform generates significant revenue or is not-for-profit, and how to determine the number of users within a jurisdiction (e.g. through self-reporting or external audits).
Example: Social media giant, Flutter, generates significant global revenues and self-reports that it has 30 million users in the UK. The UK Parliament has voted to set the new Civil Internet Tax at a rate of £1 per user, per year. This rate is applied to Flutter and raises £30 million in year one. This money is ring-fenced for expenditure directly related to combating online harms (e.g. anti-discrimination and digital literacy initiatives) and cannot be spent on other areas of public expenditure (e.g. transport and housing). End Digital Exclusion, a community organisation focused on educating elderly users in responsible social media use, successfully applies to the Government’s Civil Internet Fund to run projects teaching elderly internet users how to fact-check claims online as a means of reducing the impact of disinformation. An evaluation of the project finds that participants become 75% less likely to share a ‘fake news’ story online.
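For concreteness, the arithmetic can be sketched in a few lines of code. This is a minimal illustration only; the threshold, the rate, and the figures for the fictional platform Flutter are the hypothetical values from the example above, not firm proposals.

```python
# Minimal sketch of the Civil Internet Tax arithmetic described above.
# The threshold and rate are the hypothetical values from the example;
# actual figures would be set by Parliament.

USER_THRESHOLD = 1_000_000  # platforms below this size are exempt
RATE_PER_USER_GBP = 1.00    # £1 per user, per year

def civil_internet_tax(users_in_jurisdiction: int) -> float:
    """Return a platform's annual liability in GBP under the sketch above."""
    if users_in_jurisdiction < USER_THRESHOLD:
        return 0.0  # protects startups and smaller companies
    return users_in_jurisdiction * RATE_PER_USER_GBP

# The fictional giant 'Flutter' self-reports 30 million UK users,
# giving a year-one liability of £30 million.
print(civil_internet_tax(30_000_000))  # 30000000.0
print(civil_internet_tax(500_000))     # 0.0 (below the threshold)
```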
The idea for the Civil Internet Tax came about as a result of our research in 2018 examining the rise of online abuse in political debate. This work involved interviewing victims of severe online abuse, organising roundtables with technologists and campaigners, and running focus groups with young women and people of colour. In addition, we carried out manual and automated sentiment analysis on 53,000 tweets targeted at prominent journalists and politicians in the UK. The final report, Kinder, Gentler Politics: Tackling online abuse in political debate, can be read here. Our policy recommendations were designed around four key themes: sanctions, reform, education, and oversight. Our headline recommendation, the Civil Internet Tax, cut across all four of these themes and underpinned many of the other recommendations contained within the report.
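As a brief technical aside, automated sentiment analysis can be as simple as lexicon-based scoring: counting positive and negative words in each tweet. The sketch below is purely illustrative and is not the method used in the report (which does not specify one); the word lists are tiny stand-ins for the large weighted lexicons used in practice.

```python
import re

# Illustrative word lists only; a real sentiment or abuse lexicon would
# contain thousands of weighted terms.
POSITIVE = {"great", "thanks", "brilliant", "agree"}
NEGATIVE = {"abuse", "hate", "disgusting", "idiot"}

def sentiment_score(tweet: str) -> int:
    """Positive score = broadly positive tweet; negative = broadly negative."""
    words = re.findall(r"[a-z']+", tweet.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("Brilliant speech, thanks!"))  # 2
print(sentiment_score("You idiot, I hate this"))     # -2
```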
Pigouvian taxes
The concept of using tax as a lever to tackle bad things is not new; in economics, such a levy is known as a ‘Pigouvian’ tax, named after the English economist Arthur Cecil Pigou. Pigou is most renowned for his work developing the idea of economic ‘externalities’. An externality is a cost or benefit which affects a third party who did not choose to incur it. An example is the CO2 emitted by a factory: this is a negative externality, as it contributes to global warming and thereby harms wider society. An example of a positive externality is the reduction in traffic congestion caused by more people choosing to walk to work. In general, a government will wish to promote activities which create positive externalities and to take action against negative ones.
A Pigouvian tax is one such action against a negative externality. These taxes are applied to individuals or businesses which engage in activities that create adverse side effects for society. The argument is that the cost of these negative externalities is borne not by the producer but by society; a Pigouvian tax is therefore intended to redistribute the cost back to the producer. Such taxes can have two key effects: the first is to incentivise the producer to reduce production of the externality, and the second is to help society deal with its consequences.
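In standard textbook notation (a general formulation, not one used in the essay itself), if q is the level of the activity producing the externality, the reasoning runs as follows:

```latex
% Marginal social cost (MSC) is marginal private cost (MPC) plus the
% marginal external cost (MEC) imposed on third parties:
\[
  \mathrm{MSC}(q) \;=\; \mathrm{MPC}(q) + \mathrm{MEC}(q)
\]
% The classic Pigouvian prescription sets the tax rate equal to the
% marginal external cost at the socially optimal output q*:
\[
  t^{*} \;=\; \mathrm{MEC}(q^{*})
\]
% so the producer, now facing MPC(q) + t*, internalises the harm.
```

A flat per-user levy such as the Civil Internet Tax is, of course, a much blunter instrument than this idealised rate, but the underlying logic is the same: move the cost of the harm back onto its producer.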
With the Civil Internet Tax, the social media company is the producer and the externalities are the so-called ‘online harms’ (a term used by the UK Government to refer to bad things online) such as disinformation and abuse. These externalities can exacerbate mental health problems, encourage hate crimes, and undermine trust. These are costs which are not currently borne by the social media giants (the producers), but by governments and individuals. The Civil Internet Tax would, therefore, redistribute this cost back to the social media company. In this case, it would be accepted that there is little which social media companies can do themselves to combat these harms; instead, the second key effect would apply, with society better resourced to address the harms through offline initiatives.
There are flaws, of course. One such flaw is that the tax may disincentivise companies from taking any action themselves. However, as with any policy, it would be one of many measures to combat online harms. In addition, market forces would still play their role in incentivising companies to clean up their platforms: consumers will not want to spend time on a platform where child abuse content is rampant, for example. There are certain minimum standards which all platforms should want to uphold, regardless of any government intervention. The primary purpose of the tax is to help address issues which the platforms cannot or will not act upon.
Challenges involved with social media regulation
To date, much of the debate on this topic has been focused on how to force social media companies to ‘take action’ themselves. In particular, there has been a debate on whether these companies should be viewed as ‘platforms’ or ‘publishers’. If they are platforms, then they are no more responsible for the content uploaded onto their sites than a brick wall would be for the graffiti sprayed upon it. If they are publishers, then they are treated in a similar manner to newspapers or television outlets and are held liable for the content they host. This debate has contributed to years of inaction on all sides in combating online harms. The Civil Internet Tax does not treat companies as platforms or publishers, but as producers.
The main challenge with social media regulation is the sheer amount of content. Consider a library. A library is able to store and share thousands of books because it maintains a record of every book checked in and checked out, and because it employs trained librarians who can handle the work that record-keeping requires. To recoup the cost of books going missing, the library levies fines against users who return books late. Finally, the entire cost of running the library is supported through local taxation.
Social media platforms are similar in the sense that they store and share content. The difference, however, is that the content involves millions of posts uploaded on a daily, if not hourly, basis. The content is also unknown before it is posted and can arrive in many different languages. To handle this amount of content (and ensure it meets the company’s policies), the company must invest heavily in its librarians (human and automated content moderators), and even then it is likely to be insufficient. In addition, as a private entity, it relies on private funding (through advertising). Its financial incentives therefore differ from the public good of a library. Social media platforms are there to serve paying customers (advertisers) who want greater reach and greater engagement, and this reach and engagement can be inhibited by safety measures and content moderation.
These conflicting incentives, and the impossible task of mass content moderation, are the crux of why we are where we are today. As a result, the main measures applied by social media companies have focused on filtering content and/or fact-checking it. Filtering involves blocking content before it is posted (e.g. if it contains known child abuse material), deprioritising content in a newsfeed, or placing ‘explicit content’ labels over posts. Fact-checking involves annotating posts with links to fact-checking services or trusted news outlets to provide context. Other measures include reporting posts or users for malicious activity. All of these measures have faced challenges in recent years, particularly when the user engaging in malicious activity is a high-profile political leader; in these cases, company policies are often suspended in favour of the ‘public interest’.
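To make ‘filtering at the point of upload’ concrete, its simplest form is known-content matching: comparing a digest of the uploaded file against a blocklist of digests of previously identified material. The sketch below is a deliberate simplification; production systems (such as Microsoft’s PhotoDNA) use perceptual hashes that survive resizing and re-encoding, and blocklists are maintained by bodies such as the IWF and NCMEC rather than hard-coded.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-bad files. The single
# entry below is the digest of the empty byte string, used here purely so
# the demo is self-contained.
KNOWN_BAD_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def allow_upload(file_bytes: bytes) -> bool:
    """Block a file at the point of upload if its digest is on the blocklist."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest not in KNOWN_BAD_DIGESTS

print(allow_upload(b"holiday photo"))  # True: not on the blocklist
print(allow_upload(b""))               # False: digest matches the entry above
```

Exact hashing is trivially defeated by changing a single byte, which is precisely why perceptual hashing and, beyond it, human review are needed, and why moderation at this scale is so expensive.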
Retrospective vs proactive solutions
Currently, social media companies can only really deploy retrospective actions to combat harms online. With the exception of filtering at the point of upload, most measures (e.g. reporting tweets) occur after the event and are focused on damage limitation. In cases of ‘fake news’ or violent content, failing to act at the point of upload can mean it is too late to contain the damage, as content can be copied, stored, and distributed at rapid speed online. We saw this with the video footage of the New Zealand terror attack in 2019 and, more recently, with President Trump’s tweet on ‘looters’ and ‘shooters’. This is particularly problematic when it comes to instances of revenge pornography and videos of abuse: content which, under no circumstances, should ever be shared online.
There is therefore a need for proactive solutions to the challenge of online harms. What actions can be taken before a violent action occurs and before its subsequent upload onto the internet? What role can social media companies play in supporting efforts to prevent users engaging in malicious activities in the first place, and how can states engage these companies in doing so?
Education is one way of doing so. To prevent the spread of sexually transmitted diseases, we educate young people in safe sex and ensure protection is readily available in society. Similarly, to reduce the number of road traffic accidents, we have designed a system of training and licensing to ensure that drivers can take to the roads safely. Can we learn from these examples and design a preventative approach which ensures safe social media use?
This, essentially, is the thinking behind the Civil Internet Tax. It is an acknowledgment that social media is a place where harm can occur, and that the state will be required to intervene to protect its citizens. The real, long-term solutions will need to take place in the offline world, requiring significant and continuous investment. Reporting posts, or hiding them, does not address the root causes of online harms and does not induce long-term progress. For that, we will need to be much bolder and more ambitious.
Offline solutions
What, then, could these offline solutions look like? Are there offline solutions which can reduce discrimination and promote digital literacy? Ultimately, these solutions will be for governments and civil society to work on. Top-down examples may include investment in nationwide advertising campaigns, regulatory oversight, curriculum changes, public libraries, and policing. Community-led responses may include investment in digital literacy schemes, workspaces for racial justice campaigns, and support for interfaith activities. Some solutions may even appear, on the surface, to be entirely unrelated to online harms: using the Civil Internet Fund to invest in youth clubs, for example, could be argued to reduce the likelihood of individuals engaging in dangerous activities which then spill onto the internet.
None of these solutions would bring about overnight change. Indeed, campaigns against discrimination, abuse, and violence in society are multigenerational. All we can hope to do, as individuals and organisations, is to contribute positively and meaningfully towards them to the best of our abilities. The aim of the Civil Internet Tax would be to bring about meaningful change, at whatever speed that change occurs. The problem, currently, is that the ‘quick fixes’ of filters and fact-checks do not, and cannot, bring about meaningful change on their own.
The moral, political, and business case
There is a moral, a political, and a business case for introducing a Civil Internet Tax. The moral case is grounded in the recognition of the significant role which social media companies play in the dissemination of disinformation, the promotion of divisive rhetoric, and the spread of online abuse. This role is not necessarily accidental but arises, instead, through design and a failure to act early. Given their role in exacerbating the problem, there is a strong argument for them to shoulder the financial burden required to solve it.
The political case is grounded in the need to be seen to act and to be seen to deliver results. Politics is often the act of visible representation and lawmaking. Introducing a Civil Internet Tax is a good example of a visible intervention which can be measured and scrutinised in a way which cannot be achieved with existing interventions (consisting primarily of public statements calling on companies to ‘do more’).
The business case is grounded in the fact that social media companies will need to invest large amounts of money if they are serious about combating these issues. The amount they would need to spend on human and automated content moderation systems may, in the long run, far exceed the cost of the more effective offline measures funded through payment of the Civil Internet Tax.
Failing better
‘Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.’
This famous quote from the Irish novelist and Waiting for Godot playwright Samuel Beckett has become a mantra for Silicon Valley. It has helped foster an attitude which allows for failure and acknowledges that failure can contribute to learning and ultimately pave the way to progress. It is a useful mantra, and one which should be applied to the topic of online harms. Undoubtedly, many measures will fail in their ultimate task of eradicating disinformation, abuse, or whatever else it may be. This has certainly been the case for Silicon Valley executives, who do not pursue perfection but have instead adapted their approach to these issues over the years. This attitude, however, has yet to permeate the walls of parliaments and governments which, out of fear of getting it wrong, have been paralysed in their response to the challenges of social media. Debates on platform vs publisher have inhibited action in areas where governments can already act, such as the introduction of a Civil Internet Tax.
Whilst nothing is done, victims are having their lives destroyed, elections are being undermined, and division is tearing through society. It is no exaggeration to state that these are monumental challenges which will become tougher with each day, month, and year of inaction. Governments and civil society will need to be bold in their response. Timidity in public policy will not forge a path towards a civil internet, and we do not need to spend time reinventing the wheel. Taxation is society’s main lever for change; we should not be afraid to use it.
Areeq Chowdhury is the director of WebRoots Democracy.