A shot in the arm for platform integrity

Anti-vax misinformation has never been more pressing, but tackling it requires a new compact between platforms and policymakers

👋 Hi, it's Chris from the Tony Blair Institute.

On Tuesday the EU and the UK simultaneously released proposals for the regulation of online platforms. President-Elect Biden has also announced a ‘National Taskforce on Online Harassment and Abuse’, so it looks like policymakers have a busy year ahead. Creating a proper architecture for accountability is the right approach, but the really tricky questions remain to be convincingly answered: how to protect freedom of speech, exactly what content is allowed, and who decides.

In today’s newsletter we’re taking a look at Covid-19 anti-vax misinformation, social media use and tech regulation. Dealing with this challenge may hold wider lessons for online platforms, and global co-operation on it goes hand-in-hand with rolling out a vaccine.

🙏 If you enjoy this post please take a moment to share it on social media or forward it to someone who might find it interesting. Thank you!


by Benjamin Tseng and Chris Yiu


Despite broad agreement in the scientific community about the public-health benefits of vaccination, unfounded concerns around vaccines have achieved widespread acceptance. Many trace this back to Andrew Wakefield’s 1998 Lancet paper alleging a link between MMR vaccines and autism in children. But despite the paper’s subsequent retraction and Wakefield’s eventual disgrace, concerns about vaccine safety persist.

Experts have linked these worldviews to a dangerous resurgence in measles, and warn that a continued rise in anti-vax sentiment may endanger people with compromised immune systems and those who cannot be vaccinated.

Now, in the Covid-19 era, combatting anti-vax sentiment and the vaccine misinformation at its core has taken on new urgency.

The starting point is understanding the outsized role that online platforms play in how people access news. Published studies have shown that these platforms can unwittingly serve as a critical conduit for misinformation. A focus on engagement can end up elevating emotionally appealing misinformation over valid information, and fostering filter bubbles and echo chambers – all of which can strengthen and reinforce fringe worldviews.

Polling published as part of the TBI Globalism Project shows that belief in a range of conspiracy theories is surprisingly high across Britain, the US, France and Germany. When the results are cut by social media use, individuals in France and the US who reported using social media at least daily to discuss or read about current affairs were more likely to believe in vaccine- or Covid-19-related conspiracies.

Despite this evidence and statements from online platform executives on their opposition to anti-vax misinformation, online platforms have often been hesitant to act.

In some cases, the collision between internet business models and complicated problems like misinformation can be difficult to resolve. This is evident in the shifting policies that platforms like Facebook have had on vaccine misinformation: first banning ads that promote vaccine misinformation; then clarifying that this only applied to ads that specifically contained vaccine misinformation, not those that promoted anti-vax content; then banning all ads that discourage vaccination, while still permitting organic anti-vax content; and finally aiming to remove all false claims about vaccination.

Platforms are also wary of the cost and feasibility of moderating misinformation at scale. Public mistakes from normally authoritative health sources (like the WHO’s and CDC’s earlier statements on the value of mask-wearing) and the lack of artificial intelligence and natural language processing technologies suited to assessing the validity of content (as opposed to its meaning) mean platforms will need complex human moderation apparatuses — with the associated headaches around setting content policies, handling appeals, and managing moderators.

And many platforms do not want to be viewed as censors. For some this is a philosophical stance, as many social media platforms were founded to decentralise communication away from traditional media gatekeepers. But it is also a practical one, given both user expectations around privacy and freedom and the fear of excessive political scrutiny over content policies that may be seen as biased.

Promote and protect

To promote vaccination and protect public health, policymakers need to recognise and address the central role that the internet plays in the spread of anti-vax misinformation.

First, we need a regulatory framework that does a better job of balancing harm reduction with freedom-of-speech concerns, and recognises that the most effective means of responding to misinformation will vary by platform and over time. Strong transparency requirements and systems for procedural review and audit, rather than ironclad rules, will help us adapt to changing technologies and situations while also fostering trust with the public.

Second, governments should make it easier for online platforms to act against anti-vax misinformation. Public-health agencies should provide much more accessible public-health information for platforms to amplify and use for fact-checking. Governments should also exercise their regulatory authority and convening power to encourage platforms to experiment with new strategies as well as share learnings and data they have gathered on anti-vax misinformation with researchers and each other.

Lastly, public-health officials should leverage online platforms to develop social-mobilisation campaigns around vaccination. Public-health agencies should also use these platforms to monitor anti-vax conversations and adapt their messaging in response to new talking points and themes. When strategically advantageous, this might extend as far as participating directly in those conversations, in order to reach “undecided” or “soft sceptic” individuals who might otherwise become further radicalised.

With vaccines for Covid-19 now within reach, vaccine misinformation may be the final big hurdle to clear. Policy that directly and flexibly addresses the role online platforms have to play in curbing it will be essential to putting the pandemic behind us.

Read the full policy report

Four key charts

As part of the TBI Globalism Project, we have published polling on belief in conspiracy theories, cut by self-reported use of social media to discuss or read about current affairs. The polling provides some interesting findings on the relationship between social media use and belief in conspiracy theories, and some pointers for further research into how the type and amount of social media use relate to such beliefs. You can find more about the research and some limitations of this evidence here.

1. Anti-vaccination conspiracies

There is a high level of belief in vaccine conspiracy theories both among those who frequently use social media to discuss or read about current affairs and among those who do not. Notably, in the UK the difference was negligible.

2. Conspiracies about pharma companies

When this poll was conducted in July-August 2020, a significant minority across all countries surveyed believed pharmaceutical companies were deliberately delaying the development of a vaccine in order to drive up the price.

3. Coronavirus conspiracies

A small minority of heavy social media users in all countries surveyed think Covid-19 is a myth, made up by powerful forces.

4. 5G conspiracies

The claim that Coronavirus is caused by 5G is a prominent conspiracy theory, but only a small, though non-trivial, minority of people in all countries surveyed believe it to be true.