Posted by Sam Fenny - Memes and headline comments by David Icke Posted on 26 August 2022

What Facebook Should Do About ‘Covid’ Misinformation – Absolutely Nothing

Last night I submitted a response on behalf of the Daily Sceptic to the request from Meta’s Oversight Board for comment on the company’s COVID-19 misinformation policy. I tried to keep it fairly short and punchy.

The Daily Sceptic’s Response

I’m not going to respond to the questions directly. The way they’ve been drafted, it’s as if Meta is taking it for granted that some suppression of health misinformation is desirable during a pandemic – because of the risk it might cause “imminent physical harm” – and what you’re looking for is feedback on how censorious you ought to be and at what point in the course of a pandemic like the one we’ve just been through you should ease back on the rules a little. My view is that suppressing misinformation is never justified.

The first and most obvious point is that it’s far from obvious what’s information and what’s misinformation. Who decides? The government? Public health officials? Bill Gates? None of them is infallible. This was eloquently expressed by the former Supreme Court judge Lord Sumption in a recent article in the Spectator about the shortcomings of the Online Safety Bill:

All statements of fact or opinion are provisional. They reflect the current state of knowledge and experience. But knowledge and experience are not closed or immutable categories. They are inherently liable to change. Once upon a time, the scientific consensus was that the sun moved around the Earth and that blood did not circulate around the body. These propositions were refuted only because orthodoxy was challenged by people once thought to be dangerous heretics. Knowledge advances by confronting contrary arguments, not by hiding them away. Any system for regulating the expression of opinion or the transmission of information will end up by privileging the anodyne, the uncontroversial, the conventional and the officially approved.

To illustrate this point, take Meta’s own record when it comes to suppressing misinformation. In the past two-and-a-half years, you have removed, shadow-banned, or attached health warnings to any content on your social media platforms challenging the response of governments, senior officials and public health authorities to the pandemic, whether it’s questioning the wisdom of the lockdown policy, expressing scepticism about the efficacy and safety of the Covid vaccines, or opposing mask mandates. Yet these are all subjects of legitimate scientific and political debate. You cannot claim this censorship was justified on the grounds that undermining public confidence in those policies would make people less likely to comply with them and that, in turn, might cause harm, because whether or not those measures prevented more harm than they caused was precisely the issue under discussion. And the more time passes, the clearer it becomes that most if not all of these measures did in fact do more harm than good. It now seems overwhelmingly likely that by suppressing public debate about these policies, and thereby extending their duration, Meta itself caused harm.

Which brings me to my second point. Because there is rarely a hard line separating information from misinformation, the decision of where to draw that line will inevitably be influenced by the political views of the content moderators (or the algorithm designers). In practice, the act of labelling something “mostly false” or “misleading” is really just a way for those moderators or designers to signal their disapproval of the heretical point of view the ‘misinformation’ appears to support.

How else to explain the clear left-of-centre bias in decisions about what content to suppress? We know from survey data that content that challenges left-of-centre views is more likely to be flagged as ‘misinformation’ or ‘disinformation’ and removed by social media companies than content that challenges right-of-centre views.

According to a Cato Institute poll published on December 31st 2021, 35% of people identifying as ‘strong conservatives’ said they’d had a social media post reported or removed, compared to 20% identifying as ‘strong liberals’.

Strong conservatives were also more likely to have had their accounts suspended (19%) than strong liberals (12%).

This clear political bias is one of the reasons suppressing so-called conspiracy theories is counterproductive. One obvious case in point is Facebook’s suppression of the lab leak hypothesis in the first phase of the pandemic, which the Institute for Strategic Dialogue described as a ‘conspiracy theory’ in April 2020. This censorship policy was so counterproductive that today even the head of the WHO is reported to believe this ‘conspiracy theory’.

Okay, that particular conspiracy theory is very probably true. What about when a hypothesis is clearly false, such as the claim that Joe Biden stole the 2020 Presidential election? That’s still not a reason to censor it. That particular conspiracy theory, energetically promoted by Trump himself, played a part in the violent protests by Trump supporters that took place in Washington on January 6th 2021, and for that reason anyone sharing this theory on Facebook will see their posts instantly removed and will risk being permanently banned from the platform.


