Facebook – or Meta, as it now calls itself – has requested feedback from the public on its approach to handling COVID-19 ‘misinformation’. Or, rather, Meta has asked its Oversight Board for policy advice in this area, and the Oversight Board, in turn, is asking the public for comment.
These are the different approaches Meta has asked the Oversight Board to consider:
Continue removing certain COVID-19 misinformation. This option would mean continuing with Meta’s current approach of removing content that directly contributes to the risk of imminent physical harm. Meta states that under this option it would eventually stop removing misinformation once it no longer poses an imminent risk of harm, and it requests the Board’s guidance on how the company should make this determination.
Temporary emergency reduction measures. Under this option, Meta would stop removing COVID-19 misinformation and instead reduce the distribution of the claims. This would be a temporary measure and the company requests the Board’s guidance as to when it should stop using it if adopted.
Third-party fact checking. Under this option, content currently subject to removal would be sent to independent third-party fact checkers for evaluation. Meta notes that “the number of fact-checkers available to rate content will always be limited. If Meta were to implement this option, fact-checkers would not be able to look at all COVID-19 content on our platforms, and some of it would not be checked for accuracy, demoted, and labeled.”
Labels. Under this option, Meta would add labels to content which would not obstruct users from seeing the content but would provide direct links to authoritative information. Meta considers this a temporary measure and seeks the Board’s guidance on what factors the company should consider in deciding to stop using these labels.
The specific areas the Oversight Board is seeking “comment” from the public on are:
The prevalence and impact of COVID-19 misinformation in different countries or regions, especially in places where Facebook and Instagram are a primary means of sharing information, and in places where access to health care, including vaccines, is limited.
The effectiveness of social media interventions to address COVID-19 misinformation, including how they affect the spread of misinformation, trust in public health measures, and public health outcomes, as well as their impact on freedom of expression, in particular civic discourse and scientific debate.
Criteria Meta should apply for lifting temporary misinformation interventions as emergency situations evolve.
The use of algorithmic or recommender systems to detect and apply misinformation interventions, and ways of improving the accuracy and transparency of those systems.
The fair treatment of users whose expression is impacted by social media interventions to address health misinformation, including the user’s ability to contest the application of labels, warning screens, or demotion of their content.
Principles and best practices to guide Meta’s transparency reporting on its interventions in response to health misinformation.