ACADEMIC ARTICLE SUMMARY
Online Public Health Misinformation, and How to Tame It
Article Source: Harvard Journal on Legislation, Vol. 60, pp. 1-58, 2023 (forthcoming)
Summary:
Public health misinformation (PHM) causes substantial harm. Online platforms’ efforts to combat PHM are not sufficiently effective. “Soft” regulation such as governmental codes of conduct could help control PHM.
Policy Relevance:
Governmental action will help address PHM. Regulation of algorithmic rankings might not violate free speech rights.
Key Takeaways:
- Health claims can be reviewed to see if they align with the best available scientific information, making PHM easier to identify than some other types of misinformation.
- Platforms' efforts to combat PHM include fact-checking, content moderation, promotion of reliable information, and user sanctions, but these efforts fall short because of structural limitations.
- Platforms' main business interest lies in attracting more users to content.
- Platforms need not obtain health information from authoritative sources.
- Determining how much false speech online should be tolerated is a political question.
- Broad laws restricting PHM might violate free speech rights, as the Supreme Court has ruled that some false speech is protected by the First Amendment.
- Government could use "soft regulation" to address PHM by calling for platforms to self-regulate, although soft regulation can violate constitutional rights if the government threatens platforms with more restrictive laws or other state action.
- Governments can provide platforms with voluntary codes of conduct, such as those used in the European Union; the EU's PHM code calls for platforms to:
  - Invest in technology to prioritize accurate information.
  - Make misinformation less visible.
  - Report on and make transparent their efforts against disinformation.
  - Take action against inauthentic user accounts and stop the monetization of disinformation.
- With “voluntary enforcement” (also called “inverse regulation”), agencies known as “Internet Referral Units” find content that violates the platform’s Terms of Service and ask the platform to remove it; such systems are used to combat terrorism and hate speech in the UK, Israel, and the EU.
- The proposed Filter Bubble Transparency Act (FBTA) would require platforms to offer an "algorithm-free" option, because the algorithms used to promote engaging content often promote misinformation.
- The Eleventh Circuit mistakenly ruled that a Florida law similar to the FBTA violated free speech rights by interfering with platforms' editorial judgment, failing to recognize the difference between content moderation and algorithmic amplification.
- Content moderation relies on machine learning but always targets speech based on its subject matter or message, so regulation of content moderation is not content neutral; by contrast, algorithmic amplification is indifferent to content, ranking posts based on the number of likes, friend relationships, and users' past interactions with similar posts.
- The exercise of editorial judgment by newspaper editors differs from the mathematical operation of a ranking algorithm; regulation of algorithmic ranking could survive First Amendment analysis under intermediate scrutiny or, for PHM, even strict scrutiny.
  - Generally, the government could show a substantial interest in controlling misinformation, as required under intermediate scrutiny.
  - The government can show a compelling state interest in controlling PHM, as required under strict scrutiny.