The 1996 Communications Decency Act (CDA) makes online platforms immune from liability for harmful content posted by third parties. Platforms should enjoy this immunity only if they take reasonable steps to prevent harm, and removing harmful content would not violate free speech rights.
- Section 230 of the CDA gave online platforms like Twitter a safe harbor from liability for content posted by third-party users; this grant of immunity is out of date and should be revised.
- Social media platforms benefit society, providing forums for the Me Too and Black Lives Matter movements; however, these platforms have also been used to plan the Capitol riot, may enable terrorist recruiting, and facilitate the sexual exploitation of children.
- Section 230(c)(1) protects platforms from liability for harmful content posted by third parties, protection the platforms need to remain in business.
- Section 230(c)(2) allows platforms to police their sites for harmful content but does not require its removal; this provision prevents courts from classifying moderating platforms as publishers, which would make them liable for all user-generated content.
- Platform owners rarely make serious efforts to remove harmful content, because they benefit economically from it while suffering little reputational harm.
- The CDA should be reformed so that online service providers are immune from liability for third-party harmful content only if they take reasonable steps to address content they know to be harmful; some courts have already accepted this duty-of-care standard.
- Content that advocates the violent overthrow of the government, or that presents obscenity or child sex-abuse material, is not protected by the constitutional right of free speech; a duty-of-care standard therefore does not violate free speech rights.
- Responsible platforms would benefit from clear boundaries between their service and the harmful conduct of bad actors; by contrast, broader regulation would impose costs on all businesses, responsible or not.