Jonathan Zittrain Explores “How to Fix Twitter and Facebook”
Publication Date: June 29, 2022
The risk of private ownership of the public square is that one person’s views could end up privileged over all others. The risk of public ownership of a public square is that, given social media’s innumerable and inevitable controls over which speech to favor, those in government with oversight could unduly exercise that power over public speech—exactly the situation that the First Amendment was drafted to prevent. Understanding each of these dangers can point us toward a third option.
- Jonathan Zittrain, professor of law and professor of computer science, Harvard University
In an article written for The Atlantic in early June, Harvard law professor Jonathan Zittrain presents his thoughts on how “community governance” can be nurtured and supported through practices and technologies to address the content moderation challenges of online social media sites. “How to Fix Twitter and Facebook” discusses the risks of both privately owned and government-hosted social media sites, then explains the value, as well as the potential pitfalls, of looking to established groups and their invested members to regulate the behavior of all who participate in the groups’ online activities.
Below are a few excerpts from “How to Fix Twitter and Facebook” by Jonathan Zittrain (The Atlantic, June 9, 2022).
The Risks of Private Ownership Over Online Social Media
Professor Zittrain refers to Twitter and Facebook as “two very significant pieces of the world’s digital speech infrastructure.” At the time of the article’s publication (June 9, 2022), Elon Musk was in an on-again, off-again takeover bid for Twitter, and Mark Zuckerberg held veto power as CEO of Meta, Facebook’s parent company.
These platforms aren’t just offering run-of-the-mill merchandise and services, as most other businesses do; they carry and shape incalculable quantities of civic speech that can set the agenda for traditional media. For these companies to be held in completely private ownership creates real risks, no matter who the owner is.
In the case of private ownership, harnessing a powerful platform to a single person’s preconceived political agenda risks misinforming the public, with no mechanism for an internal check or pushback. And this can occur in ways both direct and subtle—not just using the platform to knowingly tell lies, but also carefully elevating individual purported truths to paint a picture that amounts to a lie. Propaganda can work, and when it does, it serves the interests of its creators rather than of those who believe it and then act upon it.
Is Government-Operated Social Media a Better Option?
Professor Zittrain explains that the “government works best in a speech ecosystem if it can broadly empower everyone—for example, by providing inexpensive municipal broadband—and is otherwise constrained, severely and appropriately, by the First Amendment in what speech it can limit.”
A government-hosted Twitter or Facebook would create a circumstance in which every content-moderation decision could justifiably prompt a federal case, with little public consensus on what the right outcome should be. That’s why government-run social media is rightly a nonstarter, even as solo private ownership is so ill-fitting.
A Third Option, Community Governance
Community governance occurs whenever a group of people act together in affirmative concert. Professor Zittrain provides examples: “It occurs as people pass a beer along a row at a baseball game; it can also be seen within a classroom or a jury room. It can manifest at Burning Man, in a stuck elevator, and among communities of worship. It’s in bowling leagues, Rotary clubs, and friendly poker games. It can be as chaotic as a spontaneous protest or as orderly as a self-forming queue at a bus stop.”
Such cooperation emerges when people don’t expect ready recourse to any outside authority. Instead, they try to work out their problems with one another, or to pursue common opportunities, in ways that get past their fear or distrust of others. Where this type of organization works, often by starting small, groups develop new norms to help them grow without blowing up.
Of course, it doesn’t always work. … For example, a natural disaster spurs mutual aid in some circumstances and sparks violence in others. Social media is designed to elicit some social behaviors over others, usually those that result in maximal engagement with the platform, and this has very little to do with whether people find trust in one another, or come away better or worse informed. At their best, online platforms have facilitated life-changing friendships, including for people who might lack a comfortable community in the physical places where they live, work, or study. At the same time, a medium built around short posts amplified to large audiences on the basis of outraged reaction can, unsurprisingly, reward and bring out the meanness in people.
Over the years, we’ve seen community governance working in the online world, from little glimmers to real beacons: LiveJournal allowed anyone to be a diarist; Couchsurfing facilitated house-sharing without any commercial element (at least until, under pressure from Airbnb, it went for-profit); Wikipedia has created small communities of editors one encyclopedic article at a time; and Reddit, at its best, has enabled topical communities to form without lumping all of its 50 million daily users into one unbounded feed. Community governance thrives through practices and technologies that let small groups form, with frontline content moderation from leaders who themselves are long-term members, know the group’s norms, and can enlist its help in reinforcing them. This way of doing governance also involves tools to help people address privacy concerns by sharing their identities in a safe partial manner, whether they’re participating in a group for cancer survivors, for HVAC repair people, or for anime enthusiasts.
Community governance can help draw difficult lines, and do so in a way that confers legitimacy on the participants’ decisions. That’s why I proposed, several years ago in light of Facebook’s declining to evaluate its torrents of political ads for truth before running them, that high-school students, as part of their graded coursework, work together to judge political ads slated to run on social media. The students can explain their reasoning (with any dissents) for what flunks a disinformation test, and have their judgments stand, one ad at a time. A popular platform under less centralized control can also lower the stakes of moderation decisions. For example, deplatforming Donald Trump needn’t be a top-down, all-or-nothing decision but rather an emergent phenomenon among some audiences but not others. (Limited community governance on this issue is already happening organically on Twitter to an extent, with many of his enthusiasts tweeting the banned former president’s off-site pronouncements into their timelines.) More dispersed, self-governing platforms would avoid the phenomenon of a Person or Topic of the Day creating site-wide pile-ons. Once we escape the false choice between entirely private and government-run, with decisions instantly universal, new possibilities open up.
Can we govern ourselves? Can we trust strangers? These questions go to the heart of a functioning civic society. No answer is preordained, but getting to a good one requires building distributed architectures, online and off, to foster cooperation among the many and to contend with the few who want to wreck it. Simply hoping either that the right person buys Twitter and imposes more enlightened control over its users’ behavior, or that government authorities can successfully regulate billions of everyday exchanges among people, seems much more wishful than the idea of making community governance work where we can.
Read the full article on The Atlantic website, “How to Fix Twitter and Facebook.”
Explore Professor Zittrain’s Thoughts on Community Governance and Content Moderation Further:
- “Twitter’s Least-Bad Option for Dealing With Donald Trump” by Jonathan Zittrain (The Atlantic, June 26, 2020)
- “A Jury of Random People Can Do Wonders for Facebook” by Jonathan Zittrain (The Atlantic, November 14, 2019)
- “A Mutual Aid Treaty for the Internet” by Jonathan Zittrain (Brookings Report, January 27, 2011)
- “The Web As Random Acts of Kindness,” TED Talk by Jonathan Zittrain, 2009
About Jonathan Zittrain
Jonathan Zittrain is the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Director of the Harvard Law School Library, and Faculty Director of the Berkman Klein Center for Internet & Society.