Life Outside the Bubble

By Matthew Gentzkow

Posted on April 17, 2020



Illustration: People chatting on mobile devices

 

Digital media have been accused of causing many social ills. One of the most serious charges is deepening political polarization.

 

Early commentators like Cass Sunstein and Eli Pariser described how a world with a vast selection of content and algorithmically tailored filters could trap individuals in bubbles of like-minded information, where they would hear their own views and prejudices echoed endlessly back and rarely, if ever, encounter a conflicting view [1].

 

Examples such as the “Blue Feed, Red Feed” graphic produced by The Wall Street Journal showed how close these predictions might be to reality. Although recent evidence suggests that the worst fears are likely overstated [2], it also shows that political divisions are indeed deepening, and that content accessed through digital media can be an important contributor [3].

 

The solution to this problem might seem obvious: change the algorithms to break the filter bubble. In the wake of the 2016 election, a raft of startups formed to offer tools to diversify people’s news diets, including Read Across the Aisle, allsides.com, echochamber.club, and Escape Your Bubble. Many commentators have called for platforms like Facebook and Twitter to change their algorithms to give less priority to like-minded content [4]. The “contact hypothesis” - the idea that simply exposing one group to another can reduce divisions and hostility - is supported by a large body of literature.

 

As appealing as breaking our bubbles may sound, there are reasons to question whether doing so will be an effective solution to polarization. The most obvious problem is that people read like-minded content for a reason. If Facebook or Twitter starts filling people’s feeds with content from the other side, users may simply ignore it. Demand for like-minded news may be driven by a bias toward confirmation, a genuine belief that like-minded sources are more trustworthy, or just greater interest in the stories those sources choose to highlight. Whatever combination of these factors is at play, it means that exposing people to diverse content is very different from getting them to actively and thoughtfully engage with it. Perhaps for this reason, none of the startups mentioned above seems to have gained large-scale traction, and at least two of the four appear to be defunct.

 

Even more troubling, prior work suggests that exposure to cross-ideological experiences can sometimes produce backfire effects that deepen divisions [5]. It is not hard to imagine that forcing a committed liberal to sit and listen to Donald Trump speeches for an hour might increase rather than decrease the intensity of the liberal’s partisan ire. Handing a committed conservative a packet of liberal social-media memes might well have a similar effect.

 

What happens when digital media users are pushed outside their bubbles? High-quality evidence remains limited, but we are beginning to accumulate valuable data points. Here, I review two of the most compelling recent studies.

 

Red Bot, Blue Bot on Twitter

 

Among the largest published studies on cross-partisan social-media exposure to date is “Exposure to Opposing Views can Increase Political Polarization: Evidence from a Large-Scale Field Experiment on Social Media” by Christopher Bail and co-authors (PNAS 2018).

 

The authors began by building two customized Twitter bots, one conservative and one liberal. The conservative bot was programmed to retweet a random selection of content from accounts the authors had identified as among the most influential conservative “opinion leaders” on Twitter (including politicians, organizations, and commentators). The liberal bot was programmed to do the same for content from liberal accounts. The bots retweeted 24 posts per day.
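
To make the mechanism concrete, here is a minimal sketch of the kind of daily retweet logic described above. It is not the authors’ code: the account handles, post IDs, and the two helper functions are illustrative placeholders standing in for real Twitter API calls.

```python
import random

POSTS_PER_DAY = 24  # each bot in the study retweeted 24 posts per day


def fetch_recent_posts(account: str) -> list[str]:
    """Stand-in for a Twitter API call returning recent post IDs for one account."""
    return [f"{account}/post{i}" for i in range(10)]  # simulated post IDs


def retweet(post_id: str) -> None:
    """Stand-in for a Twitter API call that retweets a post by ID."""
    print(f"retweeting {post_id}")


def run_daily_cycle(opinion_leaders: list[str]) -> None:
    # Pool recent content from the designated opinion leaders on one side...
    pool = [post for account in opinion_leaders for post in fetch_recent_posts(account)]
    # ...and retweet a random selection of it, once per day.
    for post_id in random.sample(pool, min(POSTS_PER_DAY, len(pool))):
        retweet(post_id)


# Example: a conservative bot would be configured with conservative opinion leaders;
# the handles below are placeholders.
run_daily_cycle(["example_conservative_leader_1", "example_conservative_leader_2"])
```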

 


 

Next, the authors recruited a sample of self-identified Republicans and Democrats who reported using Twitter at least three times per week. They surveyed these subjects about their views on a set of political issues, then randomly assigned them to either a control group or a treatment group. Treatment subjects were paid to follow the bot opposite to their ideology (the conservative bot for Democrats, the liberal bot for Republicans) and given additional incentives to pay close attention to the content of that bot’s tweets. They also completed a series of follow-up surveys that measured compliance as well as changes in an index of political-issue views. Control users completed the follow-up surveys but were not asked to change anything about their social-media behavior.

 

The results are not encouraging for supporters of bubble-bursting interventions. Not only did following the opposite side’s Twitter bot fail to significantly reduce the polarization of political views, it appeared to produce a backlash effect. This effect was small but highly significant for Republicans, with those exposed to the liberal bot shifting their views on the 7-item issue scale to be between 0.1 and 0.5 points more conservative, depending on the estimation method. For Democrats, following the conservative bot produced a small and insignificant shift of views in the liberal direction.
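
To make the magnitudes concrete, the toy calculation below shows the kind of intent-to-treat contrast that underlies such estimates: the difference between treated and control subjects in how much their issue-scale scores moved. All numbers are simulated placeholders, and the range the study reports (roughly 0.1 to 0.5 points) reflects the different estimation methods noted above, not this simple contrast.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 400
treated = rng.integers(0, 2, size=n)                 # 1 = assigned to follow the opposing bot
pre = rng.normal(4.0, 1.0, size=n)                   # simulated baseline score on the 1-7 issue scale
post = pre + 0.3 * treated + rng.normal(0, 0.5, n)   # simulate a +0.3-point backlash among the treated

change = post - pre
itt = change[treated == 1].mean() - change[treated == 0].mean()
print(f"intent-to-treat estimate: {itt:.2f} points on the 7-point scale")
```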

 

Several points are important to note in interpreting this study. First, the sample consisted of heavy Twitter users with clear party attachments - hardly representative of US voters. It may be that backlash effects are especially likely among such engaged and committed partisans, and that effects would be different among those who are more moderate or who use social media less. That said, highly engaged partisans are the people we would worry most about being affected by filter bubbles, so if cross-ideological content does not work for them it is unlikely to be a solution to the overall problem. Second, the partisan content assigned in the experiment may have been relatively extreme. More moderate content designed to appeal to both sides might have had a different effect. Finally, the outcome measure is an index of fairly broad political views such as “Government is almost always wasteful and inefficient” and “The best way to ensure peace is through military strength.” Cross-ideological content might reduce some kinds of polarization even if it does not moderate these broad ideological views.

 

Liking Your Enemy on Facebook

 

A more recent study in a similar vein is “Social Media, News Consumption, and Polarization: Evidence from a Field Experiment,” a new working paper by Ro’ee Levy.

 

The author recruited a large sample of US Facebook users via Facebook ads. After completing a baseline survey, subjects were randomly assigned to a liberal treatment group, a conservative treatment group, or a control group. The liberal treatment group was asked to “like” four liberal news outlets (e.g., MSNBC), an action that would lead more content from these outlets to appear in their Facebook feeds. The conservative treatment group was asked to “like” four conservative news outlets (e.g., Fox News). Neither group was given any incentive to follow through with this suggestion, but roughly half of each group complied. The design was stratified so that both liberal and conservative subjects were included in both the liberal and conservative treatments – in other words, subjects could be treated with content either opposed to or consistent with their own ideology.
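
A minimal sketch of stratified random assignment under this kind of design follows; the arm names, group shares, and subject records are illustrative assumptions, not details taken from the paper.

```python
import random

ARMS = ["liberal_treatment", "conservative_treatment", "control"]


def assign(subjects):
    """subjects: list of (subject_id, ideology) tuples; returns {subject_id: arm}.

    Within each ideology stratum, subjects are spread across all three arms,
    so both liberals and conservatives end up in both treatments.
    """
    assignment = {}
    for ideology in {ideo for _, ideo in subjects}:
        stratum = [sid for sid, ideo in subjects if ideo == ideology]
        random.shuffle(stratum)
        for i, sid in enumerate(stratum):
            assignment[sid] = ARMS[i % len(ARMS)]  # round-robin over the shuffled stratum
    return assignment


example = [("u1", "liberal"), ("u2", "liberal"), ("u3", "liberal"),
           ("u4", "conservative"), ("u5", "conservative"), ("u6", "conservative")]
print(assign(example))
```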

 

Outcomes were measured in three ways. First, participants had to log in to the baseline survey with their Facebook accounts and give the author permission to observe the outlets they “liked” and the posts they shared. Second, some participants installed a Google Chrome extension that allowed the author to observe the content of their news feeds and the articles they read. Third, participants completed a final survey two months after the end of the experiment. The main question is how being assigned to like cross-ideological outlets affected polarization. In total, 17,629 subjects completed the surveys, 8,080 were offered the Chrome extension, and 1,838 installed it and kept it for the duration of the study.

 

There are several important differences between this study and the Bail et al. Twitter experiment. The population was a broad cross-section of Facebook users, not directly screened on social-media use or political affiliation. The cross-ideological content came from large news outlets rather than Twitter opinion leaders and may therefore have been more moderate or of wider appeal. Finally, in addition to measuring polarization of issue views, the author also measured affective polarization – the extent to which respondents felt warmly or coldly toward those on the opposite side.

 

The first set of results shows that the treatment indeed changed the mix of news that subjects saw in their news feeds and also the mix of news they consumed. Those in the pro-attitudinal treatment group (i.e., assigned to like news outlets aligned with their ideology) saw 67 additional posts from the assigned outlets in their feeds. Those in the counter-attitudinal treatment group (i.e., assigned to like news outlets of the opposite ideology) saw 31 additional posts. The counter-attitudinal group was induced to make between 1 and 2 additional visits to the assigned outlets on average over the course of the study. The treatment also produced a detectable change in the composition of posts subjects shared.

 

The second set of results shows the impact on polarization. Consistent with the Bail et al. study, there is no evidence that injecting content from the other side into subjects’ feeds reduced the polarization of their issue views. There is also no evidence of backlash effects; the result is a precisely estimated zero.

 

Most strikingly, exposure to the counter-attitudinal treatment significantly reduced affective polarization relative to the pro-attitudinal treatment. Subjects felt relatively less “cold” toward the other party and reported that they found it easier to see the other side’s perspective. The magnitude of these effects is small in absolute terms (a few points on a 100-point “thermometer” scale) but moderately large relative to both the changes over time in affective polarization and the impact of other interventions.

 

Discussion

 

What do we learn from these studies taken together? One finding consistent across the studies is that relatively small interventions can meaningfully change the mix of content people are exposed to. The effects are small as a share of the total content flowing through people’s feeds, but they are sufficient to produce detectable effects on survey outcomes. This supports the view that even modest interventions have the potential to make a significant difference.

 

Both studies suggest that diversifying the content people are exposed to is unlikely to be sufficient to narrow polarization of issue views, and the Bail et al. study provides a significant note of caution that poorly conceived interventions may produce the opposite of the intended effect. While the study does not have enough power to unpack exactly what caused the backlash, one might infer that showing highly partisan opinion content from one side to strongly engaged partisans on the other side may be an especially risky approach.

 

The Levy findings on affective polarization, on the other hand, provide the most unambiguous piece of good news for the value of escaping bubbles. One possibility is that the interventions in both studies had this effect and it would have been detectable in the Bail et al. study had the authors measured affective polarization. Another possibility is that the news outlets in the Levy study were particularly conducive to helping those on each side see the others’ perspective. Either way, it is encouraging that a relatively low-cost and scalable intervention could produce a meaningful reduction in hostility.

 

Illustration: People using a magnet to attract positive feedback

References:

1. See Republic.com by Sunstein and The Filter Bubble by Pariser.
2. Gentzkow and Shapiro (2011); Allcott and Gentzkow (2017); Boxell et al. (2017).
3. Boxell et al. (2017); Allcott et al. (forthcoming).
4. See, for example, “How to Fix Facebook? We Asked 9 Experts” (The New York Times, October 31, 2017).
5. See references cited in Bail et al. (2018).

 

 

The preceding is republished on TAP with permission by its author, Professor Matthew Gentzkow, and by the Toulouse Network for Information Technology (TNIT). “Life Outside the Bubble” was originally published in TNIT’s April 2020 newsletter.

