ACADEMIC ARTICLE SUMMARY

Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support

Article Source: Proceedings of the ACM on Human-Computer Interaction, Vol. 6, No. CSCW2, Article 370, 2022
Written By:

Charlotte Schluger

Cristian Danescu-Niculescu-Mizil

Jonathan P. Chang

ARTICLE SUMMARY

Human moderators can intervene proactively to stop users from posting offensive content, but such intervention is labor-intensive. Algorithmic tools could help moderators identify problematic conversations at scale.

POLICY RELEVANCE

Automated content moderation can help moderators identify at-risk conversations quickly.

KEY TAKEAWAYS

  • Online platforms’ human moderators limit the posting of antisocial content using three methods.
    • The most common method is reactive (ex-post) moderation, where moderators remove already-posted antisocial content or sanction authors.
    • Another moderation method is prescreening, that is, requiring moderators’ approval before user-created content is published.
    • A third option is proactive moderation, where moderators discourage the posting of antisocial content before it appears.
  • Reactive moderation does not stop offensive content from appearing and exposes moderators to stress; prescreening is labor-intensive and prevents users from interacting in real time.
  • Currently, moderators of Wikipedia Talk Pages intervene proactively in conversations to keep them on track; these efforts are labor-intensive and hard to sustain at a large scale.
  • Wikipedia moderators must strike a balance between maintaining civil discourse and avoiding measures that alienate creators of valuable content.
  • This case study explores the use of algorithms to support Wikipedia moderators in proactive moderation; the researchers developed a prototype tool that uses conversational forecasting to identify conversations likely to deteriorate into uncivil discourse.
  • Moderators found the prototype’s ranking system helpful in quickly identifying at-risk conversations; such algorithmic tools could enable proactive moderation on a larger scale (see the illustrative sketch after this list).
  • Automated moderation raises difficult ethical questions.
    • The flaws of conversational forecasting tools are not well documented.
    • Use of automated tools to augment moderation could have unforeseen consequences.
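
To make the ranking idea concrete, the sketch below shows how a moderation dashboard might order ongoing conversations by a forecasted risk of derailment so that the riskiest discussions surface first. This is a minimal illustration written for this summary: the Conversation structure, the forecast_derailment_risk placeholder (a toy keyword heuristic), and the 0.3 threshold are assumptions, not the study's prototype, which relies on a trained conversational forecasting model for Wikipedia Talk Page discussions.

    # Illustrative sketch only: ranks ongoing conversations by a hypothetical
    # forecasted risk of derailing into uncivil discourse, so moderators can
    # triage the riskiest discussions first.
    from dataclasses import dataclass, field


    @dataclass
    class Conversation:
        """A simplified stand-in for an ongoing Talk Page discussion."""
        conversation_id: str
        comments: list[str] = field(default_factory=list)


    def forecast_derailment_risk(conversation: Conversation) -> float:
        """Placeholder for a conversational forecasting model.

        A real system would score the conversation's trajectory with a trained
        model; this toy version counts a few hostile keywords.
        """
        hostile_markers = ("idiot", "stupid", "shut up", "vandal")
        text = " ".join(conversation.comments).lower()
        hits = sum(marker in text for marker in hostile_markers)
        return min(1.0, hits / 3)


    def rank_at_risk_conversations(conversations, threshold=0.3):
        """Return (conversation, risk) pairs above the threshold, riskiest first."""
        scored = [(c, forecast_derailment_risk(c)) for c in conversations]
        flagged = [(c, s) for c, s in scored if s >= threshold]
        return sorted(flagged, key=lambda pair: pair[1], reverse=True)


    if __name__ == "__main__":
        talk_pages = [
            Conversation("article-A", ["Thanks for the sources, looks good."]),
            Conversation("article-B", ["Only an idiot would revert this.",
                                       "Shut up and read the policy."]),
        ]
        for conversation, risk in rank_at_risk_conversations(talk_pages):
            print(f"{conversation.conversation_id}: forecasted risk {risk:.2f}")

The ranking step mirrors the workflow that moderators found helpful in the study: surfacing a short, ordered list of at-risk conversations rather than expecting moderators to monitor every discussion.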


About Karen Levy

Karen Levy is an Associate Professor in the Department of Information Science at Cornell University, associate member of the faculty of Cornell Law School, and field faculty in Sociology, Science and Technology Studies, Media Studies, and Data Science. Professor Levy researches the legal, organizational, social, and ethical aspects of data-intensive technologies. Her work explores what happens when we use digital technologies to enforce rules and make decisions about people, particularly in contexts marked by conditions of inequality.