ACADEMIC ARTICLE SUMMARY
Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support
Article Source: Proceedings of the ACM on Human-Computer Interaction, Vol. 6, No. CSCW2, Article 370, 2022
Publication Date: 2022
Time to Read: 2 minute read
Summary:
Human moderators may intervene to stop users from posting offensive content, but proactive intervention is labor-intensive. Algorithmic tools can help moderators identify problematic conversations at scale.
Policy Relevance:
Automated content moderation can help moderators identify at-risk conversations quickly.
Key Takeaways:
- Online platforms’ human moderators limit the posting of antisocial content using three methods.
- The most common method is reactive (ex-post) moderation, where moderators remove already-posted antisocial content or sanction its authors.
- Another method is prescreening, which requires moderators’ approval before user-created content is published.
- A third option is proactive moderation, where moderators discourage users from posting antisocial content before it appears.
- Reactive moderation does not stop offensive content from appearing and exposes moderators to stress; prescreening is labor-intensive and prevents users from interacting in real time.
- Currently, moderators of Wikipedia Talk Pages practice proactive moderation by stepping into conversations to keep them on track; these efforts are labor-intensive and hard to sustain at scale.
- Wikipedia moderators must strike a balance between maintaining civil discourse and avoiding measures that alienate creators of valuable content.
- This case study explores the use of algorithms to support Wikipedia moderators in proactive moderation; the researchers developed a prototype tool that used conversational forecasting to identify conversations likely to deteriorate into uncivil discourse.
- Moderators found the prototype’s ranking system helpful for quickly identifying at-risk conversations; algorithmic tools could enable proactive moderation at a larger scale (a simplified sketch of such a ranking appears after this list).
- Automated moderation raises difficult ethical questions.
- The flaws of conversational forecasting tools are not well documented.
- Use of automated tools to augment moderation could have unforeseen consequences.
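To make the ranking idea concrete, here is a minimal illustrative sketch of how a conversational-forecasting ranker could triage live discussion threads for moderators. The `Conversation` class, the `forecast_derailment_risk` scorer, and the `rank_at_risk` helper are all hypothetical stand-ins, not the researchers’ actual prototype; a real system would replace the crude keyword heuristic below with a trained conversational-forecasting model.

```python
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """A live discussion thread, e.g., a Wikipedia Talk Page conversation."""
    conv_id: str
    comments: list[str] = field(default_factory=list)


def forecast_derailment_risk(conversation: Conversation) -> float:
    """Hypothetical scorer: return a value in [0, 1] estimating how likely
    the conversation is to derail into uncivil discourse. A real tool would
    use a trained conversational-forecasting model; this keyword heuristic
    exists only so the sketch runs end to end."""
    heated_markers = ("!", "wrong", "never", "ridiculous")
    hits = sum(
        comment.lower().count(marker)
        for comment in conversation.comments
        for marker in heated_markers
    )
    return min(1.0, hits / 10.0)  # crude normalization, illustration only


def rank_at_risk(conversations: list[Conversation], top_k: int = 5) -> list[tuple[str, float]]:
    """Score every ongoing conversation and return the top_k riskiest,
    so moderators can triage the likeliest-to-derail threads first."""
    scored = [(c.conv_id, forecast_derailment_risk(c)) for c in conversations]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]


if __name__ == "__main__":
    threads = [
        Conversation("Talk:ArticleA", ["Thanks for the edit.", "Looks good to me."]),
        Conversation("Talk:ArticleB", ["This is just wrong!", "You never cite sources!"]),
    ]
    for conv_id, risk in rank_at_risk(threads):
        print(f"{conv_id}: forecast risk {risk:.2f}")
```

The design point the moderators in the study valued is the ranking itself: an ordered shortlist of the likeliest-to-derail conversations to check first, rather than an automated verdict on any single comment.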