The Benefit of Collective Intelligence in Community-Based Content Moderation is Limited by Overt Political Signalling

arXiv:2601.22201v1 Announce Type: new
Abstract: Social media platforms face increasing scrutiny over the rapid spread of misinformation. In response, many have adopted community-based content moderation systems, including Community Notes (formerly Birdwatch) on X (formerly Twitter), Footnotes on TikTok, and Facebook’s Community Notes initiative. However, research shows that the current design of these systems can allow political biases to influence both the development of notes and the rating processes, reducing their overall effectiveness. We hypothesize that enabling users to collaborate on writing notes, rather than relying solely on individually authored notes, can enhance their overall quality. To test this idea, we conducted an online experiment in which participants jointly authored notes on political posts. Our results show that teams produce notes that are rated as more helpful than individually written notes. We also find that politically diverse teams perform better when evaluating Republican posts, while group composition does not affect perceived note quality for Democrat posts. However, the advantage of collaboration diminishes when team members are aware of one another’s political affiliations. Taken together, these findings underscore the complexity of community-based content moderation and highlight the importance of understanding group dynamics and political diversity when designing more effective moderation systems.