Pacing Opinion Polarization via Graph Reinforcement Learning

arXiv:2602.23390v1 Announce Type: new
Abstract: Opinion polarization in online social networks poses serious risks to social cohesion and democratic processes. Recent studies formulate polarization moderation as algorithmic intervention problems under opinion dynamics models, especially the Friedkin–Johnsen (FJ) model. However, most existing methods are tailored to specific linear settings and rely on closed-form steady-state analysis, limiting scalability, flexibility, and applicability to cost-aware, nonlinear, or topology-altering interventions.
We propose PACIFIER, a graph reinforcement learning framework for sequential polarization moderation via network interventions. PACIFIER reformulates the canonical ModerateInternal (MI) and ModerateExpressed (ME) problems as sequential decision-making tasks, enabling adaptive intervention policies without repeated steady-state recomputation. The framework is objective-agnostic and extends naturally to FJ-consistent settings, including budget-aware interventions, continuous internal opinions, biased-assimilation dynamics, and node removal. Extensive experiments on real-world networks demonstrate strong performance and scalability across diverse moderation scenarios.
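For readers unfamiliar with the Friedkin–Johnsen (FJ) model the abstract builds on, here is a minimal sketch of the standard FJ update and its closed-form steady state — the kind of repeated steady-state computation that PACIFIER's sequential formulation avoids. This is an illustration of the textbook model, not the paper's code; the network `W`, innate opinions `s`, and susceptibilities `a` are hypothetical example values.

```python
import numpy as np

def fj_steady_state(W, s, a):
    """Closed-form FJ steady state: z = (I - (I - A) W)^{-1} A s,
    where A = diag(a) holds each node's attachment to its innate opinion."""
    n = len(s)
    A = np.diag(a)
    return np.linalg.solve(np.eye(n) - (np.eye(n) - A) @ W, A @ s)

def fj_iterate(W, s, a, steps=200):
    """Iterative FJ dynamics: x <- A s + (I - A) W x, started from x = s."""
    x = s.copy()
    for _ in range(steps):
        x = a * s + (1 - a) * (W @ x)
    return x

# Tiny 3-node example: two polarized nodes (+1, -1) and a neutral one.
W = np.array([[0.0, 0.5, 0.5],   # row-stochastic influence weights
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
s = np.array([1.0, -1.0, 0.0])   # innate (internal) opinions
a = np.array([0.5, 0.5, 0.5])    # susceptibility to innate opinions

z = fj_steady_state(W, s, a)
x = fj_iterate(W, s, a)
# The iteration converges to the closed-form fixed point.
```

The expressed opinions contract toward the network average (here the polarized pair settles at roughly ±0.4 instead of ±1), which is why closed-form analysis is tractable in the linear FJ setting — and why interventions that change `W` itself (e.g. node removal) force a recomputation at every step under classical approaches.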
