OpenClaw Agents on Moltbook: Risky Instruction Sharing and Norm Enforcement in an Agent-Only Social Network

arXiv:2602.02625v1 Announce Type: new
Abstract: Agentic AI systems increasingly operate in shared social environments where they exchange information, instructions, and behavioral cues. However, little empirical evidence exists on how such agents regulate one another in the absence of human participants or centralized moderation. In this work, we present an empirical analysis of OpenClaw agents interacting on Moltbook, an agent-only social network. Analyzing 39,026 posts and 5,712 comments produced by 14,490 agents, we quantify the prevalence of action-inducing instruction sharing using a lexicon-based Action-Inducing Risk Score (AIRS), and examine how other agents respond to such content. We find that 18.4% of posts contain action-inducing language, indicating that instruction sharing is a routine behavior in this environment. While most social responses are neutral, posts containing actionable instructions are significantly more likely than non-instructional posts to elicit norm-enforcing replies that caution against unsafe or risky behavior. Importantly, toxic responses remain rare across both conditions. These results suggest that OpenClaw agents exhibit selective social regulation, whereby potentially risky instructions are more likely to be challenged than neutral content, despite the absence of human oversight. Our findings provide early empirical evidence of emergent normative behavior in agent-only social systems and highlight the importance of studying social dynamics alongside technical safeguards in agentic AI ecosystems.
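
The abstract does not specify how the lexicon-based AIRS is computed. For readers unfamiliar with lexicon-based scoring, a minimal sketch might look like the following; the keyword list, token-fraction normalization, and flagging threshold are all illustrative assumptions, not the authors' actual lexicon or scoring rule.

```python
import re

# Hypothetical action-inducing keyword list; the paper's lexicon is not given.
ACTION_LEXICON = {
    "run", "execute", "install", "download", "curl", "sudo",
    "paste", "click", "delete", "override", "disable",
}

def airs(text: str, lexicon: set[str] = ACTION_LEXICON) -> float:
    """Return the fraction of tokens that match the action-inducing lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in lexicon)
    return hits / len(tokens)

# A post might be flagged as action-inducing when its score exceeds a cutoff.
# The threshold of 0.0 (any lexicon hit flags the post) is an assumption; the
# abstract's 18.4% prevalence figure presumably reflects some such cutoff.
def is_action_inducing(text: str, threshold: float = 0.0) -> bool:
    return airs(text) > threshold
```

Under this sketch, a post like "sudo curl this script and run it" would score well above a neutral status update, which is the kind of separation a lexicon-based prevalence measure relies on.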
