Beyond Instrumental and Substitutive Paradigms: Introducing Machine Culture as an Emergent Phenomenon in Large Language Models

arXiv:2601.17096v1 Announce Type: new
Abstract: Recent scholarship typically characterizes Large Language Models (LLMs) through either an \textit{Instrumental Paradigm} (viewing models as reflections of their developers’ culture) or a \textit{Substitutive Paradigm} (viewing models as bilingual proxies that switch cultural frames based on language). This study challenges these anthropomorphic frameworks by proposing \textbf{Machine Culture} as an emergent, distinct phenomenon. We employed a 2 (Model Origin: US vs. China) $\times$ 2 (Prompt Language: English vs. Chinese) factorial design across eight multimodal tasks, uniquely incorporating image generation and interpretation to extend the analysis beyond textual boundaries. Results revealed inconsistencies with both dominant paradigms: model origin did not predict cultural alignment, with US models frequently exhibiting “holistic” traits typically associated with East Asian data. Similarly, prompt language did not trigger stable cultural frame-switching; instead, we observed \textbf{Cultural Reversal}, where English prompts paradoxically elicited higher contextual attention than Chinese prompts. Crucially, we identified a novel phenomenon termed \textbf{Service Persona Camouflage}: Reinforcement Learning from Human Feedback (RLHF) collapsed cultural variance in affective tasks into a hyper-positive, zero-variance “helpful assistant” persona. We conclude that LLMs do not simulate human culture but exhibit an emergent Machine Culture: a probabilistic phenomenon shaped by \textit{superposition} in high-dimensional space and \textit{mode collapse} from safety alignment.
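
To make the 2 $\times$ 2 factorial design concrete, here is a minimal sketch of how the four Model Origin $\times$ Prompt Language cells could be enumerated and summarized. Everything below is a hypothetical illustration, not the paper's code or data: the `contextual_attention_score` helper, its placeholder scores, and the three-task-per-cell shape are all assumptions for demonstration only.

```python
# Hypothetical sketch of the paper's 2 x 2 factorial design
# (Model Origin x Prompt Language). Scores are placeholders; a real run
# would query each LLM on the eight multimodal tasks and rate responses.
from itertools import product
from statistics import mean, pstdev

MODEL_ORIGINS = ["US", "China"]
PROMPT_LANGUAGES = ["English", "Chinese"]

def contextual_attention_score(origin: str, language: str) -> list[float]:
    """Placeholder: per-task contextual-attention ratings for one cell.

    Invented numbers, chosen only to mirror the "Cultural Reversal"
    pattern in which English prompts elicit higher contextual attention.
    """
    fake_scores = {
        ("US", "English"): [0.71, 0.68, 0.74],
        ("US", "Chinese"): [0.55, 0.52, 0.58],
        ("China", "English"): [0.69, 0.73, 0.70],
        ("China", "Chinese"): [0.54, 0.50, 0.57],
    }
    return fake_scores[(origin, language)]

# Summarize each factorial cell; a near-zero sd in an affective task
# would be the signature of the "Service Persona Camouflage" collapse.
for origin, language in product(MODEL_ORIGINS, PROMPT_LANGUAGES):
    scores = contextual_attention_score(origin, language)
    print(f"{origin:6s} x {language:8s}: "
          f"mean={mean(scores):.2f}, sd={pstdev(scores):.2f}")
```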
