Internal Emotional Intelligence in AI Systems: An I-Center Framework for Human-Interpretable System States
The field of affective computing has largely focused on enabling artificial intelligence to recognize and respond to human emotions. This has created a fundamental asymmetry: AI interprets the user's state while its own internal state remains a black box, undermining trust and collaboration. Here, we introduce the 'I-Center', a computational framework for artificial introspection that allows an AI system to monitor its own operational processes and articulate its state through an emotionally grounded model. The I-Center translates core performance metrics, such as processing latency, prediction confidence, and input unexpectedness, into a dynamic affective state within a psychological valence-arousal framework. This enables the AI to communicate its operational well-being, from 'content' during optimal function to 'stressed' during performance degradation. The AI's affective state is modulated not only by its internal performance but also by contextual cues from user input, enabling a form of artificial empathy. We demonstrate a functional prototype in which this introspective capability creates a transparent, dynamic communication channel during computation, and we examine the prototype's statistical properties on simulated inputs of several types. This model represents a paradigm shift from AI that merely senses emotion to AI that expresses its operational state emotionally, paving the way toward more intuitive, trustworthy, and collaborative human-AI partnerships in fields ranging from healthcare to autonomous systems.
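To make the metric-to-affect mapping concrete, the following is a minimal sketch of how performance metrics could be projected onto a valence-arousal plane and labeled. All function names, weights, and thresholds here are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AffectiveState:
    valence: float  # -1.0 (negative) .. +1.0 (positive)
    arousal: float  #  0.0 (calm)     ..  1.0 (activated)

def i_center_state(latency_ms: float, confidence: float,
                   unexpectedness: float,
                   latency_budget_ms: float = 100.0) -> AffectiveState:
    """Hypothetical mapping of operational metrics to a valence-arousal point.

    High confidence and low latency push valence up; latency overruns and
    surprising inputs push arousal up. Weights are illustrative only.
    """
    # Normalize latency against a budget, capped at 2x over budget.
    latency_load = min(latency_ms / latency_budget_ms, 2.0) / 2.0  # 0..1
    valence = max(-1.0, min(1.0, confidence - latency_load))
    arousal = max(0.0, min(1.0, 0.5 * latency_load + 0.5 * unexpectedness))
    return AffectiveState(valence, arousal)

def label(state: AffectiveState) -> str:
    # Coarse quadrant labels in the valence-arousal plane.
    if state.valence >= 0.0:
        return "excited" if state.arousal > 0.5 else "content"
    return "stressed" if state.arousal > 0.5 else "fatigued"
```

Under this sketch, fast, confident processing of familiar input lands in the 'content' quadrant, while slow, low-confidence processing of surprising input lands in 'stressed', matching the states the framework is described as communicating.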