The End of Human Judgment in the Kill Chain? Relocating Initiative and Interpretation with Agentic AI

arXiv:2604.06300v1 Announce Type: new
Abstract: Large language model-based agents are increasingly being integrated into core battlefield functions, including intelligence analysis, data fusion, and battle management. This paper argues that the very features that make such agents operationally attractive, namely their capacity for initiative and interpretation, their goal-directedness, and their dynamic memory, are the same features that render context-appropriate human judgment and control substantively ineffectual in those parts of the kill chain where agents operate. Drawing on specific use cases, the paper argues that by relocating initiative and interpretation, LLM-based agents displace human decision-making in ways that make their use incompatible with the requirement of human judgment and control central to existing governance frameworks, such as those proposed by the GGE-CCW and REAIM. The paper concludes that a subset of agentic AI applications, particularly those deployed for data fusion and battle management in lethal contexts, cannot be used justifiably on the battlefield under current and foreseeable conditions, and proposes two ways for the international governance community to respond to this challenge.
