What Are Adversaries Doing? Automating Tactics, Techniques, and Procedures Extraction: A Systematic Review
arXiv:2604.02377v1 Announce Type: new
Abstract: Adversaries continuously evolve their tactics, techniques, and procedures (TTPs) to achieve their objectives while evading detection, requiring defenders to continually update their understanding of adversary behavior. Prior research has proposed automatically extracting TTP-related intelligence from unstructured text and mapping it to structured knowledge bases such as MITRE ATT&CK. However, existing work varies widely in extraction objectives, datasets, modeling approaches, and evaluation practices, making the research landscape difficult to assess. The goal of this study is to aid security researchers in understanding the state of the art in extracting attack TTPs from unstructured text by analyzing relevant literature. We systematically analyze 80 peer-reviewed studies across key dimensions: extraction purposes, data sources, dataset construction, modeling approaches, evaluation metrics, and artifact availability. Our analysis reveals several clear trends. Technique-level classification remains the dominant task formulation, while tactic classification and technique searching are underexplored. The field has progressed from rule-based and traditional machine learning methods to transformer-based architectures (e.g., BERT, SecureBERT, RoBERTa), with recent studies exploring LLM-based approaches including prompting, retrieval-augmented generation, and fine-tuning, though adoption remains emergent. Despite these advances, important limitations persist: many studies rely on single-label classification, limited evaluation settings, and narrow datasets, constraining cross-domain generalization. Reproducibility is further hindered by proprietary datasets, limited code releases, and restricted corpora.