CAST: Achieving Stable LLM-based Text Analysis for Data Analytics

arXiv:2602.15861v1 Announce Type: new
Abstract: Text analysis of tabular data relies on two core operations: summarization for corpus-level theme extraction and tagging for row-level labeling. A critical limitation of employing large language models (LLMs) for these tasks is their inability to meet the high standards of output stability demanded by data analytics. To address this challenge, we introduce CAST (Consistency via Algorithmic Prompting and Stable Thinking), a framework that enhances output stability by constraining the model's latent reasoning path. CAST combines (i) Algorithmic Prompting to impose a procedural scaffold over valid reasoning transitions and (ii) Thinking-before-Speaking to enforce explicit intermediate commitments before final generation. To measure progress, we introduce CAST-S and CAST-T, stability metrics for bulleted summarization and tagging, and validate their alignment with human judgments. Experiments across publicly available benchmarks on multiple LLM backbones show that CAST consistently achieves the best stability among all baselines, improving Stability Score by up to 16.2%, while maintaining or improving output quality.
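The abstract does not give the exact definitions of CAST-S or CAST-T, but a row-level tagging stability metric of this kind is commonly formulated as average pairwise agreement between tag sets produced by repeated runs of the same prompt. The sketch below illustrates that general idea with Jaccard similarity; the function names and the choice of Jaccard are assumptions for illustration, not the paper's actual metric.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two tag sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def tagging_stability(runs: list) -> float:
    """Mean pairwise Jaccard agreement across repeated runs of the
    same tagging prompt on the same row (higher = more stable).
    Hypothetical illustration, not the paper's CAST-T definition."""
    pairs = list(combinations(runs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Three repeated LLM runs tagging the same row:
runs = [
    {"billing", "refund"},
    {"billing", "refund"},
    {"billing", "refund", "complaint"},
]
print(round(tagging_stability(runs), 3))  # -> 0.778
```

Here two of the three runs agree perfectly and the third adds one extra tag, so the mean pairwise agreement is (1 + 2/3 + 2/3) / 3 ≈ 0.778; a perfectly stable model would score 1.0.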
