Beyond Accuracy: A Unified Random Matrix Theory Diagnostic Framework for Crash Classification Models

arXiv:2602.19528v1 Announce Type: cross
Abstract: Crash classification models in transportation safety are typically evaluated using accuracy, F1, or AUC, metrics that cannot reveal whether a model is silently overfitting. We introduce a spectral diagnostic framework grounded in Random Matrix Theory (RMT) and Heavy-Tailed Self-Regularization (HTSR) that spans the ML taxonomy: weight matrices for BERT/ALBERT/Qwen2.5, out-of-fold increment matrices for XGBoost/Random Forest, empirical Hessians for Logistic Regression, induced affinity matrices for Decision Trees, and Graph Laplacians for KNN. Evaluating nine model families on two Iowa DOT crash classification tasks (173,512 and 371,062 records respectively), we find that the power-law exponent $\alpha$ provides a structural quality signal: well-regularized models consistently yield $\alpha$ within $[2, 4]$ (mean $2.87 \pm 0.34$), while overfit variants show $\alpha < 2$ or spectral collapse. We observe a strong rank correlation between $\alpha$ and expert agreement (Spearman $\rho = 0.89$, $p < 0.001$), suggesting spectral quality captures model behaviors aligned with expert reasoning. We propose an $\alpha$-based early stopping criterion and a spectral model selection protocol, and validate both against cross-validated F1 baselines. Sparse Lanczos approximations make the framework scalable to large datasets.
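The diagnostic hinges on fitting a power law to the tail of a matrix's eigenvalue spectrum and reading off the exponent $\alpha$. As a minimal sketch of how such an exponent could be estimated (this is not the paper's implementation; in HTSR practice one would compute the eigenvalues of $W^\top W$ and fit with a tool such as the `powerlaw` or `weightwatcher` packages), the standard continuous maximum-likelihood estimator for a tail $p(x) \propto x^{-\alpha}$ above a cutoff `xmin` is:

```python
import math
import random

def powerlaw_alpha(eigenvalues, xmin):
    """Continuous MLE (Clauset et al. style) for the tail exponent alpha
    of p(x) ~ x^{-alpha}, using only eigenvalues at or above xmin."""
    tail = [x for x in eigenvalues if x >= xmin]
    n = len(tail)
    if n == 0:
        raise ValueError("no eigenvalues above xmin")
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Synthetic stand-in for an empirical spectral density: samples drawn
# from a Pareto tail whose true density exponent is alpha = 3
# (inverse-CDF sampling with shape a = alpha - 1 = 2).
random.seed(0)
xmin = 1.0
eigenvalues = [xmin * (1.0 - random.random()) ** (-1.0 / 2.0)
               for _ in range(20000)]

alpha_hat = powerlaw_alpha(eigenvalues, xmin)
print(round(alpha_hat, 2))  # should recover a value near 3
```

Under the paper's criterion, an estimate like this landing inside $[2, 4]$ would be read as a well-regularized spectrum, while values below 2 would flag heavy-tail collapse; choosing `xmin` is itself part of the fitting problem and is handled more carefully in full power-law fitting procedures.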
