Effects of Training Data Quality on Classifier Performance

arXiv:2602.21462v1 Announce Type: cross
Abstract: We describe extensive numerical experiments assessing and quantifying how classifier performance depends on the quality of the training data, a frequently neglected component of the analysis of classifiers.
More specifically, in the scientific context of metagenomic assembly of short DNA reads into “contigs,” we examine the effects of degrading the quality of the training data through multiple mechanisms, for four classifiers: Bayes classifiers, neural nets, partition models, and random forests. We investigate both the individual behavior of the classifiers and the congruence among them. We find breakdown-like behavior common to all four: as degradation increases, they move from being mostly correct to agreeing only because they are wrong in the same way. In the process, a picture of spatial heterogeneity emerges: as the training data move farther from the analysis data, classifier decisions degenerate, the decision boundary becomes less dense, and congruence increases.
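The distinction the abstract draws between genuine correctness and merely coincidental agreement can be made concrete with two simple metrics over predicted labels. The following is a minimal sketch (not code from the paper, and the function names are our own): congruence is the fraction of analysis points on which two classifiers emit the same label, and coincidental agreement is the fraction on which they agree while both being wrong.

```python
def congruence(preds_a, preds_b):
    """Fraction of points on which two classifiers give the same label."""
    assert len(preds_a) == len(preds_b)
    return sum(a == b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def agree_and_wrong(preds_a, preds_b, truth):
    """Fraction of points on which the classifiers agree on an incorrect label."""
    assert len(preds_a) == len(preds_b) == len(truth)
    return sum(a == b != t for a, b, t in zip(preds_a, preds_b, truth)) / len(truth)

# Toy example: high congruence can coexist with shared errors.
a = [0, 1, 1, 0]
b = [0, 1, 0, 0]
t = [0, 0, 1, 0]
print(congruence(a, b))        # 0.75: agree on 3 of 4 points
print(agree_and_wrong(a, b, t))  # 0.25: on 1 of 4 points both agree and are wrong
```

Tracking both quantities as training-data degradation increases would expose the breakdown the abstract describes: congruence rising even as accuracy falls.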
