When Error Metrics Contradict: Clarifying RMSE and MAE in Machine Learning Evaluation
Root mean squared error (RMSE) and mean absolute error (MAE) are among the most widely used performance metrics in machine learning and scientific modeling. Although their mathematical relationship is well established, misunderstandings and misapplications of these metrics continue to appear in the literature. This technical note revisits the fundamental bounds relating RMSE and MAE and identifies a systematic error in a recently published paper in Artificial Intelligence Review, in which reported RMSE values are numerically smaller than the corresponding MAE values, a relationship that is mathematically impossible. Notably, these incorrect RMSE and MAE values appear alongside other cited results within the same study that correctly satisfy the inequality RMSE ≥ MAE. In addition, supplementary experiments using two standard machine learning models, Random Forest and XGBoost, demonstrate that comparable or superior performance can be achieved on several of the same datasets used in that paper without resorting to highly complex optimization frameworks. Collectively, these findings underscore the importance of verifying the correctness of basic performance metrics and of contextualizing claimed performance gains through transparent baseline comparisons in machine learning evaluation.
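As a minimal illustration of the bound discussed in this note (not taken from the study under discussion), the following Python sketch computes RMSE and MAE on synthetic residuals and checks the inequality MAE ≤ RMSE ≤ √n · MAE, which follows from Jensen's inequality applied to the squared-error function; the data, seed, and helper function names are illustrative assumptions.

```python
# Illustrative sanity check: for any residual vector, MAE <= RMSE <= sqrt(n) * MAE,
# so reported metrics that violate the lower bound cannot both be correct.
import numpy as np


def rmse(y_true, y_pred):
    """Root mean squared error."""
    errors = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(errors ** 2)))


def mae(y_true, y_pred):
    """Mean absolute error."""
    errors = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.abs(errors)))


if __name__ == "__main__":
    # Synthetic predictions (assumed for illustration only).
    rng = np.random.default_rng(0)
    y_true = rng.normal(size=1000)
    y_pred = y_true + rng.normal(scale=0.3, size=1000)

    r, m = rmse(y_true, y_pred), mae(y_true, y_pred)

    # The bounds must hold for any residual vector of length n.
    assert m <= r <= np.sqrt(len(y_true)) * m
    print(f"RMSE = {r:.4f}, MAE = {m:.4f}")
```

A check of this kind could be run against any reported metric pair; if RMSE is smaller than MAE for the same predictions, at least one of the two reported values is incorrect.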