Self-Evolving Machine Learning Models via Meta-Learning and Neural Architecture Search
Despite recent advances in artificial intelligence, static deep learning models still struggle in non-stationary real-world environments because of concept drift. This paper presents a framework for Self-Evolving Machine Learning Models (SE-MLM) that combines the rapid adaptability of meta-learning with the structural flexibility of Neural Architecture Search (NAS). Unlike train-once approaches that require manual retraining after drift, our framework enables the model to update itself through a bi-level optimization process: an inner loop adapts weights using meta-gradients, and an outer loop refines the architecture through a continuous relaxation of the search space. Experiments on CIFAR-10, CIFAR-100, and Rotated-MNIST show that SE-MLM recovers up to 98% of baseline performance within minutes of a drift event and consistently outperforms static baselines. We also discuss practical applications in healthcare monitoring and high-frequency trading, along with future directions in “Green AI” and explainability.
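The bi-level optimization described above can be illustrated with a minimal toy sketch: an inner loop that adapts operation weights on "training" data, and an outer loop that refines continuous architecture parameters (a DARTS-style softmax relaxation over candidate operations) on held-out "validation" data. All names, the analytic gradients, and the two-operation toy model are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(a):
    # Continuous relaxation: architecture logits -> mixing probabilities
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy mixed operation: pred(x) = sum_k softmax(alpha)_k * w_k * x,
# where each candidate "operation" k is a scalar linear map w_k * x.
def predict(x, w, alpha):
    return (softmax(alpha) * w * x).sum()

def loss(x, y, w, alpha):
    return (predict(x, w, alpha) - y) ** 2

# Analytic gradients for the toy quadratic loss (hand-derived for this model)
def grad_w(x, y, w, alpha):
    r = predict(x, w, alpha) - y
    return 2 * r * softmax(alpha) * x

def grad_alpha(x, y, w, alpha):
    p = softmax(alpha)
    r = predict(x, w, alpha) - y
    contrib = w * x
    # Jacobian of softmax folded in: p_k * (contrib_k - E_p[contrib])
    return 2 * r * p * (contrib - (p * contrib).sum())

# Bi-level loop: inner step = weight adaptation (meta-gradient stand-in),
# outer step = architecture refinement on validation data.
w = np.array([0.1, 0.9])       # weights of the two candidate operations
alpha = np.zeros(2)            # architecture logits (uniform mixture at start)
x_tr, y_tr = 1.0, 2.0          # "training" sample (target slope = 2)
x_val, y_val = 1.5, 3.0        # "validation" sample (same underlying slope)
for _ in range(200):
    w = w - 0.1 * grad_w(x_tr, y_tr, w, alpha)                # inner loop
    alpha = alpha - 0.1 * grad_alpha(x_val, y_val, w, alpha)  # outer loop
```

After a simulated drift event, the same two update rules could be re-run from the current `(w, alpha)` rather than from scratch, which is the adaptation behavior the abstract attributes to SE-MLM.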