Draft-Refine-Optimize: Self-Evolved Learning for Natural Language to MongoDB Query Generation

arXiv:2604.13045v1 Announce Type: new
Abstract: Natural Language to MongoDB Query Language (NL2MQL) is essential for democratizing access to modern document-centric databases. Unlike Text-to-SQL, NL2MQL faces unique challenges from MQL’s procedural aggregation pipelines, deeply nested schemas, and ambiguous value grounding. Existing approaches use static prompting or one-shot refinement, which inadequately model these complex contexts and fail to systematically leverage execution feedback for persistent improvement. We propose EvoMQL, a self-evolved framework that unifies evidence-grounded context construction with execution-driven learning through iterative Draft-Refine-Optimize (DRO) cycles. Each cycle uses draft queries to trigger query-aware retrieval, dynamically building compact evidence contexts that resolve schema ambiguities and ground nested paths to concrete values. The model then undergoes online policy optimization with execution-based rewards and curriculum scheduling, with refined models feeding back into subsequent cycles for progressive evolution. Overall, EvoMQL achieves state-of-the-art execution accuracy of 76.6% on the EAI in-distribution benchmark and 83.1% on the TEND out-of-distribution benchmark, outperforming the strongest open-source baselines by up to 9.5% and 5.2%, respectively. With only 3B activated parameters, EvoMQL's closed-loop paradigm enables scalable, continuous improvement of NL2MQL systems in production.
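
The DRO cycle described above (draft → query-aware retrieval → refine → execution-based reward) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all function names, the toy schema, and the in-memory "collection" are assumptions introduced here; a real system would prompt an LLM for the draft/refine stages and execute against MongoDB.

```python
# Hypothetical sketch of one Draft-Refine-Optimize (DRO) cycle as described
# in the abstract. Everything here (function names, toy schema, documents)
# is an illustrative assumption, not the paper's actual implementation.

def draft_query(question, schema):
    # Draft stage: a real system would prompt an LLM; here we emit a fixed
    # aggregation-pipeline skeleton for the toy question.
    return [{"$match": {"status": "activ"}}]  # deliberately ungrounded value

def retrieve_evidence(draft, schema):
    # Query-aware retrieval: keep only the schema fields the draft actually
    # references, forming a compact evidence context.
    used = {field for stage in draft for field in stage.get("$match", {})}
    return {field: values for field, values in schema.items() if field in used}

def refine_query(draft, evidence):
    # Refine stage: ground each matched value to the closest concrete value
    # found in the evidence context (toy prefix-matching heuristic).
    refined = [dict(stage) for stage in draft]
    for stage in refined:
        match = stage.get("$match", {})
        for field, value in list(match.items()):
            for candidate in evidence.get(field, []):
                if candidate.startswith(value):
                    match[field] = candidate
    return refined

def execute(query, collection):
    # Toy executor for a single $match stage over in-memory documents;
    # its output supplies the execution-based reward signal.
    match = query[0]["$match"]
    return [d for d in collection if all(d.get(k) == v for k, v in match.items())]

schema = {"status": ["active", "archived"]}
collection = [{"status": "active"}, {"status": "archived"}]

draft = draft_query("List active users", schema)
evidence = retrieve_evidence(draft, schema)
refined = refine_query(draft, evidence)
result = execute(refined, collection)
reward = 1.0 if result else 0.0  # reward used for online policy optimization
```

In the paper's full loop, this reward would drive policy optimization (with curriculum scheduling), and the updated model would seed the next DRO cycle; the sketch only shows the data flow of a single cycle.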
