Learning Structural Hardness for Combinatorial Auctions: Instance-Dependent Algorithm Selection via Graph Neural Networks

The Winner Determination Problem (WDP) in combinatorial auctions is NP-hard, and no existing method reliably predicts which instances will defeat fast greedy heuristics. The ML-for-combinatorial-optimization community has focused on learning to *replace* solvers, yet recent evidence shows that graph neural networks (GNNs) rarely outperform well-tuned classical methods on standard benchmarks. We pursue a different objective: learning to predict *when* a given instance is hard for greedy allocation, enabling instance-dependent algorithm selection. We design a 20-dimensional structural feature vector and train a lightweight MLP hardness classifier that predicts the greedy optimality gap with mean absolute error 0.033, Pearson correlation 0.937, and binary classification accuracy 94.7% across three random seeds. For instances identified as hard — those exhibiting “whale-fish” trap structure where greedy provably fails — we deploy a heterogeneous GNN specialist that achieves ≈0% optimality gap on all six adversarial configurations tested (vs. 3.75–59.24% for greedy). A hybrid allocator combining the hardness classifier with GNN and greedy solvers achieves a 0.51% overall gap on mixed distributions. Our honest evaluation on CATS benchmarks confirms that GNNs do not outperform Gurobi (0.45–0.71 vs. 0.20 gap), motivating the algorithm-selection framing. Learning *when* to deploy expensive solvers is more tractable than learning to replace them.
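The hybrid allocator described above can be sketched as a simple routing rule: a hardness classifier predicts the greedy optimality gap from structural features, and the expensive GNN specialist is invoked only when that prediction exceeds a threshold. All names and the threshold value below are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of the hybrid allocator's routing logic.
# predict_gap, greedy_allocate, gnn_allocate, and the 0.05 threshold
# are placeholder assumptions for illustration only.

def hybrid_allocate(instance, predict_gap, greedy_allocate, gnn_allocate,
                    hardness_threshold=0.05):
    """Route a WDP instance to either greedy or the GNN specialist."""
    # e.g. an MLP applied to a 20-dimensional structural feature vector
    predicted_gap = predict_gap(instance)
    if predicted_gap >= hardness_threshold:
        # Predicted hard (e.g. "whale-fish" trap structure): use the specialist.
        return gnn_allocate(instance)
    # Predicted easy: the fast greedy heuristic suffices.
    return greedy_allocate(instance)
```

The key design choice is that the classifier only has to estimate a scalar gap, which is far easier to learn than the allocation itself.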
