Tight Inapproximability for Welfare-Maximizing Autobidding Equilibria

arXiv:2602.09110v1 Announce Type: new
Abstract: We examine the complexity of computing welfare- and revenue-maximizing equilibria in autobidding second-price auctions subject to return-on-spend (RoS) constraints. We show that computing an autobidding equilibrium that approximates the welfare-optimal one within a factor of $2 - \epsilon$ is NP-hard for any constant $\epsilon > 0$. Moreover, deciding whether there exists an autobidding equilibrium that attains a $1/2 + \epsilon$ fraction of the optimal welfare (computed without any equilibrium constraint) is NP-hard for any constant $\epsilon > 0$. This hardness result is tight in view of the fact that the price of anarchy (PoA) is at most $2$, and shows that deciding whether a non-trivial autobidding equilibrium exists, one that is even marginally better than the worst-case guarantee, is intractable. For revenue, we establish a stronger logarithmic inapproximability; under the projection games conjecture, our reduction rules out even a polynomial approximation factor. These results significantly strengthen the APX-hardness of Li and Tang (AAAI '24). Furthermore, we refine our reduction in the presence of ML advice concerning the buyers' valuations, revealing again a close connection between the inapproximability threshold and PoA bounds. Finally, we examine relaxed notions of equilibrium attained by simple learning algorithms, establishing constant inapproximability for both revenue and welfare.
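To make the tightness claim concrete, the following LaTeX sketch spells out the standard RoS constraint and why a PoA bound of $2$ forces every equilibrium to attain at least half of the optimal welfare. The notation ($v_{ij}$, $p_{ij}$, $W_i$, $\mathrm{Wel}$) is illustrative and not necessarily the paper's.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% RoS constraint (standard form, assumed here): bidder $i$'s total value
% over the set $W_i$ of auctions it wins must cover its total payment.
\[
  \sum_{j \in W_i} v_{ij} \;\ge\; \sum_{j \in W_i} p_{ij}.
\]
% PoA $\le 2$ means the optimal welfare exceeds the welfare of the worst
% equilibrium by at most a factor of $2$, so every equilibrium $E$ satisfies
\[
  \mathrm{Wel}(E) \;\ge\; \frac{\mathrm{OPT}}{\mathrm{PoA}} \;\ge\; \frac{1}{2}\,\mathrm{OPT}.
\]
% Hence a $1/2$ fraction of $\mathrm{OPT}$ is guaranteed unconditionally,
% and the first nontrivial question is whether some equilibrium attains a
% $1/2 + \epsilon$ fraction; the abstract shows this is already NP-hard.
\end{document}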
