Near-Optimal Parallel Approximate Counting via Sampling

arXiv:2604.01263v1 Announce Type: new
Abstract: The computational equivalence between approximate counting and sampling is well established for polynomial-time algorithms. The most efficient general reduction from counting to sampling is achieved via simulated annealing, where the counting problem is formulated as estimating the ratio $Q = Z(\beta_{\max})/Z(\beta_{\min})$ between partition functions $Z(\beta) = \sum_{x \in \Omega} \exp(\beta H(x))$ of Gibbs distributions $\mu_\beta$ over $\Omega$ with Hamiltonian $H$, given access to a sampling oracle that produces samples from $\mu_\beta$ for $\beta \in [\beta_{\min}, \beta_{\max}]$.
The best bound achieved by known annealing algorithms with relative error $\varepsilon$ is $O(q \log h / \varepsilon^2)$, where $q$ and $h$ are parameters that bound $\ln Q$ and $H$, respectively. However, all known algorithms attaining this near-optimal complexity are inherently sequential, or *adaptive*: the queried parameters $\beta$ depend on previous samples.
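To illustrate the annealing reduction described above (not the paper's algorithm), here is a toy Python sketch: $Q$ is written as a telescoping product $\prod_i Z(\beta_{i+1})/Z(\beta_i)$ over a fixed non-adaptive schedule, and each ratio is estimated as the empirical mean of $\exp((\beta_{i+1}-\beta_i)H(x))$ under samples $x \sim \mu_{\beta_i}$. The state space, Hamiltonian, and exact sampling oracle are all hypothetical choices made so that $Z(\beta)$ has a closed form for checking.

```python
import math
import random

# Toy setting (assumption for illustration): Omega = {0,1}^n with
# H(x) = number of ones, so Z(beta) = (1 + e^beta)^n in closed form.
n = 8

def H(x):
    return sum(x)

def Z(beta):
    return (1.0 + math.exp(beta)) ** n

def sample(beta, rng):
    # Exact sampling oracle for this product distribution: each coordinate
    # is independently 1 with probability e^beta / (1 + e^beta).
    p = math.exp(beta) / (1.0 + math.exp(beta))
    return [1 if rng.random() < p else 0 for _ in range(n)]

def anneal_estimate(betas, samples_per_step, rng):
    # Telescoping product: Q = prod_i Z(beta_{i+1}) / Z(beta_i), where each
    # ratio equals E_{x ~ mu_{beta_i}}[exp((beta_{i+1} - beta_i) H(x))]
    # and is replaced by an empirical mean over oracle samples.
    est = 1.0
    for b, b_next in zip(betas, betas[1:]):
        mean = sum(
            math.exp((b_next - b) * H(sample(b, rng)))
            for _ in range(samples_per_step)
        ) / samples_per_step
        est *= mean
    return est

rng = random.Random(0)
betas = [i * 0.1 for i in range(11)]  # fixed non-adaptive schedule, 0 to 1
Q_true = Z(1.0) / Z(0.0)
Q_hat = anneal_estimate(betas, 2000, rng)
```

Because the per-step ratios are kept close to 1 by the small step size, each empirical mean has low relative variance, and the product concentrates around $Q$.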
We develop a simple non-adaptive algorithm for approximate counting using $O(q \log^2 h / \varepsilon^2)$ samples, as well as an algorithm that achieves $O(q \log h / \varepsilon^2)$ samples with just two rounds of adaptivity, matching the best sample complexity of sequential algorithms. These algorithms naturally give rise to work-efficient parallel (RNC) counting algorithms.
We discuss applications to RNC counting algorithms for several classic models, including the anti-ferromagnetic 2-spin, monomer-dimer and ferromagnetic Ising models.
