Allocate Marginal Reviews to Borderline Papers Using LLM Comparative Ranking
arXiv:2602.06078v1 Announce Type: new
Abstract: This paper argues that large ML conferences should allocate marginal review capacity primarily to papers near the acceptance boundary, rather than spreading extra reviews via random or affinity-driven heuristics. We propose using LLM-based comparative ranking (via pairwise comparisons and a Bradley–Terry model) to identify a borderline band \emph{before} human reviewing and to allocate \emph{marginal} reviewer capacity at assignment time. Concretely, given a venue-specific minimum review target (e.g., 3 or 4), we use this signal to decide which papers receive one additional review (e.g., a 4th or 5th), without conditioning on any human reviews and without using LLM outputs for accept/reject. We provide a simple expected-impact calculation in terms of (i) the overlap between the predicted and true borderline sets ($\rho$) and (ii) the incremental value of an extra review near the boundary ($\Delta$), and we provide retrospective proxies to estimate these quantities.
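The pipeline the abstract describes, fitting a Bradley–Terry model to LLM pairwise comparisons and then flagging a borderline band at assignment time, can be sketched as follows. This is an illustrative implementation, not the authors' code: the MM fitting loop is a standard Bradley–Terry estimator, and the function names (`bradley_terry`, `borderline_band`), the band half-width, and the accept fraction are assumptions chosen for the example.

```python
import numpy as np

def bradley_terry(n_papers, comparisons, iters=200):
    """Fit Bradley-Terry strengths from pairwise outcomes via the
    standard MM (minorization-maximization) update.

    comparisons: list of (winner, loser) paper-index pairs, e.g. as
    judged by an LLM asked to compare two submissions.
    Returns a strength vector p (higher = stronger), normalized to sum 1.
    """
    wins = np.zeros((n_papers, n_papers))
    for w, l in comparisons:
        wins[w, l] += 1.0
    p = np.ones(n_papers)
    for _ in range(iters):
        total_wins = wins.sum(axis=1)
        denom = np.zeros(n_papers)
        for i in range(n_papers):
            for j in range(n_papers):
                n_ij = wins[i, j] + wins[j, i]  # times i and j were compared
                if i != j and n_ij > 0:
                    denom[i] += n_ij / (p[i] + p[j])
        # MM update; papers never compared keep their current strength
        p = np.where(denom > 0, total_wins / np.maximum(denom, 1e-12), p)
        p = p / p.sum()
    return p

def borderline_band(scores, accept_frac, half_width=2):
    """Papers ranked within half_width of the projected accept cutoff.

    These are the papers that would receive the one extra review;
    accept_frac and half_width are venue-specific knobs (assumed here).
    """
    order = np.argsort(-scores)               # best paper first
    k = int(round(accept_frac * len(scores))) # projected cutoff rank
    lo, hi = max(0, k - half_width), min(len(scores), k + half_width)
    return set(order[lo:hi].tolist())
```

As a usage sketch: with comparisons in which paper 0 mostly beats 1 and 1 mostly beats 2, `bradley_terry(3, ...)` recovers the ordering 0 > 1 > 2, and `borderline_band` then selects the papers straddling the cutoff rank. Under the abstract's accounting, the expected gain of this policy scales with $\rho$ (how much of the true borderline set the band captures) times $\Delta$ (the value of one extra review there).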