Online Budget Allocation with Censored Semi-Bandit Feedback

arXiv:2508.05844v2 Announce Type: replace-cross
Abstract: We study a stochastic budget-allocation problem over $K$ tasks. At each round $t$, the learner chooses an allocation $X_t \in \Delta_K$. Task $k$ succeeds with probability $F_k(X_{t,k})$, where $F_1,\dots,F_K$ are nondecreasing budget-to-success curves, and upon success yields a random reward with unknown mean $\mu_k$. The learner observes which tasks succeed, and observes a task's reward only upon success (censored semi-bandit feedback). This model captures, for instance, splitting payments across crowdsourcing workers or distributing bids across simultaneous auctions, and subsumes stochastic multi-armed bandits and semi-bandits.
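For concreteness, here is a minimal Python sketch of the feedback model. The exponential success curves $F_k(x) = 1 - e^{-c_k x}$ and the Gaussian rewards are illustrative assumptions only; the paper requires just that each $F_k$ be nondecreasing.

```python
# Minimal sketch of the censored semi-bandit feedback model.
# The curves F_k and the reward noise below are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

K = 4                                  # number of tasks
c = rng.uniform(1.0, 5.0, size=K)      # hypothetical curve parameters
mu = rng.uniform(0.0, 1.0, size=K)     # unknown mean rewards

def F(x):
    """Nondecreasing budget-to-success curves (illustrative choice)."""
    return 1.0 - np.exp(-c * x)

def step(x):
    """One round: allocate x in the simplex, observe censored feedback.

    Returns (success, reward), where reward[k] is observed only if
    task k succeeded; failed tasks reveal nothing about mu_k.
    """
    assert np.all(x >= 0) and abs(x.sum() - 1.0) < 1e-9
    success = rng.random(K) < F(x)                           # which tasks succeed
    reward = np.where(success, rng.normal(mu, 0.1), np.nan)  # censored rewards
    return success, reward

# Example round: uniform allocation across the K tasks.
x = np.full(K, 1.0 / K)
success, reward = step(x)
print("successes:", success)
print("observed rewards:", reward)   # NaN where the reward is censored
```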
We design an optimism-based algorithm that operates under censored semi-bandit feedback. Our main result shows that in diminishing-returns regimes, the regret of this algorithm scales polylogarithmically with the horizon $T$ without any ad hoc tuning. For general nondecreasing curves, we prove that the same algorithm (with the same tuning) achieves a worst-case regret upper bound of $\tilde{O}(K\sqrt{T})$. Finally, we establish a matching worst-case regret lower bound of $\Omega(K\sqrt{T})$ that holds even for full-feedback algorithms, highlighting the intrinsic hardness of our problem outside diminishing returns.
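The abstract does not spell out the algorithm, so the following is only a generic UCB-style sketch, not the paper's construction: budgets are discretized into $B$ increments, the confidence radii are standard Hoeffding-type bonuses, and the greedy pass over optimistic marginal gains is justified under the diminishing-returns (concave-curve) regime. The function name `ucb_allocate`, the grid size `B`, and all counter conventions are assumptions.

```python
# Hedged sketch of a generic optimism-based allocation rule for this model.
# Everything here (grid, bonuses, greedy rule) is an illustrative stand-in
# for the paper's unspecified algorithm.
import numpy as np

def ucb_allocate(n_succ, n_try, r_sum, r_cnt, t, B=20):
    """Pick an allocation on the grid {0, 1/B, ..., 1} summing to 1.

    n_succ[k, b], n_try[k, b]: successes/attempts of task k at budget level b
    r_sum[k], r_cnt[k]: sum and count of observed (uncensored) rewards
    t: current round index (used in the exploration bonuses)
    """
    K, L = n_try.shape                       # L = B + 1 budget levels
    log_t = np.log(max(t, 2))
    # Optimistic estimates of the success curves at each grid level.
    bonus = np.sqrt(2 * log_t / np.maximum(n_try, 1))
    F_ucb = np.clip(n_succ / np.maximum(n_try, 1) + bonus, 0.0, 1.0)
    # Optimistic estimates of the mean rewards from uncensored observations.
    mu_ucb = np.clip(r_sum / np.maximum(r_cnt, 1)
                     + np.sqrt(2 * log_t / np.maximum(r_cnt, 1)), 0.0, 1.0)

    # Greedy: repeatedly give one budget increment to the task whose
    # optimistic expected reward mu_ucb[k] * F_ucb[k, level] rises the most;
    # greedy on marginals is sound when the curves have diminishing returns.
    levels = np.zeros(K, dtype=int)
    for _ in range(B):
        gain = mu_ucb * (F_ucb[np.arange(K), levels + 1]
                         - F_ucb[np.arange(K), levels])
        levels[int(np.argmax(gain))] += 1
    return levels / B
```

In a full loop, one would play the allocation returned by `ucb_allocate` through an environment like the `step` function sketched earlier, then update `n_succ`, `n_try`, `r_sum`, and `r_cnt` using only the uncensored observations.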
