Adaptive Privacy Budgeting

arXiv:2601.10866v1 Announce Type: new
Abstract: We study the problem of adaptive privacy budgeting under generalized differential privacy. Consider the setting where each user $i \in [n]$ holds a tuple $x_i \in U := U_1 \times \dotsb \times U_T$, where $x_i(l) \in U_l$ denotes the $l$-th component of their data. For every $l \in [T]$ (or a subset), an untrusted analyst wishes to compute some $f_l(x_1(l), \dots, x_n(l))$ while respecting the privacy of each user. For many functions $f_l$, the users' data are not all equally important, so the users' privacy budgets can be spent strategically, yielding privacy savings that can be used to improve the utility of later queries. In particular, the budgeting should adapt to the outputs of previous queries, so that greater savings are achieved on more typical instances. In this paper, we provide such an adaptive budgeting framework and illustrate it with several applications.
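The abstract does not describe the mechanism itself; as a minimal sketch of the setting only, the snippet below answers one mean query per component with the Laplace mechanism and spends less of the remaining budget on a query when the previous noisy answer looks typical (close to a prior guess). The adaptive rule, the choice of mechanism, and all names (`adaptive_budget_run`, `epsilon_total`, `prior_guess`) are illustrative assumptions, not the paper's framework.

```python
# Sketch (assumptions, not the paper's method): each user holds a T-component
# tuple with entries in [0, 1]; the analyst releases a noisy mean per component
# via the Laplace mechanism, adapting the budget to previous noisy outputs.
import numpy as np

def adaptive_budget_run(data, epsilon_total, prior_guess, rng=None):
    """data: (n, T) array with entries in [0, 1]; returns noisy means and budget spent per query."""
    rng = rng or np.random.default_rng()
    n, T = data.shape
    remaining = epsilon_total
    answers, spent = [], []
    for l in range(T):
        base = remaining / (T - l)          # even split of the remaining budget
        # Illustrative adaptive rule: if the previous noisy answer was close to
        # its prior guess (a "typical" instance), spend less and bank the savings.
        if answers and abs(answers[-1] - prior_guess[l - 1]) < 0.1:
            eps_l = 0.5 * base
        else:
            eps_l = base
        sensitivity = 1.0 / n               # sensitivity of a mean of [0, 1] values
        noise = rng.laplace(scale=sensitivity / eps_l)
        answers.append(data[:, l].mean() + noise)
        spent.append(eps_l)
        remaining -= eps_l
    return answers, spent

# Example usage on synthetic data: n = 1000 users, T = 5 components.
rng = np.random.default_rng(0)
x = rng.uniform(size=(1000, 5))
answers, spent = adaptive_budget_run(x, epsilon_total=1.0,
                                     prior_guess=[0.5] * 5, rng=rng)
print(answers, spent)
```

Because each query's budget is at most the evenly split share of what remains, basic composition keeps the total spend at or below `epsilon_total`, and conditioning the rule on previous noisy outputs (rather than the raw data) keeps the adaptivity privacy-safe.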
