Unifying Privacy Measures via Maximal (α, β)-Leakage (MαbeL)
Atefeh Gilani, Gowtham Raghunath Kurri, Oliver Kosut, Lalitha Sankar
IEEE Transactions on Information Theory, T-IT, 2024
@article{bib_Unif_2024, AUTHOR = {Atefeh Gilani and Gowtham Raghunath Kurri and Oliver Kosut and Lalitha Sankar}, TITLE = {Unifying Privacy Measures via Maximal (α, β)-Leakage (MαbeL)}, JOURNAL = {IEEE Transactions on Information Theory}, YEAR = {2024}}
We introduce a family of information leakage measures called maximal (α,β)-leakage (MαbeL), parameterized by real numbers α and β greater than or equal to 1. The measure is formalized via an operational definition involving an adversary guessing an unknown (randomized) function of the data given the released data. We obtain a simplified computable expression for the measure and show that it satisfies several basic properties such as monotonicity in β for a fixed α, non-negativity, data processing inequalities, and additivity over independent releases. We highlight the relevance of this family by showing that it bridges several known leakage measures, including maximal α-leakage (β=1), maximal leakage (α=∞, β=1), local differential privacy (LDP) (α=∞, β=∞), and local Rényi differential privacy (LRDP) (α=β), thereby giving an operational interpretation to local Rényi differential privacy. We also study a conditional version of MαbeL, leveraging which we recover differential privacy and Rényi differential privacy. A new variant of LRDP, which we call maximal Rényi leakage, appears as a special case of MαbeL for α=∞ that smoothly tunes between maximal leakage (β=1) and LDP (β=∞). Finally, we show that a vector form of the maximal Rényi leakage relaxes differential privacy under Gaussian and Laplacian mechanisms.
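A concrete anchor for the special case α=∞, β=1: maximal leakage of a finite channel P(Y|X) admits the closed form log Σ_y max_x P(y|x). A minimal NumPy sketch of this special case (the function name is ours, not from the paper):

```python
import numpy as np

def maximal_leakage(channel: np.ndarray) -> float:
    """Maximal leakage log(sum_y max_x P(y|x)), in nats.

    `channel[x, y]` holds P(Y = y | X = x); each row must sum to 1.
    This is the alpha = infinity, beta = 1 corner of maximal
    (alpha, beta)-leakage.
    """
    assert np.allclose(channel.sum(axis=1), 1.0), "rows must be pmfs"
    return float(np.log(channel.max(axis=0).sum()))

# Binary symmetric channel with crossover probability 0.1:
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print(maximal_leakage(bsc))  # log(0.9 + 0.9), about 0.588 nats
```

A noiseless channel (identity matrix) leaks log of the alphabet size, while a channel with identical rows leaks nothing.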
Addressing GAN Training Instabilities via Tunable Classification Losses
Monica Welfert, Gowtham Raghunath Kurri, Kyle Otstot, Lalitha Sankar
IEEE Journal on Selected Areas in Information Theory, JSAIT, 2024
@article{bib_Addr_2024, AUTHOR = {Monica Welfert and Gowtham Raghunath Kurri and Kyle Otstot and Lalitha Sankar}, TITLE = {Addressing GAN Training Instabilities via Tunable Classification Losses}, JOURNAL = {IEEE Journal on Selected Areas in Information Theory}, YEAR = {2024}}
Generative adversarial networks (GANs), modeled as a zero-sum game between a generator (G) and a discriminator (D), allow generating synthetic data with formal guarantees. Noting that D is a classifier, we begin by reformulating the GAN value function using class probability estimation (CPE) losses. We prove a two-way correspondence between CPE loss GANs and f-GANs which minimize f-divergences. We also show that all symmetric f-divergences are equivalent in convergence. In the finite sample and model capacity setting, we define and obtain bounds on estimation and generalization errors. We specialize these results to α-GANs, defined using α-loss, a tunable CPE loss family parameterized by α∈(0,∞]. We next introduce a class of dual-objective GANs to address training instabilities of GANs by modeling each player's objective using α-loss to obtain (αD,αG)-GANs. We show that the resulting non-zero-sum game simplifies to minimizing an f-divergence under appropriate conditions on (αD,αG). Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error. Finally, we highlight the value of tuning (αD,αG) in alleviating training instabilities for the synthetic 2D Gaussian mixture ring as well as the large publicly available CelebA and LSUN Classroom image datasets.
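For reference, the α-loss family referred to above is commonly written as ℓ_α(y, P) = (α/(α−1))(1 − P(y)^{1−1/α}), recovering log-loss as α→1 and the soft 0-1 loss 1 − P(y) at α=∞. A small sketch under that assumed form (the function name is ours):

```python
import numpy as np

def alpha_loss(p_correct: float, alpha: float) -> float:
    """alpha-loss of the probability assigned to the correct class.

    alpha = 1 recovers log-loss; alpha = inf the soft 0-1 loss 1 - p.
    """
    if np.isinf(alpha):
        return 1.0 - p_correct
    if alpha == 1.0:
        return float(-np.log(p_correct))
    return (alpha / (alpha - 1.0)) * (1.0 - p_correct ** (1.0 - 1.0 / alpha))

for a in (0.5, 1.0, 2.0, np.inf):
    print(a, alpha_loss(0.8, a))
```

Larger α flattens the loss toward 0-1 loss (more tolerant of confident mistakes); smaller α sharpens it beyond log-loss, which is the tunability the abstract exploits.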
Maximal Guesswork Leakage
Gowtham Raghunath Kurri, Malhar A. Managoli, Vinod M. Prabhakaran
International Symposium on Information Theory, ISIT, 2024
@inproceedings{bib_Maxi_2024, AUTHOR = {Gowtham Raghunath Kurri and Malhar A. Managoli and Vinod M. Prabhakaran}, TITLE = {Maximal Guesswork Leakage}, BOOKTITLE = {International Symposium on Information Theory}, YEAR = {2024}}
We study information leakage through guesswork, the minimum expected number of guesses required to guess a random variable. In particular, we define maximal guesswork leakage as the multiplicative decrease, upon observing Y, of the guesswork of a randomized function of X, maximized over all such randomized functions. We also study a pointwise form of the leakage, which captures the leakage due to the release of a single realization of Y, as well as both notions under oblivious (or memoryless) guessing. We obtain closed-form expressions for all of these leakage measures except one: for maximal guesswork leakage itself, we obtain a closed-form expression only for the binary erasure source, and deriving expressions for arbitrary sources appears challenging. Among the consequences of our results are a connection between guesswork and differential privacy and a new operational interpretation of maximal α-leakage in terms of guesswork.
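The guesswork quantity underlying this abstract has a simple closed form: the optimal guesser queries outcomes in decreasing order of probability, so G(X) = Σ_i i · p_(i) with the pmf sorted in decreasing order. A minimal sketch (the function name is ours):

```python
import numpy as np

def guesswork(pmf) -> float:
    """Minimum expected number of guesses to identify X ~ pmf.

    Optimal strategy: guess outcomes in decreasing order of
    probability, giving G(X) = sum_i i * p_(i), p sorted descending.
    """
    p = np.sort(np.asarray(pmf, dtype=float))[::-1]
    assert abs(p.sum() - 1.0) < 1e-9, "pmf must sum to 1"
    return float(np.sum(np.arange(1, len(p) + 1) * p))

print(guesswork([0.5, 0.25, 0.25]))  # 1*0.5 + 2*0.25 + 3*0.25 = 1.75
```

Maximal guesswork leakage then compares such guesswork values before and after observing Y, maximized over randomized functions of X.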
An operational approach to information leakage via generalized gain functions
Gowtham Raghunath Kurri, Lalitha Sankar, Oliver Kosut
IEEE Transactions on Information Theory, T-IT, 2023
@article{bib_An_o_2023, AUTHOR = {Gowtham Raghunath Kurri and Lalitha Sankar and Oliver Kosut}, TITLE = {An operational approach to information leakage via generalized gain functions}, JOURNAL = {IEEE Transactions on Information Theory}, YEAR = {2023}}
(αD,αG)-GANs: Addressing GAN Training Instabilities via Dual Objectives
Monica Welfert, Kyle Otstot, Gowtham Raghunath Kurri, Lalitha Sankar
International Symposium on Information Theory, ISIT, 2023
@inproceedings{bib_(alp_2023, AUTHOR = {Monica Welfert and Kyle Otstot and Gowtham Raghunath Kurri and Lalitha Sankar}, TITLE = {(αD,αG)-GANs: Addressing GAN Training Instabilities via Dual Objectives}, BOOKTITLE = {International Symposium on Information Theory}, YEAR = {2023}}
In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D). In particular, we model each objective using α-loss, a tunable classification loss, to obtain (αD,αG)-GANs, parameterized by (αD,αG)∈(0,∞]². For a sufficiently large number of samples and sufficient capacities for G and D, we show that the resulting non-zero-sum game simplifies to minimizing an f-divergence under appropriate conditions on (αD,αG). In the finite sample and capacity setting, we define estimation error to quantify the gap in the generator's performance relative to the optimal setting with infinite samples, and obtain upper bounds on this error, showing it to be order optimal under certain conditions. Finally, we highlight the value of tuning (αD,αG) in alleviating training instabilities for the synthetic 2D Gaussian mixture ring and the Stacked MNIST datasets.
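As a rough illustration of what a dual-objective formulation looks like, the sketch below computes per-batch losses where D is trained as a binary classifier under αD-loss and G is scored on the fake batch under αG-loss. This is a schematic under the standard α-loss form and a non-saturating generator objective, both assumptions on our part rather than the paper's exact value functions; all function names are ours:

```python
import numpy as np

def alpha_loss(p_correct, alpha):
    """alpha-loss of the probability assigned to the correct class
    (log-loss at alpha = 1, soft 0-1 loss at alpha = inf)."""
    p = np.asarray(p_correct, dtype=float)
    if np.isinf(alpha):
        return 1.0 - p
    if alpha == 1.0:
        return -np.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** (1.0 - 1.0 / alpha))

def discriminator_loss(d_real, d_fake, alpha_d):
    """D as a classifier: it assigns probability d_real to label
    'real' on real samples and 1 - d_fake to 'fake' on generated ones."""
    return float(np.mean(alpha_loss(d_real, alpha_d))
                 + np.mean(alpha_loss(1.0 - np.asarray(d_fake), alpha_d)))

def generator_loss(d_fake, alpha_g):
    """Non-saturating form: G tries to push D's output on fakes to 1."""
    return float(np.mean(alpha_loss(d_fake, alpha_g)))
```

Tuning αD and αG independently changes how steeply each player is penalized near confident mistakes, which is the kind of knob the abstract describes for alleviating instabilities.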