On the Optimal Message Size in PIR Under Arbitrary Collusion Patterns
Dornadula Guru Shwadeep, Manikya Pant, Gowtham Raghunath Kurri, Prasad Krishnan
International Symposium on Information Theory, ISIT, 2026
@inproceedings{bib_On_t_2026, AUTHOR = {Shwadeep, Dornadula Guru and Pant, Manikya and Kurri, Gowtham Raghunath and Krishnan, Prasad}, TITLE = {On the Optimal Message Size in PIR Under Arbitrary Collusion Patterns}, BOOKTITLE = {International Symposium on Information Theory}, YEAR = {2026}}
A private information retrieval (PIR) scheme under an arbitrary collusion pattern $\mathcal{P}$ enables a client to retrieve one message from a library of $K$ equal-sized messages replicated across $N$ servers, while keeping the index of the desired message private from any colluding set in $\mathcal{P}$. The efficiency of a PIR protocol is measured by its rate, defined as the ratio of the message size to the total download cost, and the supremum of all achievable rates is the capacity of PIR. Although achieving high rates typically requires sufficiently large message sizes, smaller message sizes are also desirable due to reduced implementation complexity and fewer constraints. By characterizing the capacity-achieving schemes, Tian, Sun, and Chen (2019) showed that the optimal message size for uniformly decomposable PIR schemes under the no-collusion setting is $N-1$. However, comparable results are not yet available for more general collusion settings.
In this work, we present a complete characterization of the properties of capacity-achieving decomposable PIR schemes under arbitrary collusion patterns. Building on this characterization, we derive a general lower bound on the optimal message size for capacity-achieving uniformly decomposable PIR schemes under an arbitrary collusion pattern $\mathcal{P}$, expressed in terms of the hitting number of a newly defined family of subsets of servers determined by the collusion pattern $\mathcal{P}$. Finally, we specialize the lower bound to several important classes of collusion patterns, including $T$-collusion, disjoint collections of colluding sets, cyclically $T$-contiguous collusion, and disjoint collections of cyclically contiguous colluding sets. For the last two collusion patterns, we present matching achievable schemes that attain the corresponding bounds, thereby providing a complete characterization of the optimal message size under these collusion patterns.
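As a toy illustration of the PIR idea only (the classical two-server XOR scheme, not one of the schemes studied in this paper), the sketch below shows how a client can download one of $K$ equal-sized messages without revealing its index to either non-colluding server: each server sees a uniformly random index mask, yet the XOR of the two answers recovers the desired message.

```python
import secrets

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def client_queries(k, num_messages):
    # Server 1 receives a uniformly random 0/1 mask over message indices;
    # server 2 receives the same mask with the desired index k flipped.
    # Each mask alone is uniformly random, so neither server learns k.
    s1 = [secrets.randbits(1) for _ in range(num_messages)]
    s2 = list(s1)
    s2[k] ^= 1
    return s1, s2

def server_answer(db, mask):
    # Each server returns the XOR of the messages its mask selects.
    ans = bytes(len(db[0]))
    for msg, bit in zip(db, mask):
        if bit:
            ans = xor_bytes(ans, msg)
    return ans

def reconstruct(a1, a2):
    # The two masks differ only at index k, so the answers' XOR is message k.
    return xor_bytes(a1, a2)
```

The download here is two message-lengths per retrieval (rate 1/2), which is why capacity-achieving schemes such as those discussed above require more careful query structures and larger message sizes.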
Converse Bounds for Sun-Jafar-type Weak PIR under Mutual Information Leakage
Chandan Anand, Jayesh Seshadri, Prasad Krishnan, Gowtham Raghunath Kurri
International Symposium on Information Theory, ISIT, 2026
@inproceedings{bib_Conv_2026, AUTHOR = {Anand, Chandan and Seshadri, Jayesh and Krishnan, Prasad and Kurri, Gowtham Raghunath}, TITLE = {Converse Bounds for Sun-Jafar-type Weak PIR under Mutual Information Leakage}, BOOKTITLE = {International Symposium on Information Theory}, YEAR = {2026}}
Building on the well-established capacity-achieving schemes of Sun-Jafar (for replicated storage) and the closely related scheme of Banawan-Ulukus (for the MDS-coded setting), a recent work by Anand et al. proposed new classes of weak private information retrieval (WPIR) schemes for the collusion-free (replicated and MDS-coded) settings, as well as for the $T$-colluding scenario. In their work, Anand et al. characterized the expressions for the rate-privacy trade-offs for these classes of WPIR schemes under the mutual information leakage and maximal leakage metrics. Explicit achievable trade-offs were also presented, which were shown to be competitive with or better than prior WPIR schemes. However, the class-wise optimality of the reported trade-offs was unknown. In this work, we show that the explicit rate-privacy trade-off under the mutual information leakage metric, reported for the Sun-Jafar-type scheme by Anand et al., is class-wise optimal for the non-colluding and replicated setting. Furthermore, we prove the class-wise optimality for Banawan-Ulukus-type MDS-WPIR and Sun-Jafar-type $T$-colluding WPIR schemes under threshold constraints on the system parameters. When these threshold constraints do not hold, we present counter-examples showing that even higher rates than those reported before can be achieved.
Generalized Dual Discriminator GANs
Penukonda Naga Chandana, Tejas Srivastava, Gowtham Raghunath Kurri, Lalitha Vadlamani
Information Theory Workshop, ITW, 2025
@inproceedings{bib_Gene_2025, AUTHOR = {Chandana, Penukonda Naga and Srivastava, Tejas and Kurri, Gowtham Raghunath and Vadlamani, Lalitha}, TITLE = {Generalized Dual Discriminator GANs}, BOOKTITLE = {Information Theory Workshop}, YEAR = {2025}}
Dual discriminator generative adversarial networks (D2 GANs) were introduced to mitigate the problem of mode collapse in generative adversarial networks. In D2 GANs, two discriminators are employed alongside a generator: one discriminator rewards high scores for samples from the true data distribution, while the other favors samples from the generator. In this work, we first introduce dual discriminator α-GANs (D2 α-GANs), which combine the strengths of dual discriminators with the flexibility of a tunable loss function, α-loss. We further generalize this approach to arbitrary functions defined on the positive reals, leading to a broader class of models we refer to as generalized dual discriminator generative adversarial networks. For each of these proposed models, we provide theoretical analysis and show that the associated min-max optimization reduces to the minimization of a linear combination of an f-divergence and a reverse f-divergence. This generalizes the known simplification for D2-GANs, where the objective reduces to a linear combination of the KL-divergence and the reverse KL-divergence. Finally, we perform experiments on 2D synthetic data and use multiple performance metrics to capture various advantages of our GANs.
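The KL plus reverse-KL combination that the D2-GAN objective reduces to can be checked numerically on discrete distributions. The sketch below (illustrative only, not the paper's training objective) computes the symmetric combination and relies on the fact that it vanishes exactly when the two distributions coincide:

```python
import numpy as np

def kl(p, q):
    """KL(p || q) in nats; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def d2_objective(p, q):
    # The KL + reverse-KL pair that the D2-GAN value function reduces to
    # (up to constants/weights): zero iff p == q, positive otherwise.
    return kl(p, q) + kl(q, p)
```

Minimizing such a symmetric combination over the generator distribution q pushes q toward covering every mode of p, which is the intuition behind using two discriminators against mode collapse.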
Fractional Subadditivity of Submodular Functions: Equality Conditions and Their Applications
Gunank Singh Jakhar, Gowtham Raghunath Kurri, Suryajith Chillara, Vinod M. Prabhakaran
International Symposium on Information Theory, ISIT, 2025
@inproceedings{bib_Frac_2025, AUTHOR = {Jakhar, Gunank Singh and Kurri, Gowtham Raghunath and Chillara, Suryajith and Prabhakaran, Vinod M.}, TITLE = {Fractional Subadditivity of Submodular Functions: Equality Conditions and Their Applications}, BOOKTITLE = {International Symposium on Information Theory}, YEAR = {2025}}
Submodular functions are known to satisfy various forms of fractional subadditivity. This work investigates the conditions for equality to hold exactly or approximately in the fractional subadditivity of submodular functions. We establish that a small gap in the inequality implies that the function is close to being modular, and that the gap is zero if and only if the function is modular. We then present natural implications of these results for special cases of submodular functions, such as entropy, relative entropy, and matroid rank. As a consequence, we characterize the necessary and sufficient conditions for equality to hold in Shearer's lemma, recovering a result of Ellis et al. (2016) as a special case. We leverage our results to propose a new multivariate mutual information, which generalizes Watanabe's total correlation (1960), Han's dual total correlation (1975), and Csiszar and Narayan's shared information (2004), and analyze its properties. Among these properties, we extend Watanabe's characterization of total correlation as the maximum correlation over partitions to fractional partitions. When applied to matrix determinantal inequalities for positive definite matrices, our results recover the equality conditions of the classical determinantal inequalities of Hadamard, Szasz, and Fischer as special cases.
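For entropy, the simplest instance of fractional subadditivity is Shearer's lemma. A quick numerical sanity check (a sketch, not the paper's proof machinery) for the standard 2-cover of three variables, where each coordinate appears in exactly two of the three pairs, so H(X1,X2,X3) ≤ ½[H(X1,X2) + H(X1,X3) + H(X2,X3)]:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) pmf."""
    p = np.asarray(p, float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def shearer_gap(joint):
    # Gap in Shearer's lemma for the 2-cover {12, 13, 23} of a joint pmf
    # over (X1, X2, X3); nonnegative, and zero when entropy is modular
    # (e.g. independent coordinates), matching the equality condition.
    h123 = entropy(joint)
    h12 = entropy(joint.sum(axis=2))  # marginal of (X1, X2)
    h13 = entropy(joint.sum(axis=1))  # marginal of (X1, X3)
    h23 = entropy(joint.sum(axis=0))  # marginal of (X2, X3)
    return 0.5 * (h12 + h13 + h23) - h123
```

For three independent uniform bits the gap is exactly zero, consistent with the modularity characterization of equality established above.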
Sun-Jafar-Type Schemes for Weak Private Information Retrieval
Chandan Anand, Jayesh Seshadri, Prasad Krishnan, Gowtham Raghunath Kurri
International Symposium on Information Theory, ISIT, 2025
@inproceedings{bib_Sun-_2025, AUTHOR = {Anand, Chandan and Seshadri, Jayesh and Krishnan, Prasad and Kurri, Gowtham Raghunath}, TITLE = {Sun-Jafar-Type Schemes for Weak Private Information Retrieval}, BOOKTITLE = {International Symposium on Information Theory}, YEAR = {2025}}
In information-theoretic private information retrieval (PIR), a client wants to retrieve one desired file out of $M$ files, stored across $N$ servers, while keeping the index of the desired file private from each $T$-sized subset of servers. A PIR protocol must ideally maximize the rate, which is the ratio of the file size to the total quantum of the download from the servers, while ensuring such privacy. In Weak-PIR (WPIR), the criterion of perfect information-theoretic privacy is relaxed. This enables higher rates to be achieved, while some information about the desired file index leaks to the servers. This leakage is captured by various known privacy metrics. By leveraging the well-established capacity-achieving schemes of Sun and Jafar under non-colluding ($T=1$) and colluding ($1
Unifying Privacy Measures via Maximal (α, β)-Leakage (MαbeL)
Atefeh Gilani, Gowtham Raghunath Kurri, Oliver Kosut, Lalitha Sankar
IEEE Transactions on Information Theory, T-IT, 2024
@article{bib_Unif_2024, AUTHOR = {Gilani, Atefeh and Kurri, Gowtham Raghunath and Kosut, Oliver and Sankar, Lalitha}, TITLE = {Unifying Privacy Measures via Maximal (α, β)-Leakage (MαbeL)}, JOURNAL = {IEEE Transactions on Information Theory}, YEAR = {2024}}
We introduce a family of information leakage measures called maximal (α,β)-leakage (MαbeL), parameterized by real numbers α and β greater than or equal to 1. The measure is formalized via an operational definition involving an adversary guessing an unknown (randomized) function of the data given the released data. We obtain a simplified computable expression for the measure and show that it satisfies several basic properties, such as monotonicity in β for a fixed α, non-negativity, data processing inequalities, and additivity over independent releases. We highlight the relevance of this family by showing that it bridges several known leakage measures, including maximal α-leakage (β=1), maximal leakage (α=∞, β=1), local differential privacy (LDP) (α=∞, β=∞), and local Rényi differential privacy (LRDP) (α=β), thereby giving an operational interpretation to local Rényi differential privacy. We also study a conditional version of MαbeL, leveraging which we recover differential privacy and Rényi differential privacy. A new variant of LRDP, which we call maximal Rényi leakage, appears as a special case of MαbeL for α=∞ that smoothly tunes between maximal leakage (β=1) and LDP (β=∞). Finally, we show that a vector form of the maximal Rényi leakage relaxes differential privacy under Gaussian and Laplacian mechanisms.
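At the (α=∞, β=1) corner, the measure reduces to maximal leakage, which has the well-known closed form log Σ_y max_x P(y|x) (Issa, Wagner, and Kosut). A minimal sketch of that special case only (not of the general MαbeL expression):

```python
import numpy as np

def maximal_leakage(channel):
    """Maximal leakage L(X -> Y) in nats.

    channel[x, y] = P(Y = y | X = x); assumes every x has positive
    prior probability, so the max runs over all rows.
    """
    # log of the sum over outputs of the per-column maximum likelihood
    return float(np.log(np.sum(np.max(channel, axis=0))))
```

A noiseless channel on n symbols leaks log n nats (everything), while a channel whose rows are identical leaks 0 (the output is independent of the input).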
Addressing GAN Training Instabilities via Tunable Classification Losses
Monica Welfert, Gowtham Raghunath Kurri, Kyle Otstot, Lalitha Sankar
IEEE Journal on Selected Areas in Information Theory, JSAIT, 2024
@article{bib_Addr_2024, AUTHOR = {Welfert, Monica and Kurri, Gowtham Raghunath and Otstot, Kyle and Sankar, Lalitha}, TITLE = {Addressing GAN Training Instabilities via Tunable Classification Losses}, JOURNAL = {IEEE Journal on Selected Areas in Information Theory}, YEAR = {2024}}
Generative adversarial networks (GANs), modeled as a zero-sum game between a generator (G) and a discriminator (D), allow generating synthetic data with formal guarantees. Noting that D is a classifier, we begin by reformulating the GAN value function using class probability estimation (CPE) losses. We prove a two-way correspondence between CPE loss GANs and f-GANs which minimize f-divergences. We also show that all symmetric f-divergences are equivalent in convergence. In the finite sample and model capacity setting, we define and obtain bounds on estimation and generalization errors. We specialize these results to α-GANs, defined using α-loss, a tunable CPE loss family parametrized by α∈(0,∞]. We next introduce a class of dual-objective GANs to address training instabilities of GANs by modeling each player's objective using α-loss to obtain (αD,αG)-GANs. We show that the resulting non-zero sum game simplifies to minimizing an f-divergence under appropriate conditions on (αD,αG). Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error. Finally, we highlight the value of tuning (αD,αG) in alleviating training instabilities for the synthetic 2D Gaussian mixture ring as well as the large publicly available Celeb-A and LSUN Classroom image datasets.
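The tunable CPE loss at the heart of these results, α-loss, has the closed form ℓ_α(p) = (α/(α−1))(1 − p^((α−1)/α)) on the predicted probability p of the true class, recovering log-loss as α→1 and the soft 0-1 loss 1−p at α=∞. A minimal sketch of the loss itself (the helper name is ours, not from the papers' code):

```python
import math

def alpha_loss(p, alpha):
    """alpha-loss of predicted probability p in (0, 1] for the true class.

    alpha -> 1 recovers log-loss -log(p); alpha = inf gives 1 - p.
    """
    if alpha == 1:
        return -math.log(p)
    if math.isinf(alpha):
        return 1.0 - p
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))
```

Smaller α penalizes confident mistakes more steeply (more exploration for the generator), while larger α flattens the loss, which is the lever the (αD,αG) tuning exploits.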
Maximal Guesswork Leakage
Gowtham Raghunath Kurri, Malhar A. Managoli, Vinod M. Prabhakaran
International Symposium on Information Theory, ISIT, 2024
@inproceedings{bib_Maxi_2024, AUTHOR = {Kurri, Gowtham Raghunath and Managoli, Malhar A. and Prabhakaran, Vinod M.}, TITLE = {Maximal Guesswork Leakage}, BOOKTITLE = {International Symposium on Information Theory}, YEAR = {2024}}
We study information leakage through guesswork, the minimum expected number of guesses required to guess a random variable. In particular, we define maximal guesswork leakage as the multiplicative decrease, upon observing Y, of the guesswork of a randomized function of X, maximized over all such randomized functions. We also study a pointwise form of the leakage, which captures the leakage due to the release of a single realization of Y, and we study both notions of leakage with oblivious (or memoryless) guessing. We obtain closed-form expressions for all these leakage measures, with one exception: for maximal guesswork leakage, we obtain a closed-form expression only for the binary erasure source, and deriving expressions for arbitrary sources appears challenging. Among the consequences of our results are a connection between guesswork and differential privacy, and a new operational interpretation of maximal α-leakage in terms of guesswork.
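Guesswork itself is minimized by guessing values in decreasing order of probability, giving G(X) = Σ_i i·p_(i) where p_(1) ≥ p_(2) ≥ … is the sorted pmf. A minimal sketch of this baseline quantity (not of the leakage measures defined in the paper):

```python
def guesswork(pmf):
    """Minimum expected number of guesses to identify X with pmf `pmf`.

    The optimal strategy queries values in decreasing order of
    probability; the i-th guess succeeds with the i-th largest mass.
    """
    probs = sorted(pmf, reverse=True)
    return sum(i * p for i, p in enumerate(probs, start=1))
```

A uniform binary source needs 1.5 guesses on average; skewing the pmf toward one value drives the guesswork down toward 1, which is the resource whose multiplicative decrease the leakage measures above quantify.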
An operational approach to information leakage via generalized gain functions
Gowtham Raghunath Kurri, Lalitha Sankar, Oliver Kosut
IEEE Transactions on Information Theory, T-IT, 2023
@article{bib_An_o_2023, AUTHOR = {Kurri, Gowtham Raghunath and Sankar, Lalitha and Kosut, Oliver}, TITLE = {An operational approach to information leakage via generalized gain functions}, JOURNAL = {IEEE Transactions on Information Theory}, YEAR = {2023}}
(alpha_D,alpha_G)-GANs: Addressing GAN Training Instabilities via Dual Objectives
Monica Welfert, Kyle Otstot, Gowtham Raghunath Kurri, Lalitha Sankar
International Symposium on Information Theory, ISIT, 2023
@inproceedings{bib_(alp_2023, AUTHOR = {Welfert, Monica and Otstot, Kyle and Kurri, Gowtham Raghunath and Sankar, Lalitha}, TITLE = {(alpha_D,alpha_G)-GANs: Addressing GAN Training Instabilities via Dual Objectives}, BOOKTITLE = {International Symposium on Information Theory}, YEAR = {2023}}
In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D). In particular, we model each objective using α-loss, a tunable classification loss, to obtain (αD,αG)-GANs, parameterized by (αD,αG)∈(0,∞]². For a sufficiently large number of samples and capacities for G and D, we show that the resulting non-zero sum game simplifies to minimizing an f-divergence under appropriate conditions on (αD,αG). In the finite sample and capacity setting, we define estimation error to quantify the gap in the generator's performance relative to the optimal setting with infinite samples and obtain upper bounds on this error, showing it to be order optimal under certain conditions. Finally, we highlight the value of tuning (αD,αG) in alleviating training instabilities for the synthetic 2D Gaussian mixture ring and the Stacked MNIST datasets.