{"title":"Throughput and Latency Analysis for Line Networks With Outage Links","authors":"Yanyan Dong;Shenghao Yang;Jie Wang;Fan Cheng","doi":"10.1109/JSAIT.2024.3419054","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3419054","url":null,"abstract":"Wireless communication links suffer from outage events caused by fading and interference. To facilitate a tractable analysis of network communication throughput and latency, we propose an outage link model to represent a communication link in the slow fading phenomenon. For a line-topology network with outage links, we study three types of intermediate network node schemes: random linear network coding, store-and-forward, and hop-by-hop retransmission. We provide the analytical formulas for the maximum throughputs and the end-to-end latency for each scheme. To gain a more explicit understanding, we perform a scalability analysis of the throughput and latency as the network length increases. We observe that the same order of throughput/latency holds across a wide range of outage functions for each scheme. We illustrate how our exact formulae and scalability results can be applied to compare different schemes.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"464-477"},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10571545","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141966000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing GAN Training Instabilities via Tunable Classification Losses","authors":"Monica Welfert;Gowtham R. Kurri;Kyle Otstot;Lalitha Sankar","doi":"10.1109/JSAIT.2024.3415670","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3415670","url":null,"abstract":"Generative adversarial networks (GANs), modeled as a zero-sum game between a generator (G) and a discriminator (D), allow generating synthetic data with formal guarantees. Noting that D is a classifier, we begin by reformulating the GAN value function using class probability estimation (CPE) losses. We prove a two-way correspondence between CPE loss GANs and f-GANs which minimize f-divergences. We also show that all symmetric f-divergences are equivalent in convergence. In the finite sample and model capacity setting, we define and obtain bounds on estimation and generalization errors. We specialize these results to \u0000<inline-formula> <tex-math>$alpha $ </tex-math></inline-formula>\u0000-GANs, defined using \u0000<inline-formula> <tex-math>$alpha $ </tex-math></inline-formula>\u0000-loss, a tunable CPE loss family parametrized by \u0000<inline-formula> <tex-math>$alpha in (0,infty $ </tex-math></inline-formula>\u0000]. We next introduce a class of dual-objective GANs to address training instabilities of GANs by modeling each player’s objective using \u0000<inline-formula> <tex-math>$alpha $ </tex-math></inline-formula>\u0000-loss to obtain \u0000<inline-formula> <tex-math>$(alpha _{D},alpha _{G})$ </tex-math></inline-formula>\u0000-GANs. We show that the resulting non-zero sum game simplifies to minimizing an f-divergence under appropriate conditions on \u0000<inline-formula> <tex-math>$(alpha _{D},alpha _{G})$ </tex-math></inline-formula>\u0000. Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error. Finally, we highlight the value of tuning \u0000<inline-formula> <tex-math>$(alpha _{D},alpha _{G})$ </tex-math></inline-formula>\u0000 in alleviating training instabilities for the synthetic 2D Gaussian mixture ring as well as the large publicly available Celeb-A and LSUN Classroom image datasets.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"534-553"},"PeriodicalIF":0.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Long-Term Fairness in Sequential Multi-Agent Selection With Positive Reinforcement","authors":"Bhagyashree Puranik;Ozgur Guldogan;Upamanyu Madhow;Ramtin Pedarsani","doi":"10.1109/JSAIT.2024.3416078","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3416078","url":null,"abstract":"While much of the rapidly growing literature on fair decision-making focuses on metrics for one-shot decisions, recent work has raised the intriguing possibility of designing sequential decision-making to positively impact long-term social fairness. In selection processes such as college admissions or hiring, biasing slightly towards applicants from under-represented groups is hypothesized to provide positive feedback that increases the pool of under-represented applicants in future selection rounds, thus enhancing fairness in the long term. In this paper, we examine this hypothesis and its consequences in a setting in which multiple agents are selecting from a common pool of applicants. We propose the Multi-agent Fair-Greedy policy, that balances greedy score maximization and fairness. Under this policy, we prove that the resource pool and the admissions converge to a long-term fairness target set by the agents when the score distributions across the groups in the population are identical. We provide empirical evidence of existence of equilibria under non-identical score distributions through synthetic and adapted real-world datasets. We then sound a cautionary note for more complex applicant pool evolution models, under which uncoordinated behavior by the agents can cause negative reinforcement, leading to a reduction in the fraction of under-represented applicants. Our results indicate that, while positive reinforcement is a promising mechanism for long-term fairness, policies must be designed carefully to be robust to variations in the evolution model, with a number of open issues that remain to be explored by algorithm designers, social scientists, and policymakers.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"424-441"},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141624148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Controlled Privacy Leakage Propagation Throughout Overlapping Grouped Learning","authors":"Shahrzad Kiani;Franziska Boenisch;Stark C. Draper","doi":"10.1109/JSAIT.2024.3416089","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3416089","url":null,"abstract":"Federated Learning (FL) is the standard protocol for collaborative learning. In FL, multiple workers jointly train a shared model. They exchange model updates calculated on their data, while keeping the raw data itself local. Since workers naturally form groups based on common interests and privacy policies, we are motivated to extend standard FL to reflect a setting with multiple, potentially overlapping groups. In this setup where workers can belong and contribute to more than one group at a time, complexities arise in understanding privacy leakage and in adhering to privacy policies. To address the challenges, we propose differential private overlapping grouped learning (DP-OGL), a novel method to implement privacy guarantees within overlapping groups. Under the honest-but-curious threat model, we derive novel privacy guarantees between arbitrary pairs of workers. These privacy guarantees describe and quantify two key effects of privacy leakage in DP-OGL: propagation delay, i.e., the fact that information from one group will leak to other groups only with temporal offset through the common workers and information degradation, i.e., the fact that noise addition over model updates limits information leakage between workers. Our experiments show that applying DP-OGL enhances utility while maintaining strong privacy compared to standard FL setups.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"442-463"},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141624126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information Velocity of Cascaded Gaussian Channels With Feedback","authors":"Elad Domanovitz;Anatoly Khina;Tal Philosof;Yuval Kochman","doi":"10.1109/JSAIT.2024.3416310","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3416310","url":null,"abstract":"We consider a line network of nodes, connected by additive white noise channels, equipped with local feedback. We study the velocity at which information spreads over this network. For transmission of a data packet, we give an explicit positive lower bound on the velocity, for any packet size. Furthermore, we consider streaming, that is, transmission of data packets generated at a given average arrival rate. We show that a positive velocity exists as long as the arrival rate is below the individual Gaussian channel capacity, and provide an explicit lower bound. Our analysis involves applying pulse-amplitude modulation to the data (successively in the streaming case), and using linear mean-squared error estimation at the network nodes. For general white noise, we derive exponential error-probability bounds. For single-packet transmission over channels with (sub-)Gaussian noise, we show a doubly-exponential behavior, which reduces to the celebrated Schalkwijk–Kailath scheme when considering a single node. Viewing the constellation as an “analog source”, we also provide bounds on the exponential decay of the mean-squared error of source transmission over the network.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"554-569"},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141980053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Distributed Source Coding","authors":"Jay Whang;Alliot Nagle;Anish Acharya;Hyeji Kim;Alexandros G. Dimakis","doi":"10.1109/JSAIT.2024.3412976","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3412976","url":null,"abstract":"We consider the Distributed Source Coding (DSC) problem concerning the task of encoding an input in the absence of correlated side information that is only available to the decoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder without access to the side information can asymptotically achieve the same compression rate as when the side information is available to it. This seminal result was later extended to lossy compression of distributed sources by Wyner, Ziv, Berger, and Tung. While there is vast prior work on this topic, practical DSC has been limited to synthetic datasets and specific correlation structures. Here we present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions. Rather than relying on hand-crafted source modeling, our method utilizes a conditional Vector-Quantized Variational auto-encoder (VQ-VAE) to learn the distributed encoder and decoder. We evaluate our method on multiple datasets and show that our method can handle complex correlations and achieves state-of-the-art PSNR.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"493-508"},"PeriodicalIF":0.0,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Secure Source Coding Resilient Against Compromised Users via an Access Structure","authors":"Hassan ZivariFard;Rémi A. Chou","doi":"10.1109/JSAIT.2024.3410235","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3410235","url":null,"abstract":"Consider a source and multiple users who observe the independent and identically distributed (i.i.d.) copies of correlated Gaussian random variables. The source wishes to compress its observations and store the result in a public database such that (i) authorized sets of users are able to reconstruct the source with a certain distortion level, and (ii) information leakage to non-authorized sets of colluding users is minimized. In other words, the recovery of the source is restricted to a predefined access structure. The main result of this paper is a closed-form characterization of the fundamental trade-off between the source coding rate and the information leakage rate. As an example, threshold access structures are studied, i.e., the case where any set of at least \u0000<italic>t</i>\u0000 users is able to reconstruct the source with some predefined distortion level and the information leakage at any set of users with a size smaller than \u0000<italic>t</i>\u0000 is minimized.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"478-492"},"PeriodicalIF":0.0,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information-Theoretic Tools to Understand Distributed Source Coding in Neuroscience","authors":"Ariel K. Feldman;Praveen Venkatesh;Douglas J. Weber;Pulkit Grover","doi":"10.1109/JSAIT.2024.3409683","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3409683","url":null,"abstract":"This paper brings together topics of two of Berger’s main contributions to information theory: distributed source coding, and living information theory. Our goal is to understand which information theory techniques can be helpful in understanding a distributed source coding strategy used by the natural world. Towards this goal, we study the example of the encoding of location of an animal by grid cells in its brain. We use information measures of partial information decomposition (PID) to assess the unique, redundant, and synergistic information carried by multiple grid cells, first for simulated grid cells utilizing known encodings, and subsequently for data from real grid cells. In all cases, we make simplifying assumptions so we can assess the consistency of specific PID definitions with intuition. Our results suggest that the measure of PID proposed by Bertschinger et al. (Entropy, 2014) provides intuitive insights on distributed source coding by grid cells, and can be used for subsequent studies for understanding grid-cell encoding as well as broadly in neuroscience.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"509-519"},"PeriodicalIF":0.0,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Fundamental Limit of Distributed Learning With Interchangable Constrained Statistics","authors":"Xinyi Tong;Jian Xu;Shao-Lun Huang","doi":"10.1109/JSAIT.2024.3409426","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3409426","url":null,"abstract":"In the popular federated learning scenarios, distributed nodes often represent and exchange information through functions or statistics of data, with communicative processes constrained by the dimensionality of transmitted information. This paper investigates the fundamental limits of distributed parameter estimation and model training problems under such constraints. Specifically, we assume that each node can observe a sequence of i.i.d. sampled data and communicate statistics of the observed data with dimensionality constraints. We first show the Cramer-Rao lower bound (CRLB) and the corresponding achievable estimators for the distributed parameter estimation problems, and the geometric insights and the computable algorithms of designing efficient estimators are also presented. Moreover, we consider model parameters training for distributed nodes with limited communicable statistics. We demonstrate that in order to optimize the excess risk, the feature functions of the statistics shall be designed along the largest eigenvectors of a matrix induced by the model training loss function. In summary, our results potentially provide theoretical guidelines of designing efficient algorithms for enhancing the performance of distributed learning systems.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"396-406"},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141624095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LightVeriFL: A Lightweight and Verifiable Secure Aggregation for Federated Learning","authors":"Baturalp Buyukates;Jinhyun So;Hessam Mahdavifar;Salman Avestimehr","doi":"10.1109/JSAIT.2024.3391849","DOIUrl":"https://doi.org/10.1109/JSAIT.2024.3391849","url":null,"abstract":"Secure aggregation protects the local models of the users in federated learning, by not allowing the server to obtain any information beyond the aggregate model at each iteration. Naively implementing secure aggregation fails to protect the integrity of the aggregate model in the possible presence of a malicious server forging the aggregation result, which motivates verifiable aggregation in federated learning. Existing verifiable aggregation schemes either have a linear complexity in model size or require time-consuming reconstruction at the server, that is quadratic in the number of users, in case of likely user dropouts. To overcome these limitations, we propose \u0000<monospace>LightVeriFL</monospace>\u0000, a lightweight and communication-efficient secure verifiable aggregation protocol, that provides the same guarantees for verifiability against a malicious server, data privacy, and dropout-resilience as the state-of-the-art protocols without incurring substantial communication and computation overheads. The proposed \u0000<monospace>LightVeriFL</monospace>\u0000 protocol utilizes homomorphic hash and commitment functions of constant length, that are independent of the model size, to enable verification at the users. In case of dropouts, \u0000<monospace>LightVeriFL</monospace>\u0000 uses a one-shot aggregate hash recovery of the dropped-out users, instead of a one-by-one recovery, making the verification process significantly faster than the existing approaches. Comprehensive experiments show the advantage of \u0000<monospace>LightVeriFL</monospace>\u0000 in practical settings.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"285-301"},"PeriodicalIF":0.0,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141096202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}