IEEE Journal on Selected Areas in Information Theory: Latest Publications

Information Velocity of Cascaded Gaussian Channels With Feedback
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-06-18. DOI: 10.1109/JSAIT.2024.3416310
Elad Domanovitz; Anatoly Khina; Tal Philosof; Yuval Kochman
Abstract: We consider a line network of nodes connected by additive white noise channels and equipped with local feedback, and study the velocity at which information spreads over this network. For transmission of a data packet, we give an explicit positive lower bound on the velocity for any packet size. Furthermore, we consider streaming, that is, transmission of data packets generated at a given average arrival rate. We show that a positive velocity exists as long as the arrival rate is below the individual Gaussian channel capacity, and provide an explicit lower bound. Our analysis involves applying pulse-amplitude modulation to the data (successively in the streaming case) and using linear mean-squared error estimation at the network nodes. For general white noise, we derive exponential error-probability bounds. For single-packet transmission over channels with (sub-)Gaussian noise, we show a doubly-exponential behavior, which reduces to the celebrated Schalkwijk–Kailath scheme when considering a single node. Viewing the constellation as an "analog source", we also provide bounds on the exponential decay of the mean-squared error of source transmission over the network.
Volume 5, pages 554-569. Citations: 0
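The single-node case the abstract reduces to, the Schalkwijk–Kailath scheme, can be sketched in a few lines: with noiseless feedback, the transmitter repeatedly sends the receiver's current estimation error scaled to the power constraint, and a linear MMSE correction shrinks the error variance by a factor of 1/(1+SNR) per channel use. The following toy simulation is not the paper's cascaded construction; a scalar message, unit prior variance, and unit-variance noise are assumptions made for illustration.

```python
import math
import random

def sk_feedback(theta, snr, n_rounds, rng):
    """Toy Schalkwijk-Kailath recursion for a scalar message `theta`
    over a unit-variance AWGN channel with noiseless feedback."""
    est = 0.0   # receiver's running estimate (known to the transmitter)
    var = 1.0   # current error variance (theta assumed to have unit prior)
    for _ in range(n_rounds):
        # Transmitter scales the current estimation error to power `snr`.
        x = math.sqrt(snr / var) * (est - theta)
        y = x + rng.gauss(0.0, 1.0)
        # Linear MMSE correction: error variance shrinks by 1/(1 + snr).
        est -= math.sqrt(var * snr) / (1.0 + snr) * y
        var /= 1.0 + snr
    return est, var
```

After n rounds the error variance is (1+SNR)^(-n), which is what drives the doubly-exponential error probability mentioned in the abstract.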
Neural Distributed Source Coding
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-06-14. DOI: 10.1109/JSAIT.2024.3412976
Jay Whang; Alliot Nagle; Anish Acharya; Hyeji Kim; Alexandros G. Dimakis
Abstract: We consider the Distributed Source Coding (DSC) problem, the task of encoding an input in the absence of correlated side information that is only available to the decoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder without access to the side information can asymptotically achieve the same compression rate as when the side information is available to it. This seminal result was later extended to lossy compression of distributed sources by Wyner, Ziv, Berger, and Tung. While there is vast prior work on this topic, practical DSC has been limited to synthetic datasets and specific correlation structures. Here we present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions. Rather than relying on hand-crafted source modeling, our method utilizes a conditional Vector-Quantized Variational Autoencoder (VQ-VAE) to learn the distributed encoder and decoder. We evaluate our method on multiple datasets and show that it can handle complex correlations and achieves state-of-the-art PSNR.
Volume 5, pages 493-508. Citations: 0
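The discrete bottleneck at the heart of any VQ-VAE is a nearest-neighbor lookup in a learned codebook: the encoder transmits only the index of the winning entry. A minimal sketch of that quantization step follows; the two-dimensional codebook is a made-up example, and the paper's conditional architecture and training loop are not shown.

```python
def vq_quantize(z, codebook):
    """Map an encoder output vector z to the index of its nearest
    codebook entry (the discrete message a VQ-VAE transmits)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: sqdist(z, codebook[k]))

# Hypothetical 4-entry codebook of 2-D vectors.
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
idx = vq_quantize([0.9, 0.2], codebook)  # nearest entry is [1.0, 0.0]
```

In the distributed setting, the decoder would map this index back to a reconstruction while additionally conditioning on its side information.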
Secure Source Coding Resilient Against Compromised Users via an Access Structure
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-06-10. DOI: 10.1109/JSAIT.2024.3410235
Hassan ZivariFard; Rémi A. Chou
Abstract: Consider a source and multiple users who observe independent and identically distributed (i.i.d.) copies of correlated Gaussian random variables. The source wishes to compress its observations and store the result in a public database such that (i) authorized sets of users are able to reconstruct the source with a certain distortion level, and (ii) information leakage to non-authorized sets of colluding users is minimized. In other words, recovery of the source is restricted to a predefined access structure. The main result of this paper is a closed-form characterization of the fundamental trade-off between the source coding rate and the information leakage rate. As an example, threshold access structures are studied, i.e., the case where any set of at least t users is able to reconstruct the source with some predefined distortion level, and the information leakage at any set of fewer than t users is minimized.
Volume 5, pages 478-492. Citations: 0
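The paper's contribution is an information-theoretic rate-leakage trade-off, not a protocol, but the notion of a t-threshold access structure is the same one realized by Shamir secret sharing, a different (cryptographic) primitive shown here purely to make the access structure concrete: any t shares reconstruct the secret, while fewer reveal nothing about it. Parameters are toy-sized and illustrative only.

```python
import random

P = 2**31 - 1  # prime modulus for the toy field (illustrative only)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Here any 3 of 5 users form an authorized set; the paper studies the analogous structure at the level of compression rate and leakage rather than exact secrecy.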
Information-Theoretic Tools to Understand Distributed Source Coding in Neuroscience
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-06-10. DOI: 10.1109/JSAIT.2024.3409683
Ariel K. Feldman; Praveen Venkatesh; Douglas J. Weber; Pulkit Grover
Abstract: This paper brings together two of Berger's main contributions to information theory: distributed source coding, and living information theory. Our goal is to understand which information-theoretic techniques can be helpful in understanding a distributed source coding strategy used by the natural world. Towards this goal, we study the example of the encoding of an animal's location by grid cells in its brain. We use information measures of partial information decomposition (PID) to assess the unique, redundant, and synergistic information carried by multiple grid cells, first for simulated grid cells using known encodings, and subsequently for data from real grid cells. In all cases, we make simplifying assumptions so we can assess the consistency of specific PID definitions with intuition. Our results suggest that the PID measure proposed by Bertschinger et al. (Entropy, 2014) provides intuitive insights on distributed source coding by grid cells, and can be used in subsequent studies of grid-cell encoding as well as more broadly in neuroscience.
Volume 5, pages 509-519. Citations: 0
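The PID measure of Bertschinger et al. involves an optimization over joint distributions and is not reproduced here; the building block underneath every PID computation, however, is plain mutual information estimated from a joint probability table, sketched below (the dict-of-tuples input format is an assumption of this sketch).

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)
```

For a location variable X and two cells' responses Y1, Y2, quantities like I(X; Y1), I(X; Y2), and I(X; (Y1, Y2)) computed this way are the inputs that any PID then decomposes into unique, redundant, and synergistic parts.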
On the Fundamental Limit of Distributed Learning With Interchangable Constrained Statistics
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-06-04. DOI: 10.1109/JSAIT.2024.3409426
Xinyi Tong; Jian Xu; Shao-Lun Huang
Abstract: In popular federated learning scenarios, distributed nodes often represent and exchange information through functions or statistics of data, with communication constrained by the dimensionality of the transmitted information. This paper investigates the fundamental limits of distributed parameter estimation and model training under such constraints. Specifically, we assume that each node can observe a sequence of i.i.d. sampled data and communicate statistics of the observed data under dimensionality constraints. We first present the Cramér-Rao lower bound (CRLB) and the corresponding achievable estimators for the distributed parameter estimation problem, along with geometric insights and computable algorithms for designing efficient estimators. Moreover, we consider model parameter training for distributed nodes with limited communicable statistics, and demonstrate that, to optimize the excess risk, the feature functions of the statistics should be designed along the largest eigenvectors of a matrix induced by the model training loss function. In summary, our results potentially provide theoretical guidelines for designing efficient algorithms that enhance the performance of distributed learning systems.
Volume 5, pages 396-406. Citations: 0
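As a concrete instance of the Cramér-Rao machinery the paper builds on: for n i.i.d. samples from N(μ, σ²), the per-sample Fisher information for μ is 1/σ², so the CRLB is σ²/n, and the sample mean attains it. A quick numerical check of this textbook case (not the paper's distributed setting):

```python
import random
import statistics

def crlb_gaussian_mean(sigma, n):
    # Fisher information per sample for the mean of N(mu, sigma^2) is
    # 1/sigma^2, so n i.i.d. samples give the lower bound sigma^2 / n.
    return sigma ** 2 / n

def sample_mean_variance(mu, sigma, n, trials, rng):
    # Empirical variance of the sample mean across independent trials;
    # it should hover around the CRLB, since the sample mean is efficient.
    estimates = [statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
                 for _ in range(trials)]
    return statistics.pvariance(estimates)
```

In the paper's setting, the statistics each node may transmit are dimension-limited, which changes the attainable bound; this sketch only fixes the unconstrained baseline.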
LightVeriFL: A Lightweight and Verifiable Secure Aggregation for Federated Learning
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-04-29. DOI: 10.1109/JSAIT.2024.3391849
Baturalp Buyukates; Jinhyun So; Hessam Mahdavifar; Salman Avestimehr
Abstract: Secure aggregation protects the local models of the users in federated learning by not allowing the server to obtain any information beyond the aggregate model at each iteration. Naively implemented, secure aggregation fails to protect the integrity of the aggregate model against a malicious server forging the aggregation result, which motivates verifiable aggregation in federated learning. Existing verifiable aggregation schemes either have complexity linear in the model size or require time-consuming reconstruction at the server, quadratic in the number of users, in the case of likely user dropouts. To overcome these limitations, we propose LightVeriFL, a lightweight and communication-efficient secure verifiable aggregation protocol that provides the same guarantees of verifiability against a malicious server, data privacy, and dropout-resilience as state-of-the-art protocols, without incurring substantial communication and computation overheads. LightVeriFL utilizes homomorphic hash and commitment functions of constant length, independent of the model size, to enable verification at the users. In case of dropouts, LightVeriFL uses a one-shot aggregate hash recovery of the dropped-out users, instead of one-by-one recovery, making verification significantly faster than existing approaches. Comprehensive experiments show the advantage of LightVeriFL in practical settings.
Volume 5, pages 285-301. Citations: 0
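LightVeriFL's actual hash and commitment constructions are not reproduced here. As a hedged illustration of what a "homomorphic hash of constant length" buys, consider the classic discrete-log-style hash H(x) = G^x mod P: the hash of a sum equals the product of the individual hashes, so users can check the server's claimed aggregate against the product of per-user hashes, each of constant size regardless of model dimension. Parameters below are illustrative only and not cryptographically vetted.

```python
P = 2**61 - 1  # large prime modulus (illustrative, unvetted parameters)
G = 3          # fixed public base

def hhash(x):
    """Additively homomorphic 'hash': H(x) = G^x mod P, so
    H(a) * H(b) mod P == H(a + b). Output size is constant in |x|."""
    return pow(G, x, P)
```

A real scheme would hash vectors (one exponent per coordinate or via a multi-base construction) and pair the hash with hiding commitments; this sketch shows only the homomorphism that makes aggregate verification cheap.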
Learning Algorithm Generalization Error Bounds via Auxiliary Distributions
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-04-25. DOI: 10.1109/JSAIT.2024.3391900
Gholamali Aminian; Saeed Masiha; Laura Toni; Miguel R. D. Rodrigues
Abstract: Generalization error bounds are essential for understanding how well machine learning models work. In this work, we propose a novel method, the Auxiliary Distribution Method, that leads to new upper bounds on expected generalization errors appropriate for supervised learning scenarios. We show that our general upper bounds can be specialized, under some conditions, to new bounds involving the α-Jensen-Shannon and α-Rényi (0 < α < 1) information between a random variable modeling the set of training samples and another random variable modeling the set of hypotheses. Our upper bounds based on α-Jensen-Shannon information are also finite. Additionally, we demonstrate how our auxiliary distribution method can be used to derive upper bounds on the excess risk of some learning algorithms in the supervised learning context, and on the generalization error under distribution mismatch in supervised learning, where the mismatch between the test and training data distributions is modeled as an α-Jensen-Shannon or α-Rényi divergence. We also outline the conditions under which our proposed upper bounds can be tighter than earlier upper bounds.
Volume 5, pages 273-284. Citations: 0
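One common definition of the α-Jensen-Shannon divergence (the paper's exact definition may differ in constants) mixes the two distributions with weight α before taking KL terms: JS_α(P, Q) = α·KL(P‖M) + (1−α)·KL(Q‖M) with M = αP + (1−α)Q. At α = 1/2 it recovers the usual JSD, and it is always finite, which is what makes the resulting generalization bounds finite. A small sketch for discrete distributions:

```python
from math import log2

def kl(p, q):
    """KL divergence in bits between discrete distributions as lists."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def alpha_js(p, q, alpha):
    """alpha-Jensen-Shannon divergence under the mixture definition;
    finite for 0 < alpha < 1 since the mixture dominates both arguments."""
    m = [alpha * pi + (1 - alpha) * qi for pi, qi in zip(p, q)]
    return alpha * kl(p, m) + (1 - alpha) * kl(q, m)
```

For disjoint-support distributions, where plain KL is infinite, JS_α stays bounded (it equals 1 bit for point masses on different symbols at α = 1/2), illustrating the finiteness the abstract highlights.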
Neural Distributed Compressor Discovers Binning
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-04-24. DOI: 10.1109/JSAIT.2024.3393429
Ezgi Ozyilkan; Johannes Ballé; Elza Erkip
Abstract: We consider lossy compression of an information source when the decoder has lossless access to a correlated one. This setup, also known as the Wyner-Ziv problem, is a special case of distributed source coding. To this day, practical approaches for the Wyner-Ziv problem have neither been fully developed nor heavily investigated. We propose a data-driven method based on machine learning that leverages the universal function approximation capability of artificial neural networks. We find that our neural network-based compression scheme, built on variational vector quantization, recovers some principles of the optimum theoretical solution of the Wyner-Ziv setup, such as binning in the source space and optimal combination of the quantization index with the side information, for exemplary sources. These behaviors emerge although no structure exploiting knowledge of the source distributions was imposed. Binning is a widely used tool in information-theoretic proofs and methods, and to our knowledge this is the first time it has been explicitly observed to emerge from data-driven learning.
Volume 5, pages 246-260. Citations: 0
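Binning in its simplest scalar form, the structure the learned compressor rediscovers: the encoder quantizes the source and announces only the quantizer index modulo the number of bins; the decoder resolves the resulting ambiguity by picking the level in the announced bin closest to its side information. A toy hand-coded sketch (uniform quantizer; step size and bin count are illustrative, not learned):

```python
def wz_encode(x, step, n_bins):
    """Quantize x and transmit only the bin (coset) index."""
    return round(x / step) % n_bins

def wz_decode(bin_idx, side_info, step, n_bins):
    """Pick the quantizer level in the announced bin that is closest
    to the decoder's side information."""
    base = round(side_info / step)
    candidates = [base + d for d in range(-n_bins, n_bins + 1)
                  if (base + d) % n_bins == bin_idx]
    q = min(candidates, key=lambda c: abs(c * step - side_info))
    return q * step
```

With side information close to the source, the decoder lands on the correct level while the encoder spent only log2(n_bins) bits instead of describing the full quantizer index.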
Training Generative Models From Privatized Data via Entropic Optimal Transport
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-04-16. DOI: 10.1109/JSAIT.2024.3387463
Daria Reshetova; Wei-Ning Chen; Ayfer Özgür
Abstract: Local differential privacy is a powerful method for privacy-preserving data collection. In this paper, we develop a framework for training Generative Adversarial Networks (GANs) on differentially privatized data. We show that entropic regularization of optimal transport, a popular regularization method often leveraged for its computational benefits, enables the generator to learn the raw (unprivatized) data distribution even though it only has access to privatized samples. We prove that, at the same time, this leads to fast statistical convergence at the parametric rate. Entropic regularization of optimal transport thus uniquely enables the mitigation of both the effects of privatization noise and the curse of dimensionality in statistical convergence. We provide experimental evidence supporting the efficacy of our framework in practice.
Volume 5, pages 221-235. Citations: 0
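Entropic regularization turns optimal transport into a problem solvable by Sinkhorn's fixed-point iterations, which alternately rescale the rows and columns of a Gibbs kernel until both marginals are matched. This is the standard computational workhorse behind entropic OT, sketched here for discrete marginals (the paper's GAN training loop and debiasing analysis are not shown):

```python
from math import exp

def sinkhorn(cost, a, b, eps, iters=200):
    """Entropic-regularized OT plan between discrete marginals a and b,
    for a cost matrix `cost` and regularization strength `eps`."""
    n, m = len(a), len(b)
    # Gibbs kernel: small eps concentrates mass on low-cost pairs.
    K = [[exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Transport plan: P[i][j] = u[i] * K[i][j] * v[j].
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

As eps shrinks the plan approaches the unregularized OT solution; the paper's point is that keeping eps positive also absorbs privatization noise while retaining parametric-rate convergence.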
Differentially Private Stochastic Linear Bandits: (Almost) for Free
IEEE Journal on Selected Areas in Information Theory. Pub Date: 2024-04-16. DOI: 10.1109/JSAIT.2024.3389954
Osama Hanna; Antonious M. Girgis; Christina Fragouli; Suhas Diggavi
Abstract: In this paper, we propose differentially private algorithms for stochastic linear bandits in the central, local, and shuffled models. In the central model, we achieve almost the same regret as the optimal non-private algorithms, which means we get privacy for free. In particular, we achieve a regret of Õ(√T + 1/ε), matching the known lower bound for private linear bandits, while the best previously known algorithm achieves Õ(√T/ε). In the local case, we achieve a regret of Õ(√T/ε), which matches the non-private regret for constant ε but suffers a regret penalty when ε is small. In the shuffled model, we also achieve a regret of Õ(√T + 1/ε), while the best previously known algorithm suffers a regret of Õ(T^(3/5)/ε). Our numerical evaluation validates our theoretical results. Our results generalize to contextual linear bandits with known context distributions.
Volume 5, pages 135-147. Citations: 0
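The paper's central-model algorithm is considerably more involved than a single noisy release, but the basic central-DP primitive underneath such results is the Laplace mechanism: clip each user's contribution so the query has bounded sensitivity, then add noise with scale sensitivity/ε. A minimal sketch of that primitive alone (parameters and function names are hypothetical, not the paper's algorithm):

```python
import math
import random

def private_sum(values, epsilon, clip, rng):
    """epsilon-DP release of a sum of per-user values: clip each value to
    [-clip, clip] (sensitivity 2*clip), then add Laplace(2*clip/epsilon)
    noise sampled by inverse-CDF from a uniform draw."""
    total = sum(max(-clip, min(clip, v)) for v in values)
    u = rng.random() - 0.5
    scale = 2.0 * clip / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return total + noise
```

The regret results quoted above come from injecting noise of this flavor into the bandit algorithm's sufficient statistics across rounds (e.g. via tree-based aggregation) so that the total privacy cost stays at ε while the noise added per round remains small.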