Latest Articles — IEEE Transactions on Information Theory

Error Correction Decoding Algorithms of RS Codes Based on an Earlier Termination Algorithm to Find the Error Locator Polynomial
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-02-06 DOI: 10.1109/TIT.2025.3539222
Zhengyi Jiang;Hao Shi;Zhongyi Huang;Linqi Song;Bo Bai;Gong Zhang;Hanxu Hou
Abstract: Reed-Solomon (RS) codes are widely used to correct errors in storage systems. Finding the error locator polynomial is one of the key steps in the error correction procedure of RS codes. The Modular Approach (MA) is an effective algorithm for solving the Welch-Berlekamp (WB) key equation to find the error locator polynomial; it requires $2t$ steps, where $t$ is the error correction capability. In this paper, we first present a new MA algorithm that requires only $2e$ steps, where $e$ is the number of errors and $e \leq t$, and then propose two fast decoding algorithms for RS codes based on it. We propose the Improved-Frequency Domain Modular Approach (I-FDMA) algorithm, which needs $2e$ steps to solve for the error locator polynomial, and present our first decoding algorithm based on it. We show that, compared with existing MA-based methods, the I-FDMA algorithm effectively reduces the decoding complexity of RS codes when $e < t$. Furthermore, we propose the $t_0$-Shortened I-FDMA ($t_0$-SI-FDMA) algorithm, where $t_0$ is a predetermined even number less than $2t-1$, based on the new termination mechanism to determine the error number $e$ quickly. We propose our second decoding algorithm based on the SI-FDMA algorithm and show that its multiplication complexity is lower than that of our first (I-FDMA) decoding algorithm when $2e < t_0 + 1$.

Vol. 71, no. 4, pp. 2564-2575. Citations: 0
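The paper's Modular Approach solves the Welch-Berlekamp key equation; as a concrete reference point only (not the paper's algorithm), the classical Berlekamp-Massey algorithm also recovers the error locator polynomial from a syndrome sequence. A minimal pure-Python sketch over a prime field GF(p), simplified relative to the GF(2^m) arithmetic used in real storage systems; the function name is ours:

```python
def berlekamp_massey(s, p):
    """Return the connection (error locator) polynomial C(x), coefficients
    in increasing degree, of the shortest LFSR generating the syndrome
    sequence s over the prime field GF(p)."""
    C = [1]          # current connection polynomial
    B = [1]          # connection polynomial before the last length change
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        # discrepancy d = s[n] + sum_{i=1..L} C[i] * s[n-i]  (mod p)
        d = s[n]
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        # C(x) <- C(x) - (d/b) * x^m * B(x); extend C if needed
        coef = (d * pow(b, p - 2, p)) % p   # d/b via Fermat inverse
        T = C[:]
        C = C + [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]

# s[n] = s[n-1] + s[n-2] (mod 7) is generated by C(x) = 1 - x - x^2:
assert berlekamp_massey([1, 1, 2, 3, 5, 1, 6, 0], 7) == [1, 6, 6]
```

Here the returned `[1, 6, 6]` encodes $C(x) = 1 - x - x^2$ over GF(7), confirming the shortest recurrence behind the sequence.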
On the Price of Decentralization in Decentralized Detection
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-02-04 DOI: 10.1109/TIT.2025.3538468
Bruce Huang;I-Hsiang Wang
Abstract: Fundamental limits on the error probabilities of a family of decentralized detection algorithms (e.g., the social learning rule proposed by Lalitha et al., 2018) over directed graphs are investigated. In decentralized detection, a network of nodes locally exchanges information about the samples they observe with their neighbors to collectively infer the underlying unknown hypothesis. Each node weighs the messages received from its neighbors to form its private belief and requires knowledge only of the data-generating distribution of its own observation. It is first shown that while the original social learning rule of Lalitha et al. achieves asymptotically vanishing error probabilities as the number of samples tends to infinity, it suffers a gap in the achievable error exponent compared to the centralized case. The gap is due to the network imbalance caused by the local weights each node uses for the messages received from its neighbors. To close this gap, a modified learning rule is proposed and shown to achieve error exponents as large as those in the centralized setup. This implies that there is essentially no first-order penalty caused by decentralization in the exponentially decaying rate of error probabilities. To elucidate the price of decentralization, the higher-order asymptotics of the error probability are further analyzed. It turns out that the price is at most a constant multiplicative factor in the error probability, equivalent to an $o(1/t)$ additive gap in the error exponent, where $t$ is the number of samples observed by each agent in the network and the number of rounds of information exchange. This constant depends on the network connectivity and captures the level of network imbalance. Simulation results on the error probability supporting the proposed learning rule are shown, together with further discussions and extensions.

Vol. 71, no. 4, pp. 2341-2359. Citations: 0
Griesmer Type Bounds for Nonlinear Codes and Their Applications
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-02-04 DOI: 10.1109/TIT.2025.3538921
Xu Pan;Hao Chen;Hongwei Liu;Shanxiang Lyu
Abstract: In this paper, we propose three Griesmer type bounds for the minimum Hamming weight of complementary codes of linear codes. Infinite families of complementary codes meeting the three bounds are given, showing that the bounds are tight. The Griesmer type bounds proposed in this paper are significantly stronger than the classical Griesmer bound for linear codes. As a by-product, we construct some optimal few-weight codes and determine their weight distributions. As an application, Griesmer type bounds for the column distance of convolutional codes are presented; these are stronger than the Singleton bound for convolutional codes.

Vol. 71, no. 4, pp. 2550-2563. Citations: 0
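The paper strengthens the classical Griesmer bound, which states that any linear $[n,k,d]_q$ code satisfies $n \geq \sum_{i=0}^{k-1} \lceil d/q^i \rceil$. As a reference point, the classical bound is easy to compute (a minimal sketch; the function name is ours):

```python
def griesmer_bound(k, d, q):
    """Classical Griesmer bound: any linear [n, k, d]_q code must have
    n >= sum_{i=0}^{k-1} ceil(d / q^i).  Integer ceiling avoids floats."""
    return sum((d + q**i - 1) // q**i for i in range(k))

# The binary [7,4,3] Hamming code meets the bound with equality:
assert griesmer_bound(4, 3, 2) == 3 + 2 + 1 + 1 == 7
# So does the binary [7,3,4] simplex code:
assert griesmer_bound(3, 4, 2) == 4 + 2 + 1 == 7
```

Codes meeting this bound with equality are called Griesmer codes; the paper's bounds play the analogous role for complementary codes of linear codes.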
Corrections to “Reed Solomon Codes Against Adversarial Insertions and Deletions”
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-02-03 DOI: 10.1109/TIT.2025.3538114
Roni Con;Amir Shpilka;Itzhak Tamo
Abstract: The purpose of this note is to correct an error made by Con et al. (2023), specifically in the proof of Theorem 9. Here we correct the proof, but as a consequence we obtain a slightly weaker result. In Theorem 9, we claimed that for integers $k$ and $n$ such that $k < n/9$, there exists an $[n,k]_q$ RS code that can decode from $n-2k+1$ insdel errors, where $q = O\left(k^{5}\left(\frac{en}{k-1}\right)^{4k-4}\right)$. Here we prove the following. Theorem 1: For integers $n$ and $k < n/9$, there exists an $[n,k]_q$ RS code, where $q = O\left(k^{4}\cdot\left(\frac{4en}{4k-3}\right)^{4k-3}\right)$ is a prime power, that can decode from $n-2k+1$ adversarial insdel errors. Note that the exponent of $n$ is $4k-3$, whereas in Theorem 9 it was $4k-4$. For constant-dimensional codes, the field size is of order $O(n^{4k-3})$; in particular, for $k=2$ the field size is of order $O(n^{5})$.

Vol. 71, no. 4, pp. 3237-3238. Citations: 0
Semi-Supervised Deep Sobolev Regression: Estimation and Variable Selection by ReQU Neural Network
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-01-31 DOI: 10.1109/TIT.2025.3537594
Zhao Ding;Chenguang Duan;Yuling Jiao;Jerry Zhijian Yang
Abstract: We propose SDORE, a semi-supervised deep Sobolev regressor, for the nonparametric estimation of the underlying regression function and its gradient. SDORE employs deep ReQU neural networks to minimize the empirical risk with gradient norm regularization, allowing the regularization term to be approximated using unlabeled data. Our study includes a thorough analysis of the convergence rates of SDORE in the $L^{2}$-norm, achieving minimax optimality. Further, we establish a convergence rate for the associated plug-in gradient estimator, even in the presence of significant domain shift. These theoretical findings offer valuable insights for selecting regularization parameters and determining the size of the neural network, while showcasing the provable advantage of leveraging unlabeled data in semi-supervised learning. To the best of our knowledge, SDORE is the first provable neural-network-based approach that simultaneously estimates the regression function and its gradient, with diverse applications such as nonparametric variable selection. The effectiveness of SDORE is validated through an extensive range of numerical simulations.

Vol. 71, no. 4, pp. 2955-2981. Citations: 0
Radon-Hurwitz Grassmannian Codes
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-01-29 DOI: 10.1109/TIT.2025.3536324
Matthew Fickus;Enrique Gomez-Leos;Joseph W. Iverson
Abstract: Every equi-isoclinic tight fusion frame (EITFF) is a type of optimal code in a Grassmannian, consisting of subspaces of a finite-dimensional Hilbert space for which the smallest principal angle between any pair of them is as large as possible. EITFFs yield dictionaries with minimal block coherence and so are ideal for certain types of compressed sensing. By refining classical work of Lemmens and Seidel based on Radon-Hurwitz theory, we fully characterize EITFFs in the special case where the dimension of the subspaces is exactly one-half that of the ambient space. We moreover show that each such "Radon-Hurwitz EITFF" is highly symmetric, with every even permutation being an automorphism.

Vol. 71, no. 4, pp. 3203-3213. Citations: 0
Robust Distributed Clustering With Redundant Data Assignment
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-01-29 DOI: 10.1109/TIT.2025.3536323
Saikiran Bulusu;Venkata Gandikota;Arya Mazumdar;Ankit Singh Rawat;Pramod K. Varshney
Abstract: In this work, we present distributed clustering algorithms that can handle large-scale data across multiple machines in the presence of faulty machines, which may be stragglers that fail to respond within a stipulated time or Byzantines that send arbitrary responses. We propose redundant data assignment schemes that enable clustering solutions based on the entire dataset even when some machines are stragglers or adversarial in nature. Our robust clustering algorithms generate a constant-factor approximate solution in the presence of stragglers or Byzantines. We also provide various constructions of the data assignment scheme that are resilient against a large fraction of faulty machines. Simulation results show that distributed algorithms based on the proposed assignment scheme provide good-quality solutions for a variety of clustering problems.

Vol. 71, no. 4, pp. 2888-2908. Citations: 0
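The abstract describes redundant data assignment only at a high level. As an illustrative scheme (hypothetical, not the paper's construction), cyclic r-fold replication already guarantees that the full dataset stays covered when up to r-1 machines straggle, since each point lands on r distinct machines:

```python
def redundant_assignment(n_points, n_machines, r):
    """Hypothetical illustration (not the paper's scheme): place each data
    point on r consecutive machines, cyclically.  Each point then lives on
    r distinct machines, so the responses from any n_machines - (r - 1)
    machines still cover the whole dataset."""
    assert 1 <= r <= n_machines
    assignment = {m: [] for m in range(n_machines)}
    for i in range(n_points):
        for j in range(r):
            assignment[(i + j) % n_machines].append(i)
    return assignment

# With 2-fold replication over 5 machines, any single straggler is tolerated:
a = redundant_assignment(10, 5, 2)
for failed in range(5):
    covered = {i for m, pts in a.items() if m != failed for i in pts}
    assert covered == set(range(10))
```

The paper's constructions are more sophisticated (they target constant-factor approximation guarantees under Byzantine responses, not just coverage), but the coverage property above is the basic reason redundancy helps.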
Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-01-29 DOI: 10.1109/TIT.2025.3535923
Song Mei;Yuchen Wu
Abstract: We investigate the efficiency of deep neural networks at approximating score functions in diffusion-based generative modeling. While existing approximation theories leverage the smoothness of score functions, they suffer from the curse of dimensionality for intrinsically high-dimensional data. This limitation is pronounced in graphical models such as Markov random fields, where the approximation efficiency of score functions remains unestablished. To address this, we observe that score functions in graphical models can often be well approximated by variational inference denoising algorithms, and that these algorithms can in turn be efficiently represented by neural networks. We demonstrate this through examples including Ising models, conditional Ising models, restricted Boltzmann machines, and sparse encoding models. Combined with off-the-shelf discretization error bounds for diffusion-based sampling, we provide an efficient sample complexity bound for diffusion-based generative modeling when the score function is learned by deep neural networks.

Vol. 71, no. 4, pp. 2930-2954. Citations: 0
Performance Bounds and Degree-Distribution Optimization of Finite-Length BATS Codes
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-01-29 DOI: 10.1109/TIT.2025.3536295
Mingyang Zhu;Shenghao Yang;Ming Jiang;Chunming Zhao
Abstract: Batched sparse (BATS) codes were proposed as a reliable communication solution for networks with packet loss. In the finite-length regime, the error probability of BATS codes under belief propagation (BP) decoding has been studied in the literature and can be analyzed by recursive formulae. However, existing analyses either do not consider precoding or treat the BATS code and the precode as two separate entities. In this paper, we analyze the word-wise error probability of finite-length BATS codes with a precode under joint decoding, including BP decoding and maximum-likelihood (ML) decoding. The joint BP decoder performs peeling decoding on a joint Tanner graph constructed from both the BATS and precode Tanner graphs, while the joint ML decoder solves a single linear system with all linear constraints implied by the BATS code and the precode. We derive closed-form upper bounds on the error probability for both decoders: low-density parity-check (LDPC) precodes are used for BP decoding, and any generic precode can be used for ML decoding. Even for BATS codes without a precode, the derived upper bound for BP decoding is more accurate than the approximate recursive formula and easier to compute than the exact recursive formula. The accuracy of the two upper bounds has been verified by extensive simulation. Based on the two bounds, we formulate an optimization problem for the degree distribution of LDPC-precoded BATS codes that improves BP performance, ML performance, or both. In our experiments, to transmit 128 packets over a line network with packet loss, the optimized LDPC-precoded BATS codes reduce the transmission overhead to less than 50% of that of standard BATS codes under comparable decoding complexity constraints.

Vol. 71, no. 4, pp. 2452-2481. Citations: 0
On Strong Secrecy for Multiple Access Channels With States and Causal CSI
IF 2.2 | CAS Tier 3 | Computer Science
IEEE Transactions on Information Theory Pub Date : 2025-01-29 DOI: 10.1109/TIT.2025.3536017
Yiqi Chen;Tobias J. Oechtering;Mikael Skoglund;Yuan Luo
Abstract: Strong secrecy communication over a discrete memoryless state-dependent multiple access channel (SD-MAC) with an external eavesdropper is investigated. The channel is governed by discrete memoryless, i.i.d. channel states, and the channel state information (CSI) is revealed to the encoders in a causal manner. The main results of this paper are inner and outer bounds on the capacity region, for which we investigate coding schemes incorporating wiretap coding and secret-key agreement between the sender and the legitimate receiver. Two block Markov coding schemes are proposed. The first is a new scheme that uses backward decoding and Wyner-Ziv coding, with the secret key constructed from a lossy description of the CSI. The second extends an existing coding scheme for point-to-point wiretap channels with causal CSI. A numerical example shows that the achievable region of the first scheme can be strictly larger than that of the second. However, neither scheme outperforms the other in general, and there are numerical examples in which each achieves rate pairs the other cannot. Our established inner bound reduces to some best-known results in the literature as special cases. We further investigate capacity-achieving cases for state-dependent multiple access wiretap channels (SD-MAWCs) with degraded message sets; it turns out that both coding schemes are optimal in these cases.

Vol. 71, no. 4, pp. 3070-3099. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10857409. Citations: 0