IEEE Transactions on Signal Processing — Latest Articles

An Investigation of Using Rigid Body Receivers for Locating a Non-Cooperative Object by Pseudo-Ranges in the Absence of Synchronization
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-27 DOI: 10.1109/TSP.2025.3535099
Xiaochuan Ke;K. C. Ho
{"title":"An Investigation of Using Rigid Body Receivers for Locating a Non-Cooperative Object by Pseudo-Ranges in the Absence of Synchronization","authors":"Xiaochuan Ke;K. C. Ho","doi":"10.1109/TSP.2025.3535099","DOIUrl":"10.1109/TSP.2025.3535099","url":null,"abstract":"A traditional receiver has only one sensor to observe the signal from an object for localization. This research investigates the extension of a receiver to a rigid body (RB) that has several sensors attached to its different spots for non-cooperative localization. In addition to the position uncertainties of RB receivers as in a typical wireless sensing network, their orientations may not be known. We show that using RB receivers relieves the stringent requirement of synchronization among them for locating a non-cooperative object by time observations, even without knowledge about the orientations of the RBs. The minimum number of illuminators, RB receivers and sensors, as well as the geometry requirement to achieve localization are established, for the cases of without and with the availability of inaccurate orientations. The optimum placements of the sensors within an RB receiver and the RBs in the localization space are derived to achieve the A-optimality for the object location estimation under Gaussian noise. Simulation results confirm well the developed theories.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"970-987"},"PeriodicalIF":4.6,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143050365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
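A minimal numerical illustration (with made-up geometry, not the paper's estimator) of why multiple sensors on one rigid body relax the synchronization requirement: sensors mounted on the same RB share the receiver's clock, so differencing their pseudo-ranges cancels the unknown clock offset.

```python
import numpy as np

# Two sensors on one rigid-body receiver observe the same unknown clock
# offset tau, so the pseudo-range difference is offset-free and can drive
# a TDOA-style localizer without inter-receiver synchronization.
u = np.array([3.0, -1.0, 2.0])      # unknown object position (toy values)
s1 = np.array([0.0, 0.0, 0.0])      # sensor 1 on the rigid body
s2 = np.array([0.5, 0.2, 0.0])      # sensor 2 on the same rigid body
tau = 7.3                           # unknown clock offset (in range units)

d1 = np.linalg.norm(u - s1)         # true range to sensor 1
d2 = np.linalg.norm(u - s2)         # true range to sensor 2
r1 = d1 + tau                       # pseudo-range at sensor 1
r2 = d2 + tau                       # pseudo-range at sensor 2 (same tau)

print(r1 - r2, d1 - d2)             # the difference no longer contains tau
assert np.isclose(r1 - r2, d1 - d2)
```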
Subspace Constrained Variational Bayesian Inference for Structured Compressive Sensing With a Dynamic Grid
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-24 DOI: 10.1109/TSP.2025.3532953
An Liu;Yufan Zhou;Wenkang Xu
{"title":"Subspace Constrained Variational Bayesian Inference for Structured Compressive Sensing With a Dynamic Grid","authors":"An Liu;Yufan Zhou;Wenkang Xu","doi":"10.1109/TSP.2025.3532953","DOIUrl":"10.1109/TSP.2025.3532953","url":null,"abstract":"We investigate the problem of recovering a structured sparse signal from a linear observation model with an uncertain dynamic grid in the sensing matrix. The state-of-the-art expectation maximization based compressed sensing (EM-CS) methods, such as turbo compressed sensing (Turbo-CS) and turbo variational Bayesian inference (Turbo-VBI), have a relatively slow convergence speed due to the double-loop iterations between the E-step and M-step. Moreover, each inner iteration in the E-step involves a high-dimensional matrix inverse in general, which is unacceptable for problems with large signal dimensions or real-time calculation requirements. Although there are some attempts to avoid the high-dimensional matrix inverse by majorization minimization, the convergence speed and accuracy are often sacrificed. To better address this problem, we propose an alternating estimation framework based on a novel subspace constrained VBI (SC-VBI) method, in which the high-dimensional matrix inverse is replaced by a low-dimensional subspace constrained matrix inverse (with the dimension equal to the sparsity level). We further prove the convergence of the SC-VBI to a stationary solution of the Kullback-Leibler divergence minimization problem. 
Simulations demonstrate that the proposed SC-VBI algorithm can achieve a much better tradeoff between complexity per iteration, convergence speed, and performance compared to the state-of-the-art algorithms.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"781-794"},"PeriodicalIF":4.6,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
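The computational idea behind the abstract's key substitution — replacing a full high-dimensional posterior inverse with one whose dimension equals the sparsity level — can be sketched on a plain sparse Bayesian linear model. This is a simplification of SC-VBI with made-up dimensions and variable names, assuming the support estimate is already available.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                 # signal dim, measurements, sparsity level
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.standard_normal(k)
sigma = 1e-3
y = A @ x_true + sigma * rng.standard_normal(m)

# A full VBI-style posterior mean would need an n x n (or m x m) inverse.
# Restricting the Gaussian posterior to the k columns in the estimated
# support S reduces this to a k x k solve.
gamma_on = 10.0                      # prior variance on the support
A_S = A[:, support]
H = A_S.T @ A_S / sigma**2 + np.eye(k) / gamma_on   # k x k posterior precision
x_S = np.linalg.solve(H, A_S.T @ y / sigma**2)      # k x k solve, not n x n

err = np.linalg.norm(x_S - x_true[support])
print("support-restricted estimation error:", err)
assert err < 0.05
```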
The Effectiveness of Local Updates for Decentralized Learning Under Data Heterogeneity
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-24 DOI: 10.1109/TSP.2025.3533208
Tongle Wu;Zhize Li;Ying Sun
{"title":"The Effectiveness of Local Updates for Decentralized Learning Under Data Heterogeneity","authors":"Tongle Wu;Zhize Li;Ying Sun","doi":"10.1109/TSP.2025.3533208","DOIUrl":"10.1109/TSP.2025.3533208","url":null,"abstract":"We revisit two fundamental decentralized optimization methods, Decentralized Gradient Tracking (DGT) and Decentralized Gradient Descent (DGD), with multiple local updates. We consider two settings and demonstrate that incorporating local update steps can reduce communication complexity. Specifically, for <inline-formula><tex-math>$mu$</tex-math></inline-formula>-strongly convex and <inline-formula><tex-math>$L$</tex-math></inline-formula>-smooth loss functions, we proved that local DGT achieves communication complexity <inline-formula><tex-math>$tilde{mathcal{O}}Big{(}frac{L}{mu(K+1)}+frac{delta+{}{mu}}{mu(1-rho)}+frac{rho}{(1-rho)^{2}}cdotfrac{L+delta}{mu}Big{)}$</tex-math></inline-formula>, where <inline-formula><tex-math>$K$</tex-math></inline-formula> is the number of additional local update, <inline-formula><tex-math>$rho$</tex-math></inline-formula> measures the network connectivity and <inline-formula><tex-math>$delta$</tex-math></inline-formula> measures the second-order heterogeneity of the local losses. Our results reveal the tradeoff between communication and computation and show increasing <inline-formula><tex-math>$K$</tex-math></inline-formula> can effectively reduce communication costs when the data heterogeneity is low and the network is well-connected. We then consider the over-parameterization regime where the local losses share the same minimums. We proved that employing local updates in DGD, even without gradient correction, achieves exact linear convergence under the Polyak-Łojasiewicz (PL) condition, which can yield a similar effect as DGT in reducing communication complexity. Customization of the result to linear models is further provided, with improved rate expression. 
Numerical experiments validate our theoretical results.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"751-765"},"PeriodicalIF":4.6,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
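A toy sketch of DGD with multiple local updates in the over-parameterized regime the abstract describes: three agents hold under-determined quadratics that share a common minimizer, take $K$ local gradient steps per round, and then gossip-average with a doubly stochastic mixing matrix. The losses, weights, and stepsize below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Each agent i minimizes f_i(x) = 0.5 * (a_i . (x - x*))^2, a singular
# quadratic; all three share the minimizer x* (over-parameterization).
a = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_star = np.array([1.0, -2.0])
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])    # doubly stochastic mixing matrix
X = np.zeros((3, 2))                   # agents' iterates, one row each
eta, K = 0.2, 3                        # stepsize, local updates per round

for _ in range(300):
    for i in range(3):                 # K local gradient steps per agent
        for _ in range(K):
            X[i] -= eta * (a[i] @ (X[i] - x_star)) * a[i]
    X = W @ X                          # one communication (gossip) round

print("max distance to shared minimizer:", np.abs(X - x_star).max())
assert np.abs(X - x_star).max() < 1e-3
```

With a shared minimizer, local steps plus averaging drive every agent to $x^{*}$ exactly, matching the abstract's claim that DGD needs no gradient correction in this regime.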
Wasserstein Distributionally Robust Graph Learning via Algorithm Unrolling
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-23 DOI: 10.1109/TSP.2025.3526287
Xiang Zhang;Yinfei Xu;Mingjie Shao;Yonina C. Eldar
{"title":"Wasserstein Distributionally Robust Graph Learning via Algorithm Unrolling","authors":"Xiang Zhang;Yinfei Xu;Mingjie Shao;Yonina C. Eldar","doi":"10.1109/TSP.2025.3526287","DOIUrl":"10.1109/TSP.2025.3526287","url":null,"abstract":"In this paper, we consider inferring the underlying graph topology from smooth graph signals. Most existing approaches learn graphs by minimizing a well-designed empirical risk using the observed data, which may be prone to data uncertainty that arises from noisy measurements and limited observability. Therefore, the learned graphs may be unreliable and exhibit poor out-of-sample performance. To enhance the robustness to data uncertainty, we propose a smoothness-based graph learning framework from a distributionally robust perspective, which is equivalent to solving an <inline-formula><tex-math>$mathrm{inf-sup}$</tex-math></inline-formula> problem. However, learning graphs directly in this way is challenging since (i) the <inline-formula><tex-math>$mathrm{inf-sup}$</tex-math></inline-formula> problem is intractable, and (ii) many parameters need to be manually determined. To address these issues, we first reformulate the <inline-formula><tex-math>$mathrm{inf-sup}$</tex-math></inline-formula> problem into a tractable one, where robustness is achieved via a regularizer. Theoretically, we show that the regularizer can improve generalization of the proposed graph estimator by bounding the out-of-sample risks. We then propose an algorithm based on the ADMM framework to solve the induced problem and further unroll it into a neural network. All parameters are determined automatically and simultaneously by training the unrolled network. 
Extensive experiments on both synthetic and real-world data demonstrate that our approach can achieve superior and more robust performance than existing models on different observed signals.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"676-690"},"PeriodicalIF":4.6,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143026443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
On Optimal MMSE Channel Estimation for One-Bit Quantized MIMO Systems
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-21 DOI: 10.1109/TSP.2025.3531779
Minhua Ding;Italo Atzeni;Antti Tölli;A. Lee Swindlehurst
{"title":"On Optimal MMSE Channel Estimation for One-Bit Quantized MIMO Systems","authors":"Minhua Ding;Italo Atzeni;Antti Tölli;A. Lee Swindlehurst","doi":"10.1109/TSP.2025.3531779","DOIUrl":"10.1109/TSP.2025.3531779","url":null,"abstract":"This paper focuses on the minimum mean squared error (MMSE) channel estimator for multiple-input multiple-output (MIMO) systems with one-bit quantization at the receiver side. Despite its optimality and significance in estimation theory, the MMSE estimator has not been fully investigated in this context due to its general nonlinearity and computational complexity. Instead, the typically suboptimal Bussgang linear MMSE (BLMMSE) channel estimator has been widely adopted. In this work, we develop a new framework to compute the MMSE channel estimator that hinges on the computation of the orthant probability of a multivariate normal distribution. Based on this framework, we determine a necessary and sufficient condition for the BLMMSE channel estimator to be optimal and thus equivalent to the MMSE estimator. 
Under the assumption of specific channel correlation or pilot symbols, we further utilize the framework to derive analytical expressions for the MMSE estimator that are particularly convenient for the computation when certain system dimensions become large, thereby enabling a comparison between the BLMMSE and MMSE channel estimators in these cases.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"617-632"},"PeriodicalIF":4.6,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10848316","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
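The nonlinearity of the MMSE estimator under one-bit quantization is easiest to see in a scalar real-valued toy model, not the paper's MIMO framework: for $h \sim \mathcal{N}(0, \sigma_h^2)$ observed as $b = \operatorname{sign}(h+n)$ with $n \sim \mathcal{N}(0, \sigma_n^2)$, the conditional mean has the closed form $E[h \mid b] = b\,\sigma_h^2\sqrt{2/\pi}/\sqrt{\sigma_h^2+\sigma_n^2}$, which a Monte Carlo run can confirm.

```python
import numpy as np

# Scalar toy model: E[h|z] = (var_h / var_z) z with z = h + n, and
# E[z | z > 0] = sqrt(2 var_z / pi), giving the closed form below.
var_h, var_n = 1.0, 1.0
mmse_plus = var_h * np.sqrt(2.0 / np.pi) / np.sqrt(var_h + var_n)

rng = np.random.default_rng(0)
h = np.sqrt(var_h) * rng.standard_normal(1_000_000)
n = np.sqrt(var_n) * rng.standard_normal(1_000_000)
b = np.sign(h + n)
mc_plus = h[b > 0].mean()            # Monte Carlo estimate of E[h | b = +1]

print(mmse_plus, mc_plus)
assert abs(mmse_plus - mc_plus) < 0.01
```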
Dynamic Spectrum Cartography: Reconstructing Spatial-Spectral-Temporal Radio Frequency Map via Tensor Completion
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-21 DOI: 10.1109/TSP.2025.3531872
Xiaonan Chen;Jun Wang;Qingyang Huang
{"title":"Dynamic Spectrum Cartography: Reconstructing Spatial-Spectral-Temporal Radio Frequency Map via Tensor Completion","authors":"Xiaonan Chen;Jun Wang;Qingyang Huang","doi":"10.1109/TSP.2025.3531872","DOIUrl":"10.1109/TSP.2025.3531872","url":null,"abstract":"Spectrum cartography (SC) aims to construct a global radio-frequency (RF) map across multiple domains, e.g., space, frequency and time, from sparse sensor samples. Recent state-of-the-art SC methods have successfully established the recoverability of <inline-formula><tex-math>$3$</tex-math></inline-formula>-D spatial-spectral RF maps using identifiable models, such as non-negative matrix factorization (NMF) and block-term tensor decomposition (BTD). However, these models do not account for possible time dynamics in RF environment. This work takes a step forward and focuses on a <inline-formula><tex-math>$4$</tex-math></inline-formula>-D spatial-spectral-temporal SC task under time-varying scenarios. From a data recovery viewpoint, the task is highly ill-posed since the degree of freedom (DoF) in a <inline-formula><tex-math>$4$</tex-math></inline-formula>-D map is extremely high. To address this issue, a two-stage methodology is put forth: for stage one, sensor measurements are unraveled into incomplete RF map w.r.t each emitter; for stage two, individual RF maps are completed in parallel and then synthesize the <inline-formula><tex-math>$4$</tex-math></inline-formula>-D map. In this way, DoF in the recovery process is significantly reduced. Two different algorithms are designed, including a basic batch-based one and a full-fledged streaming one enabling on-line SC. From the theory side, recoverability of the proposed approaches is characterized by certain sampling patterns or complexity. 
Experiments using synthetic, ray-tracing, and real-world data are employed to showcase the effectiveness of the proposed methods.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"1184-1199"},"PeriodicalIF":4.6,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
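The stage-two step — completing one emitter's map from sparse samples — can be mimicked with a minimal masked low-rank completion. The sketch below assumes a toy rank-1 space-by-frequency map $X = uv^{\top}$ and alternates closed-form least-squares updates over the observed entries only; dimensions and sampling rate are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
u_true = np.abs(rng.standard_normal(20))       # toy spatial loss field
v_true = np.abs(rng.standard_normal(15))       # toy power spectral density
X = np.outer(u_true, v_true)                   # rank-1 map of one emitter
mask = rng.random(X.shape) < 0.6               # 60% of entries sampled

u = np.ones(20)
v = np.ones(15)
for _ in range(100):
    # row/column least-squares updates restricted to observed entries
    u = (mask * X) @ v / np.maximum((mask * np.square(v)).sum(1), 1e-12)
    v = (mask * X).T @ u / np.maximum((mask.T * np.square(u)).sum(1), 1e-12)

rel_err = np.linalg.norm(np.outer(u, v) - X) / np.linalg.norm(X)
print("relative completion error:", rel_err)
assert rel_err < 1e-2
```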
Single-Source Localization as an Eigenvalue Problem
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-21 DOI: 10.1109/TSP.2025.3532102
Martin Larsson;Viktor Larsson;Kalle Åström;Magnus Oskarsson
{"title":"Single-Source Localization as an Eigenvalue Problem","authors":"Martin Larsson;Viktor Larsson;Kalle Åström;Magnus Oskarsson","doi":"10.1109/TSP.2025.3532102","DOIUrl":"10.1109/TSP.2025.3532102","url":null,"abstract":"This paper introduces a novel method for solving the single-source localization problem, specifically addressing the case of trilateration. We formulate the problem as a weighted least-squares problem in the squared distances and demonstrate how suitable weights are chosen to accommodate different noise distributions. By transforming this formulation into an eigenvalue problem, we leverage existing eigensolvers to achieve a fast, numerically stable, and easily implemented solver. Furthermore, our theoretical analysis establishes that the globally optimal solution corresponds to the largest real eigenvalue, drawing parallels to the existing literature on the trust-region subproblem. Unlike previous works, we give special treatment to degenerate cases, where multiple and possibly infinitely many solutions exist. We provide a geometric interpretation of the solution sets and design the proposed method to handle these cases gracefully. Finally, we validate against a range of state-of-the-art methods using synthetic and real data, demonstrating how the proposed method is among the fastest and most numerically stable.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"574-583"},"PeriodicalIF":4.6,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
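For context, the textbook linearization of trilateration in the squared distances reduces to an ordinary least-squares solve: subtracting the first equation $\|x-a_1\|^2 = d_1^2$ from the others cancels the quadratic term. This is a standard baseline with made-up anchors, not the paper's weighted eigenvalue method.

```python
import numpy as np

# From ||x - a_i||^2 = d_i^2, subtracting the i = 1 equation gives
#   2 (a_i - a_1)^T x = ||a_i||^2 - ||a_1||^2 - d_i^2 + d_1^2,  i = 2..m,
# which is linear in x and solvable by ordinary least squares.
anchors = np.array([[0.0, 0.0, 0.0],
                    [4.0, 0.0, 0.0],
                    [0.0, 5.0, 0.0],
                    [1.0, 2.0, 6.0]])
x_true = np.array([1.5, 2.5, 0.5])
d = np.linalg.norm(anchors - x_true, axis=1)   # noise-free ranges

A = 2.0 * (anchors[1:] - anchors[0])
b = (np.square(anchors[1:]).sum(1) - np.square(anchors[0]).sum()
     - np.square(d[1:]) + d[0] ** 2)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_hat)
assert np.allclose(x_hat, x_true, atol=1e-8)
```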
On the Convergence of Decentralized Stochastic Gradient Descent With Biased Gradients
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-20 DOI: 10.1109/TSP.2025.3531356
Yiming Jiang;Helei Kang;Jinlan Liu;Dongpo Xu
{"title":"On the Convergence of Decentralized Stochastic Gradient Descent With Biased Gradients","authors":"Yiming Jiang;Helei Kang;Jinlan Liu;Dongpo Xu","doi":"10.1109/TSP.2025.3531356","DOIUrl":"10.1109/TSP.2025.3531356","url":null,"abstract":"Stochastic optimization algorithms are widely used to solve large-scale machine learning problems. However, their theoretical analysis necessitates access to unbiased estimates of the true gradients. To address this issue, we perform a comprehensive convergence rate analysis of stochastic gradient descent (SGD) with biased gradients for decentralized optimization. In non-convex settings, we show that for decentralized SGD utilizing biased gradients, the gradient in expectation is bounded asymptotically at a rate of <inline-formula><tex-math>$mathcal{O}(1/sqrt{nT}+n/T)$</tex-math></inline-formula>, and the bound is linearly correlated to the biased gradient gap. In particular, we can recover the convergence results in the unbiased stochastic gradient setting when the biased gradient gap is zero. Lastly, we provide empirical support for our theoretical findings through extensive numerical experiments.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"549-558"},"PeriodicalIF":4.6,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142991538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
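The bias-proportional term in such bounds shows up already in one dimension: SGD on $f(x)=\tfrac{1}{2}x^2$ with a constant gradient bias $b$ settles at the biased stationary point $x=-b$, so the final optimality gap equals the bias magnitude. This is a single-node toy under assumed parameters, not the decentralized setting of the paper.

```python
import numpy as np

# SGD on f(x) = 0.5 x^2 whose gradient oracle returns x + bias + noise.
# The biased stationary point solves x + bias = 0, i.e. x = -bias.
rng = np.random.default_rng(0)
bias, noise_std = 0.3, 0.1
x = 5.0
for t in range(20000):
    g = x + bias + noise_std * rng.standard_normal()
    x -= 2.0 * g / (t + 50.0)        # diminishing stepsize 2/(t+50)

print("final iterate:", x, "biased stationary point:", -bias)
assert abs(x - (-bias)) < 0.02
```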
Distributed Center-Based Clustering: A Unified Framework
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-20 DOI: 10.1109/TSP.2025.3531292
Aleksandar Armacki;Dragana Bajović;Dušan Jakovetić;Soummya Kar
{"title":"Distributed Center-Based Clustering: A Unified Framework","authors":"Aleksandar Armacki;Dragana Bajović;Dušan Jakovetić;Soummya Kar","doi":"10.1109/TSP.2025.3531292","DOIUrl":"10.1109/TSP.2025.3531292","url":null,"abstract":"We develop a family of distributed center-based clustering algorithms that work over connected networks of users. In the proposed scenario, users contain a local dataset and communicate only with their immediate neighbours, with the aim of finding a clustering of the full, joint data. The proposed family, termed Distributed Gradient Clustering (DGC-<inline-formula><tex-math>$mathcal{F}_{rho}$</tex-math></inline-formula>), is parametrized by <inline-formula><tex-math>$rhogeq 1$</tex-math></inline-formula>, controlling the proximity of users’ center estimates, with <inline-formula><tex-math>$mathcal{F}$</tex-math></inline-formula> determining the clustering loss. Our framework allows for a broad class of smooth convex loss functions, including popular clustering losses like <inline-formula><tex-math>$K$</tex-math></inline-formula>-means and Huber loss. Specialized to <inline-formula><tex-math>$K$</tex-math></inline-formula>-means and Huber loss, DGC-<inline-formula><tex-math>$mathcal{F}_{rho}$</tex-math></inline-formula> gives rise to novel distributed clustering algorithms DGC-KM<inline-formula><tex-math>${}_{rho}$</tex-math></inline-formula> and DGC-HL<inline-formula><tex-math>${}_{rho}$</tex-math></inline-formula>, while novel clustering losses based on the logistic and fair loss lead to DGC-LL<inline-formula><tex-math>${}_{rho}$</tex-math></inline-formula> and DGC-FL<inline-formula><tex-math>${}_{rho}$</tex-math></inline-formula>. We provide a unified analysis and establish several strong results, under mild assumptions. First, the sequence of centers generated by the methods converges to a well-defined notion of fixed point, under any center initialization and value of <inline-formula><tex-math>$rho$</tex-math></inline-formula>. 
Second, as <inline-formula><tex-math>$rho$</tex-math></inline-formula> increases, the family of fixed points produced by DGC-<inline-formula><tex-math>$mathcal{F}_{rho}$</tex-math></inline-formula> converges to a notion of consensus fixed points. We show that consensus fixed points of DGC-<inline-formula><tex-math>$mathcal{F}_{rho}$</tex-math></inline-formula> are equivalent to fixed points of gradient clustering over the full data, guaranteeing a clustering of the full data is produced. For the special case of Bregman losses, we show that our fixed points converge to the set of Lloyd points. Numerical experiments on real data confirm our theoretical findings and demonstrate strong performance of the methods.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"903-918"},"PeriodicalIF":4.6,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142991539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
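The "gradient clustering" view the abstract builds on can be illustrated in the centralized case: with assignments held fixed, the $K$-means loss is differentiable in the centers, and a gradient step moves each center toward the mean of its assigned points (Lloyd's centroid update is the full step $1/|\text{cluster}|$). The data and initial centers below are toy assumptions.

```python
import numpy as np

# Gradient view of K-means: F(c) = 0.5 * sum_i ||x_i - c_{k(i)}||^2 has
# gradient dF/dc_k = sum_{i: k(i)=k} (c_k - x_i) for fixed assignments.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(4.0, 0.5, (50, 2))])  # two well-separated blobs
C = np.array([[1.0, 1.0], [3.0, 3.0]])          # initial centers

def kmeans_loss_and_grad(C):
    d2 = np.square(X[:, None, :] - C[None, :, :]).sum(-1)
    assign = d2.argmin(1)                       # nearest-center assignment
    loss = 0.5 * d2[np.arange(len(X)), assign].sum()
    grad = np.zeros_like(C)
    for k in range(len(C)):
        grad[k] = (C[k] - X[assign == k]).sum(0)
    return loss, grad

loss0, g = kmeans_loss_and_grad(C)
C = C - 0.01 * g                                # small gradient step
loss1, _ = kmeans_loss_and_grad(C)
print(loss0, "->", loss1)
assert loss1 < loss0
```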
Robust In-Memory Computation With Bayesian Analog Error Mitigating Codes
IF 4.6 · Zone 2 · Engineering & Technology
IEEE Transactions on Signal Processing Pub Date : 2025-01-16 DOI: 10.1109/TSP.2025.3530149
Nilesh Kumar Jha;Huayan Guo;Vincent K. N. Lau
{"title":"Robust In-Memory Computation With Bayesian Analog Error Mitigating Codes","authors":"Nilesh Kumar Jha;Huayan Guo;Vincent K. N. Lau","doi":"10.1109/TSP.2025.3530149","DOIUrl":"10.1109/TSP.2025.3530149","url":null,"abstract":"In-memory computation (IMC) is a promising technology for enabling low-latency and energy-efficient deep learning and artificial intelligence (AI) applications at edge devices. However, the IMC crossbar array, typically implemented using resistive random access memory (RRAM), faces hardware defects that pose a significant challenge to reliable computation. This paper presents a robust IMC scheme utilizing Bayesian neural network-accelerated analog codes. Our approach includes a new datapath design comprising a parity matrix generator and a low-complexity decoder module to facilitate analog codes for IMC. Moreover, we introduce a Gaussian mixture model-based error prior to capture impulsive error statistics and leverage variational Bayesian inference (VBI) techniques for training neural network weights. Extensive simulations confirm the effectiveness of our proposed solution compared to various state-of-the-art baseline schemes.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"534-548"},"PeriodicalIF":4.6,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142987669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
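The flavor of protecting an analog matrix-vector product with a code can be seen in the classic algorithm-based fault tolerance (ABFT) checksum, a much simpler relative of the Bayesian analog codes in the paper: augment the weight matrix with a parity row of column sums, and any single corrupted output entry makes the parity residual nonzero.

```python
import numpy as np

# ABFT-style checksum for a crossbar computing y = W x: append the parity
# row 1^T W, so the last output should equal the sum of the others.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
x = rng.standard_normal(16)

W_coded = np.vstack([W, W.sum(0)])    # weight matrix with parity row
y = W_coded @ x                        # ideal (fault-free) computation

def parity_residual(y):
    return abs(y[-1] - y[:-1].sum())

assert parity_residual(y) < 1e-9       # clean output passes the check

y_faulty = y.copy()
y_faulty[3] += 0.5                     # inject a stuck-cell style error
assert parity_residual(y_faulty) > 0.4 # fault is flagged by the residual

print("clean residual:", parity_residual(y),
      "faulty residual:", parity_residual(y_faulty))
```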