{"title":"Radar Code Design for the Joint Optimization of Detection Performance and Measurement Accuracy in Track Maintenance","authors":"Tao Fan;Augusto Aubry;Vincenzo Carotenuto;Antonio De Maio;Xianxiang Yu;Guolong Cui","doi":"10.1109/TSP.2025.3587522","DOIUrl":"https://doi.org/10.1109/TSP.2025.3587522","url":null,"abstract":"This paper deals with the design of slow-time coded waveforms that jointly optimize the detection probability and the measurement accuracy for track maintenance in the presence of colored Gaussian interference. The output signal-to-interference-plus-noise ratio (SINR) and Cramér-Rao bounds (CRBs) on time delay and Doppler shift are used as figures of merit to accomplish reliable detection as well as accurate measurements. The transmitted code is subject to radar power budget requirements and a similarity constraint. To tackle the resulting non-convex multi-objective optimization problem, a polynomial-time algorithm that integrates scalarization and tensor-based relaxation methods is developed. The corresponding relaxed multi-linear problems are solved by means of the maximum block improvement (MBI) framework, where the optimal solution at each iteration is obtained in closed form. 
Numerical results demonstrate the trade-off between the detection and the estimation performance, along with the acceptable Doppler robustness achieved by the proposed algorithm.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"3173-3186"},"PeriodicalIF":5.8,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144880485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
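The scalarization step mentioned in the abstract can be illustrated on a toy bi-objective design problem. A minimal sketch, assuming simple quadratic stand-ins for the SINR and CRB figures of merit (not the paper's actual radar model): sweeping the scalarization weight traces out the detection/accuracy trade-off, and each scalarized subproblem reduces to a principal-eigenvector computation.

```python
import numpy as np

# Toy bi-objective code design: maximize f1(c) = c^T R1 c (detection proxy)
# while minimizing f2(c) = c^T R2 c (accuracy proxy) under ||c|| = 1.
# Weighted-sum scalarization turns this into one eigenproblem per weight w.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); R1 = A @ A.T   # SINR-like quadratic form
B = rng.standard_normal((8, 8)); R2 = B @ B.T   # CRB-like quadratic form

pareto = []
for w in np.linspace(0.0, 1.0, 11):
    M = w * R1 - (1 - w) * R2                   # scalarized objective matrix
    _, vecs = np.linalg.eigh(M)
    c = vecs[:, -1]                             # principal eigenvector maximizes
    pareto.append((c @ R1 @ c, c @ R2 @ c))     # achieved (f1, f2) pair
```

At `w = 1` the design attains the best detection proxy (the top eigenvalue of `R1`); at `w = 0` it attains the best accuracy proxy (the bottom eigenvalue of `R2`); intermediate weights trade one against the other.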
{"title":"DeepFRI: A Deep Plug-and-Play Technique for Finite-Rate-of-Innovation Signal Reconstruction","authors":"Abijith Jagannath Kamath;Sharan Basav Patil;Chandra Sekhar Seelamantula","doi":"10.1109/TSP.2025.3589394","DOIUrl":"10.1109/TSP.2025.3589394","url":null,"abstract":"The finite-rate-of-innovation (FRI) sampling framework is a sample-efficient and power-efficient model for analog-to-digital conversion. It can be interpreted as a framework for performing continuous-domain sparse deconvolution starting from discrete measurements. The promise of the FRI framework is its ability to resolve time delays beyond conventional theoretical limits, while acquiring measurements at the rate of innovation. In the current state-of-the-art, application of the FRI framework to real-world problems is challenging due to its limited performance in the presence of noise. In this paper, we consider signal reconstruction in the Fourier domain and propose a new optimization formulation that solves for the Fourier coefficients. We employ the proximal gradient method, and analyze the role of the denoiser in a plug-and-play (PnP) setting. Within the proposed framework, it is sufficient for the denoiser to be Lipschitz continuous, thus motivating the application of a deep PnP denoising neural network with a continuous piecewise-linear architecture. Such a neural network is interpretable and possesses similar theoretical guarantees as model-based techniques, while obtaining superior performance in the estimation of signal parameters when the signal-to-noise ratio (SNR) is low. Since the technique is derived from an optimization algorithm, we use the ensemble strategy to combine the Cadzow denoiser, which is widely used in FRI problems, and the deep PnP denoiser in order to achieve perfect reconstruction in the high SNR regime. The resulting method is called <italic>DeepFRI</i>. 
On synthetically generated signals, the proposed technique offers up to an order of magnitude improvement in estimating the signal parameters in the low SNR regime compared with the benchmark techniques, while performing on par with them in the high SNR regime. We demonstrate an application to real-world ultrasound signals and show that the proposed technique offers superior reconstruction performance with respect to the benchmarks.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"2998-3013"},"PeriodicalIF":5.8,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144639855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
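The plug-and-play proximal-gradient scheme described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' DeepFRI implementation: a 1-Lipschitz soft-threshold denoiser stands in for the deep PnP network, and the toy sparse-deconvolution problem, sizes, and constants are all illustrative assumptions.

```python
import numpy as np

def pnp_proximal_gradient(y, A, denoise, step, n_iter=200):
    """Plug-and-play proximal gradient: a gradient step on the data-fidelity
    term 0.5*||A x - y||^2, followed by a denoiser in place of the prox."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic fidelity
        x = denoise(x - step * grad)       # denoiser replaces the proximal map
    return x

def soft_threshold(x, lam=0.02):
    """1-Lipschitz shrinkage denoiser (stand-in for the deep PnP denoiser)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)   # measurement operator
x_true = np.zeros(100); x_true[[10, 40, 70]] = [1.0, -0.8, 0.6]  # sparse signal
y = A @ x_true + 0.01 * rng.standard_normal(60)
step = 1.0 / np.linalg.norm(A, 2) ** 2             # step <= 1/L for stability
x_hat = pnp_proximal_gradient(y, A, soft_threshold, step)
```

Swapping `soft_threshold` for a learned Lipschitz-continuous network is exactly the substitution the PnP framework permits.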
{"title":"Low Tensor-Rank Adaptation of Kolmogorov–Arnold Networks","authors":"Yihang Gao;Michael K. Ng;Vincent Y. F. Tan","doi":"10.1109/TSP.2025.3588910","DOIUrl":"10.1109/TSP.2025.3588910","url":null,"abstract":"Kolmogorov–Arnold networks (KANs) have demonstrated their potential as an alternative to multi-layer perceptrons (MLPs) in various domains, especially for science-related tasks. However, transfer learning of KANs remains a relatively unexplored area. In this paper, inspired by Tucker decomposition of tensors and evidence on the low tensor-rank structure in KAN parameter updates, we develop low tensor-rank adaptation (LoTRA) for fine-tuning KANs. We study the expressiveness of LoTRA based on Tucker decomposition approximations. Furthermore, we provide a theoretical analysis to select the learning rates for each LoTRA component to enable efficient training. Our analysis also shows that using identical learning rates across all components leads to inefficient training, highlighting the need for an adaptive learning rate strategy. Beyond theoretical insights, we explore the application of LoTRA for efficiently solving various partial differential equations (PDEs) by fine-tuning KANs. Additionally, we propose Slim KANs that incorporate the inherent low-tensor-rank properties of KAN parameter tensors to reduce model size while maintaining superior performance. Experimental results validate the efficacy of the proposed learning rate selection strategy and demonstrate the effectiveness of LoTRA for transfer learning of KANs in solving PDEs. 
Further evaluations on Slim KANs for function representation and image classification tasks highlight the expressiveness of LoTRA and the potential for parameter reduction through low tensor-rank decomposition.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"3107-3123"},"PeriodicalIF":5.8,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144629702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
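The core idea of a Tucker-structured low tensor-rank update can be sketched as follows. This is a simplified illustration of the parametrization only (shapes, ranks, and names are assumptions, and the per-component learning-rate strategy the paper analyzes is not reproduced): the frozen parameter tensor receives an additive update factored as a small core multiplied along each mode by a thin factor matrix, in the spirit of LoRA but tensor-valued.

```python
import numpy as np

def tucker_update(core, factors):
    """Reconstruct a parameter update from its Tucker factorization:
    delta = G x_1 U1 x_2 U2 x_3 U3, so only the small core G and the
    thin factor matrices U_k would be trained."""
    delta = core
    for mode, U in enumerate(factors):
        # mode-k product: contract U's columns with the k-th axis of delta
        delta = np.moveaxis(
            np.tensordot(U, np.moveaxis(delta, mode, 0), axes=1), 0, mode)
    return delta

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32, 8))   # frozen pretrained parameter tensor
ranks = (4, 4, 2)                      # low Tucker ranks (illustrative)
G = rng.standard_normal(ranks)
Us = [rng.standard_normal((W.shape[k], ranks[k])) for k in range(3)]

W_adapted = W + tucker_update(G, Us)   # fine-tuned parameters
n_full = W.size                        # 8192 trainable values if done naively
n_lotra = G.size + sum(U.size for U in Us)  # 304 values in the factorization
```

The parameter count of the factorization grows with the ranks, not with the full tensor size, which is the source of the adaptation's efficiency.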
{"title":"Block Tensor Ring Decomposition: Theory and Application","authors":"Sheng Liu;Xi-Le Zhao;Hao Zhang","doi":"10.1109/TSP.2025.3589059","DOIUrl":"10.1109/TSP.2025.3589059","url":null,"abstract":"Recently, tensor decompositions have received significant attention for processing multi-dimensional signals, most notably the block-term decomposition (BTD) family and the tensor network decomposition (TND) family. However, these two families have long been isolated from each other, with their respective wisdom neither inspiring nor benefiting each other. To address this dilemma, we propose a block tensor ring decomposition (BTRD), which decomposes an <inline-formula><tex-math>$N$</tex-math></inline-formula>th-order tensor into a sum of outer products between basic vector factors and the <inline-formula><tex-math>$(N-1)$</tex-math></inline-formula>th-order coefficient tensors, which are further represented using a tensor ring. The benefit of the BTRD is that it can better exploit the outer multi-component structure of the tensor and the inner tensor topology of each component. To examine the potential of the proposed BTRD, we apply it to a low-rank tensor completion model as a representative task and prove a generalization error bound that provides a theoretical perspective to support the advantages of the proposed model for higher-order tensors. To address the resulting optimization problem, we apply an efficient proximal alternating minimization (PAM)-based algorithm with a theoretical convergence guarantee. 
Extensive experimental results on real-world signal data (color videos and light field images) demonstrate the superiority of the proposed model against the state-of-the-art baseline models.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"3029-3043"},"PeriodicalIF":5.8,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144629704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
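The tensor-ring representation that BTRD uses for its coefficient tensors can be sketched directly. A minimal reconstruction routine, with arbitrary illustrative dimensions and ranks (this shows the plain tensor-ring format only, not the full BTRD with its block sum of outer products): each entry is the trace of a product of core slices, and the ring closes because the last bond dimension equals the first.

```python
import numpy as np

def tensor_ring_reconstruct(cores):
    """Reconstruct a tensor from tensor-ring cores G_k of shape
    (r_k, n_k, r_{k+1}): T[i1,...,iN] = Tr(G1[:,i1,:] @ ... @ GN[:,iN,:])."""
    T = cores[0]                                  # shape (r0, n1, r1)
    for G in cores[1:]:
        # contract the trailing bond index with the next core's leading bond
        T = np.tensordot(T, G, axes=([-1], [0]))
    return np.trace(T, axis1=0, axis2=-1)         # close the ring

rng = np.random.default_rng(0)
ranks, dims = (2, 3, 2, 2), (4, 5, 6)             # ring closure: r3 == r0
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1]))
         for k in range(3)]
T = tensor_ring_reconstruct(cores)                # full (4, 5, 6) tensor
```

Storage scales with the core sizes rather than with the product of the mode dimensions, which is what makes the ring format attractive for higher-order tensors.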
{"title":"Time-Varying Graph Learning for Data With Heavy-Tailed Distribution","authors":"Amirhossein Javaheri;Jiaxi Ying;Daniel P. Palomar;Farokh Marvasti","doi":"10.1109/TSP.2025.3588173","DOIUrl":"10.1109/TSP.2025.3588173","url":null,"abstract":"Graph models provide efficient tools to capture the underlying structure of data defined over networks. Many real-world network topologies are subject to change over time. Learning to model the dynamic interactions between entities in such networks is known as time-varying graph learning. Current methodology for learning such models often lacks robustness to outliers in the data and fails to handle heavy-tailed distributions, a common feature in many real-world datasets (e.g., financial data). This paper addresses the problem of learning time-varying graph models capable of efficiently representing heavy-tailed data. Unlike traditional approaches, we incorporate graph structures with specific spectral properties to enhance data clustering in our model. Our proposed method, which can also deal with noise and missing values in the data, is based on a stochastic approach, where a non-negative vector auto-regressive (VAR) model captures the variations in the graph and a Student-t distribution models the signal originating from this underlying time-varying graph. We propose an iterative method to learn time-varying graph topologies within a semi-online framework where only a mini-batch of data is used to update the graph. 
Simulations with both synthetic and real datasets demonstrate the efficacy of our model in analyzing heavy-tailed data, particularly those found in financial markets.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"3044-3060"},"PeriodicalIF":5.8,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144629746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
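The robustness conferred by the Student-t model can be illustrated in isolation. This sketch shows only the heavy-tail mechanism, not the paper's time-varying graph-learning algorithm: the standard fixed-point iteration for a Student-t scatter matrix (assuming zero mean and a fixed degrees-of-freedom parameter) down-weights samples with large Mahalanobis distance, so a handful of outliers barely perturbs the estimate, while the sample covariance is badly inflated.

```python
import numpy as np

def student_t_scatter(X, dof=3.0, n_iter=50):
    """Fixed-point iteration for the scatter matrix under a zero-mean
    Student-t model: each sample is weighted by (dof + p)/(dof + d_i),
    where d_i is its squared Mahalanobis distance, so heavy-tailed
    samples receive small weights."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    for _ in range(n_iter):
        Si = np.linalg.inv(S)
        d = np.einsum('ij,jk,ik->i', X, Si, X)   # squared Mahalanobis distances
        w = (dof + p) / (dof + d)                # heavy-tail down-weighting
        S = (w[:, None] * X).T @ X / n
    return S

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))                # nominal data, covariance ~ I
X[:5] *= 50.0                                    # a few heavy-tailed outliers
S_robust = student_t_scatter(X)
S_sample = np.cov(X, rowvar=False)               # inflated by the outliers
```

The same weighting principle, embedded in the graph-Laplacian likelihood, is what lets a Student-t signal model cope with financial-style heavy-tailed data.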
{"title":"Adaptive Federated Learning Over the Air","authors":"Chenhao Wang;Zihan Chen;Nikolaos Pappas;Howard H. Yang;Tony Q. S. Quek;H. Vincent Poor","doi":"10.1109/TSP.2025.3585002","DOIUrl":"10.1109/TSP.2025.3585002","url":null,"abstract":"We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training. This approach capitalizes on the inherent superposition property of wireless channels, facilitating fast and scalable parameter aggregation. Meanwhile, it enhances the robustness of the model training process by dynamically adjusting the stepsize in accordance with the global gradient update. We derive the convergence rate of the training algorithms for a broad spectrum of nonconvex loss functions, encompassing the effects of channel fading and interference that follows a heavy-tailed distribution. Our analysis shows that the AdaGrad-based algorithm converges to a stationary point at the rate of <inline-formula><tex-math>$\\mathcal{O}(\\ln(T)/T^{1-\\frac{1}{\\alpha}})$</tex-math></inline-formula>, where <inline-formula><tex-math>$\\alpha$</tex-math></inline-formula> represents the tail index of the electromagnetic interference. This result indicates that the level of heavy-tailedness in the interference distribution plays a crucial role in the training efficiency: the heavier the tail, the slower the algorithm converges. In contrast, an Adam-like algorithm converges at the <inline-formula><tex-math>$\\mathcal{O}(1/T)$</tex-math></inline-formula> rate, demonstrating its advantage in expediting the model training process. 
We conduct extensive experiments that corroborate our theoretical findings and affirm the practical efficacy of our proposed federated adaptive gradient methods.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"3187-3202"},"PeriodicalIF":5.8,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144629703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
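The over-the-air AdaGrad loop described above can be simulated on a toy problem. This is an illustrative sketch under stated assumptions, not the paper's algorithm or analysis: clients hold simple quadratic losses, the channel superposition is modeled as a fading-weighted sum, and Student-t noise stands in for the heavy-tailed (alpha-stable-like) electromagnetic interference; all constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, T = 5, 10, 300
targets = rng.standard_normal((n_clients, d))  # each client's quadratic minimum
w_star = targets.mean(axis=0)                  # global optimum of the avg loss

w = np.zeros(d)
v = np.zeros(d)                                # AdaGrad accumulator
eta, eps = 0.5, 1e-8

for t in range(T):
    grads = w[None, :] - targets               # local gradients of 0.5*||w - t_k||^2
    # over-the-air aggregation: the channel sums the analog transmissions,
    # with per-client fading and additive heavy-tailed interference
    fading = rng.uniform(0.8, 1.2, size=(n_clients, 1))
    interference = 0.05 * rng.standard_t(df=2.0, size=d)  # infinite-variance tail
    g = (fading * grads).sum(axis=0) / n_clients + interference
    # AdaGrad server step: per-coordinate stepsize shrinks with accumulated
    # gradient energy, which also caps the damage of an interference spike
    v += g * g
    w -= eta * g / (np.sqrt(v) + eps)
```

Because `|g| / sqrt(v) <= 1` coordinate-wise, a single interference spike can move the iterate by at most `eta`, which is the intuition behind the method's robustness to heavy tails.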
{"title":"Finite Sample Analysis of Distribution-Free Confidence Ellipsoids for Linear Regression","authors":"Szabolcs Szentpéteri, Balázs Csanád Csáji","doi":"10.1109/tsp.2025.3588333","DOIUrl":"https://doi.org/10.1109/tsp.2025.3588333","url":null,"abstract":"","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"1 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144629701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compressed Sensor Caching and Collaborative Sparse Data Recovery with Anchor Alignment","authors":"Yi-Jen Yang, Ming-Hsun Yang, Jwo-Yuh Wu, Y.-W. Peter Hong","doi":"10.1109/tsp.2025.3588354","DOIUrl":"https://doi.org/10.1109/tsp.2025.3588354","url":null,"abstract":"","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"34 1","pages":""},"PeriodicalIF":5.4,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144610978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exponentially Consistent Nonparametric Linkage-Based Clustering of Data Sequences","authors":"Bhupender Singh;Ananth Ram Rajagopalan;Srikrishna Bhashyam","doi":"10.1109/TSP.2025.3588351","DOIUrl":"10.1109/TSP.2025.3588351","url":null,"abstract":"In this paper, we consider nonparametric clustering of <inline-formula><tex-math>$M$</tex-math></inline-formula> independent and identically distributed (i.i.d.) data sequences generated from <italic>unknown</i> distributions. The distributions of the <inline-formula><tex-math>$M$</tex-math></inline-formula> data sequences belong to <inline-formula><tex-math>$K$</tex-math></inline-formula> underlying distribution clusters. Existing results on exponentially consistent nonparametric clustering algorithms, like single linkage-based (SLINK) clustering and <inline-formula><tex-math>$k$</tex-math></inline-formula>-medoids distribution clustering, assume that the maximum intra-cluster distance (<inline-formula><tex-math>$d_{L}$</tex-math></inline-formula>) is smaller than the minimum inter-cluster distance (<inline-formula><tex-math>$d_{H}$</tex-math></inline-formula>). First, in the fixed sample size (FSS) setting, we show that exponential consistency can be achieved for SLINK clustering under a less strict assumption, <inline-formula><tex-math>$d_{I} < d_{H}$</tex-math></inline-formula>, where <inline-formula><tex-math>$d_{I}$</tex-math></inline-formula> is the maximum distance between any two sub-clusters of a cluster that partition the cluster. Note that <inline-formula><tex-math>$d_{I} < d_{L}$</tex-math></inline-formula> in general. Thus, our results show that SLINK is exponentially consistent for a larger class of problems than previously known. In our simulations, we also identify examples where <inline-formula><tex-math>$k$</tex-math></inline-formula>-medoids clustering is unable to find the true clusters, but SLINK is exponentially consistent. 
Then, we propose a sequential clustering algorithm, named SLINK-SEQ, based on SLINK and prove that it is also exponentially consistent. Simulation results show that the SLINK-SEQ algorithm requires fewer expected number of samples than the FSS SLINK algorithm for the same probability of error.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"2819-2832"},"PeriodicalIF":5.8,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144610977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
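The fixed-sample-size linkage clustering of data sequences can be sketched end to end. A minimal numpy-only illustration (the Kolmogorov-Smirnov distance is used here as one convenient nonparametric distance between empirical distributions; the paper's distance measure and the sequential SLINK-SEQ variant are not reproduced): sequences from the same unknown distribution are close in KS distance, and single linkage merges them first.

```python
import numpy as np

def ks_distance(x, y):
    """Kolmogorov-Smirnov distance between the empirical CDFs of x and y."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side='right') / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side='right') / len(y)
    return np.max(np.abs(Fx - Fy))

def single_linkage(D, k):
    """Agglomerative single-linkage (SLINK-style) clustering down to k
    clusters: repeatedly merge the pair of clusters whose closest members
    are nearest."""
    clusters = [{i} for i in range(len(D))]
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(D[i][j] for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

rng = np.random.default_rng(0)
# M = 9 i.i.d. sequences drawn from K = 3 unknown distributions
seqs = [rng.normal(mu, 1.0, 500) for mu in (0, 0, 0, 3, 3, 3, 6, 6, 6)]
D = [[ks_distance(a, b) for b in seqs] for a in seqs]
clusters = single_linkage(D, k=3)
```

With well-separated distributions the intra-cluster KS distances shrink as the sequence length grows, which is the regime in which the exponential-consistency guarantees apply.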
{"title":"Kalman Filter Aided Federated Koopman Learning","authors":"Yutao Chen;Wei Chen","doi":"10.1109/TSP.2025.3587329","DOIUrl":"10.1109/TSP.2025.3587329","url":null,"abstract":"Real-time control and estimation are pivotal for applications such as industrial automation and future healthcare. The realization of this vision relies heavily on efficient interactions with nonlinear systems. Therefore, Koopman learning, which leverages the power of deep learning to linearize nonlinear systems, has been one of the most successful examples of mitigating the complexity inherent in nonlinearity. However, the existing literature assumes access to accurate system states and abundant high-quality data for Koopman analysis, which is usually impractical in real-world scenarios. To fill this void, this paper considers the case where only observations of the system are available and where the observation data is insufficient to accomplish an independent Koopman analysis. To this end, we propose Kalman Filter aided Federated Koopman Learning (KF-FedKL), which pioneers the combination of Kalman filtering and federated learning with Koopman analysis. By doing so, we can achieve collaborative linearization with privacy guarantees. Specifically, we employ a straightforward yet efficient loss function to drive the training of a deep Koopman network for linearization. To obtain system information devoid of individual information from observation data, we leverage the unscented Kalman filter and the unscented Rauch-Tung-Striebel smoother. To achieve collaboration between clients, we adopt the federated learning framework and develop a modified FedAvg algorithm to orchestrate the collaboration. A convergence analysis of the proposed framework is also presented. 
Finally, through extensive numerical simulations, we showcase the performance of KF-FedKL under various situations.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"2879-2895"},"PeriodicalIF":5.8,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144603498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
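For reference, the aggregation step that KF-FedKL builds on can be shown in its vanilla form. This is standard FedAvg only, a sketch for orientation; the paper's modified FedAvg variant and the Kalman-filter/smoother machinery are not reproduced here.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Vanilla FedAvg aggregation: average client parameter vectors
    weighted by the amount of local data each client holds."""
    sizes = np.asarray(client_sizes, dtype=float)
    coef = sizes / sizes.sum()
    return sum(c * p for c, p in zip(coef, client_params))

# three clients with unequal data; the larger client dominates the average
params = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([4.0, 4.0])]
global_params = fedavg(params, client_sizes=[10, 10, 20])
# global_params == [2.25, 2.25]: 0.25*0 + 0.25*1 + 0.5*4 per coordinate
```

In the federated Koopman setting, the averaged parameters are those of the deep Koopman network, so clients share a linearization without sharing raw observations.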