{"title":"Low-Rank Tensor Decompositions for the Theory of Neural Networks: Understanding deep learning through the lens of tensor formats [Special Issue on the Mathematics of Deep Learning]","authors":"Ricardo Borsoi;Konstantin Usevich;Marianne Clausel","doi":"10.1109/MSP.2025.3646570","DOIUrl":"https://doi.org/10.1109/MSP.2025.3646570","url":null,"abstract":"The groundbreaking performance of deep neural networks (NNs) promoted a surge of interest in providing a mathematical basis to deep learning theory. Low-rank tensor decompositions are especially fit for this task due to their close connection to NNs and their rich theoretical results. Different tensor decompositions have strong uniqueness guarantees, which allow for a direct interpretation of their factors, and polynomial time algorithms have been proposed to compute them. Through the connections between tensors and NNs, such results supported many important advances in the theory of NNs. In this review article, we show how low-rank tensor methods—which have been a core tool in the signal processing and machine learning communities—play a fundamental role in theoretically explaining different aspects of the performance of deep NNs, including their expressivity, algorithmic learnability and computational hardness, generalization, and identifiability. Our goal is to give an accessible overview of existing approaches (developed by different communities, ranging from computer science to mathematics) in a coherent and unified way and to open a broader perspective on the use of low-rank tensor decompositions for the theory of deep NNs.","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"107-121"},"PeriodicalIF":9.6,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147665361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mathematical Foundations of Spiking Neural Networks: Strengths, challenges, and computational paradigm potential [Special Issue on the Mathematics of Deep Learning]","authors":"Adalbert Fono;Manjot Singh;Ernesto Araya;Philipp C. Petersen;Holger Boche;Gitta Kutyniok","doi":"10.1109/MSP.2025.3597033","DOIUrl":"https://doi.org/10.1109/MSP.2025.3597033","url":null,"abstract":"Deep learning’s success comes with growing energy demands, raising concerns about the long-term sustainability of the field. Spiking neural networks (SNNs), inspired by biological neurons, offer a promising alternative with potential computational and energy efficiency gains. This article examines the computational properties of spiking networks through the lens of learning theory, focusing on expressivity, training, and generalization, as well as energy-efficient implementations, while comparing them with artificial neural networks (ANNs). By categorizing spiking models based on time representation and information encoding, we highlight their strengths, challenges, and potential as an alternative computational paradigm.","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"64-76"},"PeriodicalIF":9.6,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147665352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guest Editorial for Part 1 of the Special Issue on the Mathematics of Deep Learning [From the Guest Editors]","authors":"Laura Balzano;Joan Bruna Estrach;Gitta Kutyniok;Robert Nowak;Jong Chul Ye","doi":"10.1109/MSP.2026.3668468","DOIUrl":"https://doi.org/10.1109/MSP.2026.3668468","url":null,"abstract":"","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"6-8"},"PeriodicalIF":9.6,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11480045","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147665552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SPS Social Media","authors":"","doi":"10.1109/MSP.2026.3674140","DOIUrl":"https://doi.org/10.1109/MSP.2026.3674140","url":null,"abstract":"","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"50-50"},"PeriodicalIF":9.6,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11480038","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147665214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Untrained Deep Models for Inverse Problems [From the Guest Editors]: Algorithms and theory","authors":"Ismail Alkhouri;Evan Bell;Avrajit Ghosh;Shijun Liang;Rongrong Wang;Saiprasad Ravishankar","doi":"10.1109/MSP.2025.3632786","DOIUrl":"https://doi.org/10.1109/MSP.2025.3632786","url":null,"abstract":"In recent years, deep learning (DL) methods have been extensively developed for inverse imaging problems (IIPs), encompassing supervised, self-supervised, and generative approaches. Most of these methods require large numbers of labeled or unlabeled training data to learn effective models. However, in many practical applications, such as medical image reconstruction, extensive training datasets are often unavailable or limited. A significant milestone in addressing this challenge came in 2018 with the work of Ulyanov et al. [1], which introduced the deep image prior (DIP)—the first training-data-free convolutional neural network (CNN) method for IIPs. Unlike conventional DL approaches, DIP requires only a CNN, the noisy measurements, and a forward operator. By leveraging the implicit regularization of deep networks initialized with random noise, DIP can learn and restore image structures without relying on external datasets. However, a well-known limitation of DIP is its susceptibility to overfitting, primarily due to overparameterization of the network. In this tutorial article, we provide a comprehensive review of DIP, including a theoretical analysis of its training dynamics. We also categorize and discuss recent advancements in DIP-based methods that are aimed at mitigating overfitting, including techniques such as regularization, network reparameterization, and early stopping. Furthermore, we discuss approaches that combine DIP with pretrained neural networks (NNs), present empirical comparison results against data-centric methods, and highlight open research questions and future directions.","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"77-92"},"PeriodicalIF":9.6,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147665500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advance Your Signal Processing Education","authors":"","doi":"10.1109/MSP.2026.3673053","DOIUrl":"https://doi.org/10.1109/MSP.2026.3673053","url":null,"abstract":"","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"5-5"},"PeriodicalIF":9.6,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11480055","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147665542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In Memoriam: Edward R. Dougherty [In Memoriam]","authors":"Ulisses Braga-Neto;Yidong Chen","doi":"10.1109/MSP.2026.3663797","DOIUrl":"https://doi.org/10.1109/MSP.2026.3663797","url":null,"abstract":"Recounts the career and contributions of Edward R. Dougherty.","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"122-122"},"PeriodicalIF":9.6,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11480032","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147665473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Intelligence Foundations: Building essential machine learning skills and understanding [Special Issue on Artificial Intelligence for Education: A Signal Processing Perspective]","authors":"Kurt Butler;Mónica F. Bugallo;Petar M. Djurić","doi":"10.1109/MSP.2025.3604572","DOIUrl":"https://doi.org/10.1109/MSP.2025.3604572","url":null,"abstract":"This article describes a pilot undergraduate course designed to introduce the fundamentals of machine learning (ML) and generative artificial intelligence (AI) in an accessible and practical way to students from diverse disciplines. Emerging from cross-departmental collaboration, the course aims to demystify AI and equip students, particularly those from nontechnical fields, with essential ML and signal processing (SP) concepts, while helping them understand their applications and practice as end users. The article outlines the course content, student outcomes, and considerations for evaluating its effectiveness and potential areas for improvement.","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 1","pages":"47-55"},"PeriodicalIF":9.6,"publicationDate":"2026-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146211279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Future-Proofing Programmers: Optimal knowledge tracing for artificial intelligence-assisted personalized education [Special Issue on Artificial Intelligence for Education: A Signal Processing Perspective]","authors":"Yuchen Wang;Pei-Duo Yu;Chee Wei Tan","doi":"10.1109/MSP.2025.3609896","DOIUrl":"https://doi.org/10.1109/MSP.2025.3609896","url":null,"abstract":"Learning to learn is becoming a science, driven by the convergence of knowledge tracing, signal processing, and generative artificial intelligence (GenAI) to model student learning states and optimize education. We propose CoTutor, an AI-driven model that enhances Bayesian knowledge tracing (BKT) with signal processing techniques to improve student progress modeling and deliver adaptive feedback and strategies. Deployed as an AI copilot, CoTutor combines GenAI with adaptive learning technology. In university trials, it has demonstrated measurable improvements in learning outcomes while outperforming conventional educational tools. Our results highlight its potential for AI-driven personalization, scalability, and future opportunities for advancing privacy and ethical considerations in educational technology. Inspired by Richard Hamming’s vision of computer-aided “learning to learn,” CoTutor applies convex optimization and signal processing to automate and scale up learning analytics, while reserving pedagogical judgment for humans, ensuring that AI facilitates the process of knowledge tracing while enabling learners to uncover new insights.","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 1","pages":"69-82"},"PeriodicalIF":9.6,"publicationDate":"2026-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146211293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}