{"title":"Rapture of the Deep: Highs and lows of sparsity in a world of depths","authors":"Rémi Gribonval;Elisa Riccietti;Quoc-Tung Le;Léon Zheng","doi":"10.1109/MSP.2025.3611564","DOIUrl":null,"url":null,"abstract":"Promoting sparsity in deep networks is a natural way to control their complexity, and it is a timely endeavor since practical neural model sizes have grown to unprecedented levels. The lessons from sparsity in linear inverse problems also bear the promise of many other benefits beyond such computational aspects, from statistical significance to explainability. Can these promises be fulfilled? Can we safely leverage the know-how of sparsity-promoting regularizers for inverse problems to harness sparsity in deeper contexts, linear or not? This article surveys the curses and blessings of deep sparsity. After a reminder on the main lessons from inverse problems, we tour a number of results that challenge their immediate deep extensions, from both a mathematical and a computational perspective. In particular, we highlight that <inline-formula><tex-math>${\\mathit{\\ell}}^{1}$</tex-math></inline-formula> regularization does not always lead to sparsity, and that optimization with a prescribed set of allowed nonzero coefficients can be NP-hard. We emphasize the role of rescaling invariances in these phenomena and the need to favor structured sparsity to keep sparse network training problems under control, ensure their stability, and actually enable efficient network implementations on GPUs. We finally outline the promises and challenges of a flexible family of <italic>Kronecker sparsity structures</i>, which extend the classical butterfly structure and appear in many classical scientific computing applications and that have also recently emerged in deep learning.","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"10-23"},"PeriodicalIF":9.6000,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Magazine","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11480036/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/4/13 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Promoting sparsity in deep networks is a natural way to control their complexity, and it is a timely endeavor since practical neural model sizes have grown to unprecedented levels. The lessons from sparsity in linear inverse problems also bear the promise of many other benefits beyond such computational aspects, from statistical significance to explainability. Can these promises be fulfilled? Can we safely leverage the know-how of sparsity-promoting regularizers for inverse problems to harness sparsity in deeper contexts, linear or not? This article surveys the curses and blessings of deep sparsity. After a reminder of the main lessons from inverse problems, we tour a number of results that challenge their immediate deep extensions, from both a mathematical and a computational perspective. In particular, we highlight that $\ell^1$ regularization does not always lead to sparsity, and that optimization with a prescribed set of allowed nonzero coefficients can be NP-hard. We emphasize the role of rescaling invariances in these phenomena and the need to favor structured sparsity to keep sparse network training problems under control, ensure their stability, and actually enable efficient network implementations on GPUs. We finally outline the promises and challenges of a flexible family of Kronecker sparsity structures, which extend the classical butterfly structure, appear in many classical scientific computing applications, and have also recently emerged in deep learning.
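The rescaling point can be made concrete with a standard toy computation (ours, for illustration; the article's precise statements are more general). Consider a two-layer "network" with scalar weights $v, w$ computing $f_{v,w}(x) = v\,w\,x$. The rescaling $(v,w)\mapsto(\lambda^{-1}v,\ \lambda w)$, $\lambda>0$, leaves $f_{v,w}$ unchanged, so the parameter-level $\ell^1$ penalty $|v|+|w|$ is only meaningful once minimized over this invariance:

$$\min_{\lambda>0}\ \frac{|v|}{\lambda}+\lambda\,|w| \;=\; 2\sqrt{|v|\,|w|} \;=\; 2\sqrt{|c|},\qquad c:=v\,w,$$

by the AM-GM inequality, attained at $\lambda=\sqrt{|v|/|w|}$. The effective penalty on the end-to-end coefficient $c$ is thus $2\sqrt{|c|}$ rather than $|c|$: the parameter-level $\ell^1$ norm does not act as an $\ell^1$ norm on the implemented map, which is one way rescaling invariances derail the usual inverse-problem intuitions.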
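The butterfly structure mentioned in the last sentence can be demonstrated directly. Below is a minimal NumPy sketch (ours, not code from the article; the helper names are illustrative) of the classical radix-2 butterfly factorization of the DFT matrix, $F_N = B_N\,(I_2\otimes B_{N/2})\cdots(I_{N/2}\otimes B_2)\,R_N$, where each butterfly factor has exactly two nonzeros per row and $R_N$ is the bit-reversal permutation.

```python
import numpy as np

def butterfly_factor(n, block):
    """Block-diagonal butterfly factor I_{n/block} (x) B_block, where
    B_m = [[I, D], [I, -D]] and D = diag(exp(-2j*pi*k/m)), k = 0..m/2-1."""
    h = block // 2
    D = np.diag(np.exp(-2j * np.pi * np.arange(h) / block))
    I = np.eye(h)
    B = np.block([[I, D], [I, -D]])       # two nonzeros per row
    return np.kron(np.eye(n // block), B)  # Kronecker-structured sparsity

def bit_reversal_permutation(n):
    """Permutation matrix sending entry k to position bit-reverse(k)."""
    bits = int(np.log2(n))
    perm = [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]
    P = np.zeros((n, n))
    P[np.arange(n), perm] = 1
    return P

n = 8
# Product of log2(n) sparse butterfly factors and a bit-reversal permutation.
F = np.eye(n, dtype=complex)
block = n
while block >= 2:
    F = F @ butterfly_factor(n, block)
    block //= 2
F = F @ bit_reversal_permutation(n)

# The product of sparse factors reproduces the dense DFT matrix.
dft = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
assert np.allclose(F, dft)
```

Each factor costs $O(N)$ to apply, so the product realizes the $O(N \log N)$ FFT; the Kronecker sparsity structures discussed in the article generalize this fixed pattern to learnable sparse factors.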
About the journal
IEEE Signal Processing Magazine publishes tutorial-style articles, columns, and forums covering a wide range of topics in signal processing research and applications. The magazine aims to provide the research, educational, and professional communities with the latest technical developments, issues, and events in the field, and serves as the society's main communication platform, addressing important matters that concern all members.