Rapture of the Deep: Highs and lows of sparsity in a world of depths

IF 9.6 | CAS Quartile 1 (Engineering & Technology) | JCR Q1 (Engineering, Electrical & Electronic)
IEEE Signal Processing Magazine | Pub Date: 2026-03-01 | Epub Date: 2026-04-13 | DOI: 10.1109/MSP.2025.3611564
Rémi Gribonval;Elisa Riccietti;Quoc-Tung Le;Léon Zheng
{"title":"Rapture of the Deep: Highs and lows of sparsity in a world of depths","authors":"Rémi Gribonval;Elisa Riccietti;Quoc-Tung Le;Léon Zheng","doi":"10.1109/MSP.2025.3611564","DOIUrl":null,"url":null,"abstract":"Promoting sparsity in deep networks is a natural way to control their complexity, and it is a timely endeavor since practical neural model sizes have grown to unprecedented levels. The lessons from sparsity in linear inverse problems also bear the promise of many other benefits beyond such computational aspects, from statistical significance to explainability. Can these promises be fulfilled? Can we safely leverage the know-how of sparsity-promoting regularizers for inverse problems to harness sparsity in deeper contexts, linear or not? This article surveys the curses and blessings of deep sparsity. After a reminder on the main lessons from inverse problems, we tour a number of results that challenge their immediate deep extensions, from both a mathematical and a computational perspective. In particular, we highlight that <inline-formula><tex-math>${\\mathit{\\ell}}^{1}$</tex-math></inline-formula> regularization does not always lead to sparsity, and that optimization with a prescribed set of allowed nonzero coefficients can be NP-hard. We emphasize the role of rescaling invariances in these phenomena and the need to favor structured sparsity to keep sparse network training problems under control, ensure their stability, and actually enable efficient network implementations on GPUs. We finally outline the promises and challenges of a flexible family of <italic>Kronecker sparsity structures</i>, which extend the classical butterfly structure and appear in many classical scientific computing applications and that have also recently emerged in deep learning.","PeriodicalId":13246,"journal":{"name":"IEEE Signal Processing Magazine","volume":"43 2","pages":"10-23"},"PeriodicalIF":9.6000,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Magazine","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11480036/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/4/13 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Promoting sparsity in deep networks is a natural way to control their complexity, and it is a timely endeavor since practical neural model sizes have grown to unprecedented levels. The lessons from sparsity in linear inverse problems also bear the promise of many other benefits beyond such computational aspects, from statistical significance to explainability. Can these promises be fulfilled? Can we safely leverage the know-how of sparsity-promoting regularizers for inverse problems to harness sparsity in deeper contexts, linear or not? This article surveys the curses and blessings of deep sparsity. After a reminder of the main lessons from inverse problems, we tour a number of results that challenge their immediate deep extensions, from both a mathematical and a computational perspective. In particular, we highlight that $\ell^{1}$ regularization does not always lead to sparsity, and that optimization with a prescribed set of allowed nonzero coefficients can be NP-hard. We emphasize the role of rescaling invariances in these phenomena and the need to favor structured sparsity to keep sparse network training problems under control, ensure their stability, and actually enable efficient network implementations on GPUs. We finally outline the promises and challenges of a flexible family of Kronecker sparsity structures, which extends the classical butterfly structure, appears in many classical scientific computing applications, and has also recently emerged in deep learning.
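To make the abstract's starting point concrete, here is the classical $\ell^{1}$ formulation it builds on, together with a naive deep extension and the rescaling invariance that undermines it. The deep variant below is our own illustrative notation, not necessarily the paper's exact formulation.

```latex
% Classical sparse linear inverse problem (LASSO): recover a sparse x
% from measurements y \approx Ax by solving
\min_{x \in \mathbb{R}^{n}} \; \tfrac{1}{2}\,\lVert y - Ax \rVert_{2}^{2}
  \;+\; \lambda \lVert x \rVert_{1}

% A naive "deep" extension penalizes every factor of a product of
% matrices (the weight layers of a linear network):
\min_{W_{1},\dots,W_{L}} \; \tfrac{1}{2}\,\lVert Y - W_{L}\cdots W_{1}X \rVert_{F}^{2}
  \;+\; \lambda \sum_{\ell=1}^{L} \lVert W_{\ell} \rVert_{1}

% Rescaling invariance: for any c > 0, the substitution
% (W_{\ell},\, W_{\ell+1}) \mapsto (c\,W_{\ell},\, W_{\ell+1}/c)
% leaves the product W_{L}\cdots W_{1} unchanged but moves the penalty,
% one mechanism by which \ell^{1} regularization can fail to yield
% sparsity in deep settings.
```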
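The rescaling phenomenon is easy to reproduce numerically. The following numpy sketch (a toy two-layer ReLU network of our own, not code from the paper) shows that a pure rescaling leaves the network function untouched while strictly decreasing the $\ell^{1}$ penalty, without creating a single zero weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy, not the paper's code:
# a two-layer ReLU network f(x) = W2 @ relu(W1 @ x).
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((10, 64))

def relu(z):
    return np.maximum(z, 0.0)

def f(x, A, B):
    return B @ relu(A @ x)

def l1(A, B):
    return np.abs(A).sum() + np.abs(B).sum()

x = rng.standard_normal(32)

# ReLU is positively homogeneous: relu(c*z) = c*relu(z) for c > 0, so
# (W1, W2) -> (c*W1, W2/c) computes exactly the same function ...
c = np.sqrt(np.abs(W2).sum() / np.abs(W1).sum())  # penalty-minimizing scale
assert np.allclose(f(x, W1, W2), f(x, c * W1, W2 / c))

# ... yet strictly decreases the l1 penalty, down to
# 2*sqrt(||W1||_1 * ||W2||_1), without zeroing any weight:
print(f"before rescaling: {l1(W1, W2):.1f}")
print(f"after  rescaling: {l1(c * W1, W2 / c):.1f}")
```

The penalty is minimized here by balancing scales across layers rather than by creating zeros, which hints at why the abstract argues for structured sparsity instead of naive $\ell^{1}$ penalties.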
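The butterfly structure mentioned at the end of the abstract can likewise be made concrete: the radix-2 Cooley-Tukey FFT is exactly a factorization of the DFT matrix into $\log_2 N$ sparse factors with two nonzeros per row. The sketch below is our own construction, assuming the standard decimation-in-time factorization, and verifies it against the dense DFT matrix.

```python
import numpy as np
from functools import reduce

def butterfly_block(m):
    """B_m = [[I, D], [I, -D]] with D = diag(w^0, ..., w^{m/2-1}),
    w = exp(-2j*pi/m): two nonzeros per row."""
    h = m // 2
    D = np.diag(np.exp(-2j * np.pi * np.arange(h) / m))
    I = np.eye(h)
    return np.block([[I, D], [I, -D]])

def butterfly_factorization(N):
    """Radix-2 Cooley-Tukey as a matrix factorization (our sketch):
    F_N = A_1 @ ... @ A_L @ P with L = log2(N), where each factor
    A_s = I_{2^(s-1)} (Kronecker product) B_{N/2^(s-1)} has two
    nonzeros per row and P is the bit-reversal permutation."""
    L = int(np.log2(N))
    factors = [np.kron(np.eye(2 ** s), butterfly_block(N // 2 ** s))
               for s in range(L)]
    bitrev = [int(format(i, f"0{L}b")[::-1], 2) for i in range(N)]
    P = np.eye(N)[bitrev]
    return factors, P

N = 16
factors, P = butterfly_factorization(N)
F = reduce(np.matmul, factors) @ P
dft = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
assert np.allclose(F, dft)  # N^2 dense entries vs ~2*N*log2(N) nonzeros
```

Each factor carries only two nonzeros per row, so a matrix-vector product through the factors costs O(N log N) instead of O(N^2), which is the computational payoff of Kronecker-structured sparsity that the abstract points to.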
Source Journal
IEEE Signal Processing Magazine
Category: Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 27.20
Self-citation rate: 0.70%
Annual articles: 123
Review time: 6-12 weeks
Journal description: IEEE Signal Processing Magazine is a publication that focuses on signal processing research and applications. It publishes tutorial-style articles, columns, and forums covering a wide range of signal processing topics. The magazine aims to provide the research, educational, and professional communities with the latest technical developments, issues, and events in the field, and serves as the main communication platform of the society.