A Group Regularization Framework of Convolutional Neural Networks Based on the Impact of Lₚ Regularizers on Magnitude

Impact Factor 8.6 · CAS Tier 1 (Computer Science) · JCR Q1 · AUTOMATION & CONTROL SYSTEMS
Feng Li;Yaokai Hu;Huisheng Zhang;Ansheng Deng;Jacek M. Zurada
DOI: 10.1109/TSMC.2024.3453549
Journal: IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 12, pp. 7434–7444
Published: 2024-09-26 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10695098/
Citations: 0

Abstract

Group regularization is commonly employed in network pruning to achieve structured model compression. However, the rationale behind existing studies on group regularization predominantly hinges on the sparsity capabilities of $L_{p}$ regularizers. This singular focus may lead to erroneous interpretations. In response to these limitations, this article proposes a novel framework for evaluating the penalization efficacy of group regularization methods by analyzing the impact of $L_{p}$ regularizers on weight magnitudes and weight group magnitudes. Within this framework, we demonstrate that $L_{1,2}$ regularization, contrary to prevailing literature, indeed exhibits favorable performance in structured pruning tasks. Motivated by this insight, we introduce a hybrid group regularization approach that integrates $L_{1,2}$ regularization and group $L_{1/2}$ regularization (denoted as HG$L_{1,2}$&$L_{1/2}$). This novel method addresses the challenge of selecting appropriate $L_{p}$ regularizers for penalizing weight groups by leveraging $L_{1,2}$ regularization for groups with magnitudes exceeding a critical threshold while employing group $L_{1/2}$ regularization for the remaining groups. Experimental evaluations verify the effectiveness of the proposed hybrid group regularization method and the viability of the introduced framework.
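The magnitude-dependent switching rule described above can be sketched in a few lines. This is only an illustrative reading of the abstract, not the paper's implementation: the exact definitions of the $L_{1,2}$ and group $L_{1/2}$ terms, the threshold `tau`, and the coefficient `lam` are assumptions here (the $L_{1,2}$ term is written in the common exclusive-sparsity form, the squared $\ell_1$ norm of each group; the group $L_{1/2}$ term as the square root of each group's $\ell_2$ norm).

```python
import numpy as np

def hybrid_group_penalty(groups, tau, lam=1e-4):
    """Sketch of a hybrid HG L_{1,2} & L_{1/2} penalty in the spirit of the
    abstract. Assumed conventions (check the paper for the exact forms):
      - L_{1,2} term for a group w_g:   ||w_g||_1 ** 2
      - group L_{1/2} term for w_g:     ||w_g||_2 ** 0.5
    Groups whose L2 magnitude exceeds `tau` receive the L_{1,2} term;
    all other groups receive the group L_{1/2} term.
    """
    penalty = 0.0
    for w_g in groups:
        mag = np.linalg.norm(w_g)  # group magnitude ||w_g||_2
        if mag > tau:
            penalty += np.sum(np.abs(w_g)) ** 2  # assumed L_{1,2}-style term
        else:
            penalty += np.sqrt(mag)              # assumed group L_{1/2} term
    return lam * penalty
```

In a structured-pruning setting, each `w_g` would typically be the weights of one filter or channel, and the penalty would be added to the task loss during training so that low-magnitude groups are driven toward zero and can be pruned.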
Source Journal
IEEE Transactions on Systems Man Cybernetics-Systems
AUTOMATION & CONTROL SYSTEMS · COMPUTER SCIENCE, CYBERNETICS
CiteScore: 18.50
Self-citation rate: 11.50%
Articles per year: 812
Review time: 6 months
Journal description: The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.