Self identity mapping

IF 6.3 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Xiuding Cai, Yaoyao Zhu, Linjie Fu, Dong Miao, Yu Yao
{"title":"Self identity mapping","authors":"Xiuding Cai ,&nbsp;Yaoyao Zhu ,&nbsp;Linjie Fu ,&nbsp;Dong Miao ,&nbsp;Yu Yao","doi":"10.1016/j.neunet.2025.108132","DOIUrl":null,"url":null,"abstract":"<div><div>Regularization is essential in deep learning to enhance generalization and mitigate overfitting. However, conventional techniques often rely on heuristics, making them less reliable or effective across diverse settings. We propose Self Identity Mapping (SIM), a simple yet effective, data-intrinsic regularization framework that leverages an inverse mapping mechanism to enhance representation learning. By reconstructing the input from its transformed output, SIM reduces information loss during forward propagation and facilitates smoother gradient flow. To address computational inefficiencies, We instantiate SIM as <span><math><mrow><mi>ρ</mi><mtext>SIM</mtext></mrow></math></span> by incorporating patch-level feature sampling and projection-based method to reconstruct latent features, effectively lowering complexity. As a model-agnostic, task-agnostic regularizer, SIM can be seamlessly integrated as a plug-and-play module, making it applicable to different network architectures and tasks. We extensively evaluate <span><math><mrow><mi>ρ</mi><mtext>SIM</mtext></mrow></math></span> across three tasks: image classification, few-shot prompt learning, and domain generalization. Experimental results show consistent improvements over baseline methods, highlighting <span><math><mrow><mi>ρ</mi><mtext>SIM</mtext></mrow></math></span>’s ability to enhance representation learning across various tasks. We also demonstrate that <span><math><mrow><mi>ρ</mi><mtext>SIM</mtext></mrow></math></span> is orthogonal to existing regularization methods, boosting their effectiveness. Moreover, our results confirm that <span><math><mrow><mi>ρ</mi><mtext>SIM</mtext></mrow></math></span> effectively preserves semantic information and enhances performance in dense-to-dense tasks, such as semantic segmentation and image translation, as well as in non-visual domains including audio classification and time series anomaly detection.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108132"},"PeriodicalIF":6.3000,"publicationDate":"2025-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025010123","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Regularization is essential in deep learning to enhance generalization and mitigate overfitting. However, conventional techniques often rely on heuristics, making them less reliable or effective across diverse settings. We propose Self Identity Mapping (SIM), a simple yet effective, data-intrinsic regularization framework that leverages an inverse mapping mechanism to enhance representation learning. By reconstructing the input from its transformed output, SIM reduces information loss during forward propagation and facilitates smoother gradient flow. To address computational inefficiencies, we instantiate SIM as ρSIM by incorporating patch-level feature sampling and a projection-based method to reconstruct latent features, effectively lowering complexity. As a model-agnostic, task-agnostic regularizer, SIM can be seamlessly integrated as a plug-and-play module, making it applicable to different network architectures and tasks. We extensively evaluate ρSIM across three tasks: image classification, few-shot prompt learning, and domain generalization. Experimental results show consistent improvements over baseline methods, highlighting ρSIM’s ability to enhance representation learning across various tasks. We also demonstrate that ρSIM is orthogonal to existing regularization methods, boosting their effectiveness. Moreover, our results confirm that ρSIM effectively preserves semantic information and enhances performance in dense-to-dense tasks, such as semantic segmentation and image translation, as well as in non-visual domains including audio classification and time series anomaly detection.
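To make the core idea concrete, the following PyTorch sketch shows one way a SIM-style auxiliary loss could be wired into a training loop: a small learnable "inverse" head maps the network's latent features back toward (a flattened copy of) the input and penalizes the reconstruction error. This is a rough illustration under stated assumptions, not the authors' reference implementation; the class name SIMRegularizer, the single linear inverse head, the MSE objective, and the use of the full flattened input as the target (in place of the paper's patch-level feature sampling and projection scheme) are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SIMRegularizer(nn.Module):
    """Auxiliary inverse-mapping penalty in the spirit of SIM (illustrative only).

    A learnable head maps latent features back toward the (flattened) input,
    and the mismatch is added to the task loss as a regularization term.
    """

    def __init__(self, feat_dim: int, target_dim: int, weight: float = 0.1):
        super().__init__()
        # Hypothetical projection-based "inverse" head; the paper's ρSIM uses
        # patch-level sampling and projection, which is not reproduced here.
        self.inverse_head = nn.Linear(feat_dim, target_dim)
        self.weight = weight

    def forward(self, features: torch.Tensor, inputs: torch.Tensor) -> torch.Tensor:
        # features: (B, feat_dim) latent representation from the backbone.
        # inputs:   (B, C, H, W) original inputs; target_dim must equal C*H*W.
        target = inputs.flatten(1)
        recon = self.inverse_head(features)
        return self.weight * F.mse_loss(recon, target)


# Usage sketch: add the penalty to the ordinary task loss.
# sim_reg = SIMRegularizer(feat_dim=512, target_dim=3 * 32 * 32)
# total_loss = task_loss + sim_reg(latent_features, images)
```

Because the regularizer only consumes the backbone's latent features and the original inputs, it can in principle be attached to an existing model and loss without modifying the architecture, which matches the plug-and-play usage described in the abstract.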
Source journal

Neural Networks (Engineering & Technology — Computer Science: Artificial Intelligence)

CiteScore: 13.90
Self-citation rate: 7.70%
Articles published: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.