Fast Transfer Learning Method Using Random Layer Freezing and Feature Refinement Strategy

IF 9.4 · Zone 1 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS
Wandong Zhang, Yimin Yang, Thangarajah Akilan, Q M Jonathan Wu, Tianlong Liu
{"title":"使用随机层冻结和特征细化策略的快速转移学习法","authors":"Wandong Zhang, Yimin Yang, Thangarajah Akilan, Q M Jonathan Wu, Tianlong Liu","doi":"10.1109/TCYB.2024.3483068","DOIUrl":null,"url":null,"abstract":"<p><p>Recently, Moore-Penrose inverse (MPI)-based parameter fine-tuning of fully connected (FC) layers in pretrained deep convolutional neural networks (DCNNs) has emerged within the inductive transfer learning (ITL) paradigm. However, this approach has not gained significant traction in practical applications due to its stringent computational requirements. This work addresses this issue through a novel fast retraining strategy that enhances applicability of the MPI-based ITL. Specifically, during each retraining epoch, a random layer freezing protocol is utilized to manage the number of layers undergoing feature refinement. Additionally, this work incorporates an MPI-based approach for refining the trainable parameters of FC layers under batch processing, contributing to expedited convergence. Extensive experiments on several ImageNet pretrained benchmark DCNNs demonstrate that the proposed ITL achieves competitive performance with excellent convergence speed compared to conventional ITL methods. For instance, the proposed strategy converges nearly 1.5 times faster than retraining the ImageNet pretrained ResNet-50 using stochastic gradient descent with momentum (SGDM).</p>","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"PP ","pages":""},"PeriodicalIF":9.4000,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast Transfer Learning Method Using Random Layer Freezing and Feature Refinement Strategy.\",\"authors\":\"Wandong Zhang, Yimin Yang, Thangarajah Akilan, Q M Jonathan Wu, Tianlong Liu\",\"doi\":\"10.1109/TCYB.2024.3483068\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recently, Moore-Penrose inverse (MPI)-based parameter fine-tuning of fully connected (FC) layers in pretrained deep convolutional neural networks (DCNNs) has emerged within the inductive transfer learning (ITL) paradigm. However, this approach has not gained significant traction in practical applications due to its stringent computational requirements. This work addresses this issue through a novel fast retraining strategy that enhances applicability of the MPI-based ITL. Specifically, during each retraining epoch, a random layer freezing protocol is utilized to manage the number of layers undergoing feature refinement. Additionally, this work incorporates an MPI-based approach for refining the trainable parameters of FC layers under batch processing, contributing to expedited convergence. Extensive experiments on several ImageNet pretrained benchmark DCNNs demonstrate that the proposed ITL achieves competitive performance with excellent convergence speed compared to conventional ITL methods. 
For instance, the proposed strategy converges nearly 1.5 times faster than retraining the ImageNet pretrained ResNet-50 using stochastic gradient descent with momentum (SGDM).</p>\",\"PeriodicalId\":13112,\"journal\":{\"name\":\"IEEE Transactions on Cybernetics\",\"volume\":\"PP \",\"pages\":\"\"},\"PeriodicalIF\":9.4000,\"publicationDate\":\"2024-10-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Cybernetics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/TCYB.2024.3483068\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TCYB.2024.3483068","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Recently, Moore-Penrose inverse (MPI)-based parameter fine-tuning of fully connected (FC) layers in pretrained deep convolutional neural networks (DCNNs) has emerged within the inductive transfer learning (ITL) paradigm. However, this approach has not gained significant traction in practical applications due to its stringent computational requirements. This work addresses this issue through a novel fast retraining strategy that enhances the applicability of MPI-based ITL. Specifically, during each retraining epoch, a random layer freezing protocol is used to manage the number of layers undergoing feature refinement. Additionally, this work incorporates an MPI-based approach for refining the trainable parameters of FC layers under batch processing, contributing to expedited convergence. Extensive experiments on several ImageNet-pretrained benchmark DCNNs demonstrate that the proposed ITL achieves competitive performance with excellent convergence speed compared to conventional ITL methods. For instance, the proposed strategy converges nearly 1.5 times faster than retraining the ImageNet-pretrained ResNet-50 using stochastic gradient descent with momentum (SGDM).
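The paper gives no reference implementation here, but the two core ideas in the abstract can be sketched compactly. Below is a minimal, hypothetical illustration (function names, shapes, and the freezing probability are assumptions, not the authors' code): per retraining epoch, each refinable layer is independently frozen at random to cap the refinement cost, and an unfrozen FC layer's weights are recomputed in closed form as W = H⁺T, where H⁺ is the Moore-Penrose pseudoinverse of the batch of hidden features H and T holds the encoded targets.

```python
# Minimal sketch of the abstract's two ideas: random layer freezing and
# Moore-Penrose inverse (MPI)-based FC refinement. All names, shapes, and
# the freezing probability are illustrative assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def refine_fc_with_mpi(hidden_feats: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Closed-form batch refinement of an FC layer: W = H^+ T, the
    least-squares solution via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(hidden_feats) @ targets   # (n_features, n_classes)

def random_freeze_mask(n_layers: int, p_freeze: float = 0.5) -> np.ndarray:
    """Random layer freezing protocol: each refinable layer is independently
    frozen this epoch with probability p_freeze, capping per-epoch cost."""
    return rng.random(n_layers) < p_freeze

# One toy retraining epoch over a 3-layer FC head on frozen backbone features.
n_samples, n_features, n_classes = 256, 64, 10
H = rng.standard_normal((n_samples, n_features))              # backbone activations
T = np.eye(n_classes)[rng.integers(0, n_classes, n_samples)]  # one-hot labels

for layer_idx, is_frozen in enumerate(random_freeze_mask(n_layers=3)):
    if is_frozen:
        continue                                  # skip this layer this epoch
    W = refine_fc_with_mpi(H, T)                  # one-shot batch MPI update
    print(f"layer {layer_idx}: refined weight matrix of shape {W.shape}")
```

In the actual method, the features feeding each unfrozen FC layer would come from a forward pass through the partially retrained network, with targets propagated accordingly; the sketch collapses that to a single-layer solve to keep the pseudoinverse step visible.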

Source journal
IEEE Transactions on Cybernetics
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · COMPUTER SCIENCE, CYBERNETICS
CiteScore: 25.40
Self-citation rate: 11.00%
Annual articles: 1869
Journal scope: The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the transactions welcomes papers on communication and control across machines or machine, human, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.