Fast Transfer Learning Method Using Random Layer Freezing and Feature Refinement Strategy

Wandong Zhang, Yimin Yang, Thangarajah Akilan, Q M Jonathan Wu, Tianlong Liu

IEEE Transactions on Cybernetics, published 2024-10-30. DOI: 10.1109/TCYB.2024.3483068
Abstract
Recently, Moore-Penrose inverse (MPI)-based parameter fine-tuning of fully connected (FC) layers in pretrained deep convolutional neural networks (DCNNs) has emerged within the inductive transfer learning (ITL) paradigm. However, this approach has not gained significant traction in practical applications due to its stringent computational requirements. This work addresses this issue through a novel fast retraining strategy that enhances the applicability of MPI-based ITL. Specifically, during each retraining epoch, a random layer freezing protocol is utilized to manage the number of layers undergoing feature refinement. Additionally, this work incorporates an MPI-based approach for refining the trainable parameters of FC layers under batch processing, contributing to expedited convergence. Extensive experiments on several ImageNet pretrained benchmark DCNNs demonstrate that the proposed ITL achieves competitive performance with excellent convergence speed compared to conventional ITL methods. For instance, the proposed strategy converges nearly 1.5 times faster than retraining the ImageNet pretrained ResNet-50 using stochastic gradient descent with momentum (SGDM).
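The abstract describes two components: a per-epoch random layer-freezing protocol and a closed-form, Moore-Penrose inverse (MPI)-based refinement of the FC-layer weights. The paper's exact freezing rule and batch-wise update are not given here, so the PyTorch sketch below is only an illustration under assumptions: the function names random_layer_freezing and refine_fc_with_pinv, the freeze_ratio parameter, and the ridge-regularized pseudo-inverse solve are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only; not the paper's exact algorithm.
import torch
import torch.nn as nn
from torchvision import models

def random_layer_freezing(model, freeze_ratio=0.5):
    """Randomly freeze a fraction of the non-FC blocks for the current epoch."""
    blocks = [m for m in model.children() if not isinstance(m, nn.Linear)]
    n_freeze = int(len(blocks) * freeze_ratio)
    frozen = set(torch.randperm(len(blocks))[:n_freeze].tolist())
    for i, block in enumerate(blocks):
        for p in block.parameters():
            p.requires_grad = i not in frozen

def refine_fc_with_pinv(features, targets, ridge=1e-3):
    """Closed-form refinement of an FC layer via a (regularized) Moore-Penrose inverse.

    Solves W = pinv(H) @ T, where H holds the penultimate-layer features of a batch
    (with an appended bias column) and T the one-hot targets; a small ridge term
    keeps the normal equations well conditioned.
    """
    h = torch.cat([features, torch.ones(features.size(0), 1)], dim=1)
    gram = h.T @ h + ridge * torch.eye(h.size(1))
    return torch.linalg.solve(gram, h.T @ targets)  # shape: (feat_dim + 1, num_classes)

if __name__ == "__main__":
    model = models.resnet50(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
    random_layer_freezing(model, freeze_ratio=0.5)

    # Toy batch: 32 penultimate-layer features (2048-d) and 10-class one-hot targets.
    feats = torch.randn(32, 2048)
    targets = torch.eye(10)[torch.randint(0, 10, (32,))]
    fc_weights = refine_fc_with_pinv(feats, targets)
    print(fc_weights.shape)  # torch.Size([2049, 10])
```

In a full retraining loop, each epoch would presumably interleave gradient updates of the unfrozen blocks with this closed-form FC refinement, which is where the abstract attributes the expedited convergence.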
Journal Introduction:
The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the Transactions welcomes papers on communication and control across machines, or between machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.