Improved few shot learning classification methods fused with attention mechanism
Gaihua Wang, Xu Zheng, Lei Cheng, Xizhou Wan, Zhao Guo
2021 IEEE International Conference on Electronic Technology, Communication and Information (ICETCI), 27 August 2021. DOI: 10.1109/ICETCI53161.2021.9563545
Abstract
Deep learning models trained on only a few samples are prone to overfitting, and the embedding module tends to overlook important feature information. To address these problems, this paper proposes an improved few-shot learning classification method that integrates attention mechanisms. A channel attention mechanism and a fusion attention mechanism are embedded at different stages of the network to extract semantic and texture features at different scales. In addition, the Swish activation function is introduced into the embedding module, which reduces the dependence between parameters, alleviates overfitting, and better exploits the nonlinear modeling ability of the few-shot learning network. Experiments on public datasets such as Omniglot and miniImageNet show that the proposed method effectively extracts complex, important feature information, alleviates overfitting when only a small number of training samples are available, and achieves a good performance improvement on image classification tasks.
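To make the described design more concrete, the following is a minimal PyTorch sketch of one plausible embedding-module stage combining channel attention with a Swish activation. It is an illustration under assumptions, not the paper's exact architecture: the squeeze-and-excitation style attention block, the reduction ratio, the conv/batch-norm layout, and the class names are all choices made here for the example; the fusion attention mechanism is not shown.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    The reduction ratio and layer layout are assumptions, not taken from the paper.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.SiLU(),                        # Swish activation: x * sigmoid(x)
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                     # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # reweight channels by learned importance


class ConvBlock(nn.Module):
    """One embedding-module stage: conv -> batch norm -> Swish -> channel attention -> pool."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()                  # Swish in place of ReLU in the embedding module
        self.attn = ChannelAttention(out_ch)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.attn(self.act(self.bn(self.conv(x)))))


if __name__ == "__main__":
    block = ConvBlock(3, 64)
    # miniImageNet images are typically resized to 84x84 in few-shot setups
    print(block(torch.randn(5, 3, 84, 84)).shape)  # -> torch.Size([5, 64, 42, 42])
```

Stacking several such blocks would give a typical four-convolution few-shot embedding network; where exactly the channel and fusion attention blocks sit in the pipeline is specific to the paper and not reproduced here.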