A Fine-Grained Image Classification Model Based on Hybrid Attention and Pyramidal Convolution
Sifeng Wang; Shengxiang Li; Anran Li; Zhaoan Dong; Guangshun Li; Chao Yan
DOI: 10.26599/TST.2024.9010025
Journal: Tsinghua Science and Technology, Vol. 30, No. 3, pp. 1283-1293
Publication date: 2024-12-30
Publication type: Journal Article
Impact factor: 6.6; JCR: Q1 (Multidisciplinary); Region: 1 (Computer Science)
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10817763
Article page: https://ieeexplore.ieee.org/document/10817763/
Citations: 0
Abstract
Finding more specific subcategories within a larger category is the goal of fine-grained image classification (FGIC), and the key is to locate local discriminative regions of visual features. Most existing methods rely on traditional convolutional operations. However, traditional convolution cannot extract multi-scale features of an image, and existing methods are susceptible to interference from image background information. To address these problems, this paper proposes an FGIC model (Attention-PCNN) based on a hybrid attention mechanism and pyramidal convolution. The model feeds the multi-scale features extracted by the pyramidal convolutional neural network into two branches that capture global and local information, respectively. In particular, a hybrid attention mechanism is added to the global branch to reduce the interference of background information and make the model attend more closely to the target region with fine-grained features. In addition, the mutual-channel loss (MC-Loss) is introduced in the local branch to capture fine-grained features. We evaluated the model on three publicly available datasets: CUB-200-2011, Stanford Cars, and FGVC-Aircraft. The results show that Attention-PCNN outperforms state-of-the-art methods.
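The core idea behind pyramidal convolution is to apply kernels of several sizes to the same input and stack their responses, so that a single layer sees the image at multiple receptive-field scales. The following is a minimal illustrative sketch of that idea only, not the paper's implementation: the kernel sizes and the random filter weights are assumptions chosen for demonstration, and a real model would learn the filters and use grouped convolutions over many channels.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive single-channel 2D convolution with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def pyramidal_conv(x, kernel_sizes=(3, 5, 7)):
    """Apply kernels of several sizes to the same input and stack the
    responses channel-wise, yielding a multi-scale feature map."""
    rng = np.random.default_rng(0)
    maps = []
    for k in kernel_sizes:
        # Illustrative random filter; a trained network would learn these weights.
        kernel = rng.standard_normal((k, k)) / (k * k)
        maps.append(conv2d_same(x, kernel))
    return np.stack(maps, axis=0)  # shape: (num_scales, H, W)

feats = pyramidal_conv(np.ones((8, 8)))
print(feats.shape)  # (3, 8, 8)
```

Each of the three output channels encodes the input at a different spatial scale; in Attention-PCNN such multi-scale features are what the global and local branches subsequently process.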
About the journal:
Tsinghua Science and Technology (Tsinghua Sci Technol) started publication in 1996. It is an international academic journal sponsored by Tsinghua University and published bimonthly. The journal aims to present up-to-date scientific achievements in computer science, electronic engineering, and other IT fields. Contributions from all over the world are welcome.