Image and Vision Computing, vol. 163, Article 105744. DOI: 10.1016/j.imavis.2025.105744. Published 2025-09-26. Authors: ShuaiShuai Deng, Tianhua Chen, Qinghua Qiao.
DECF-FGVC: A discriminative enhancement and complementary fusion approach for fine-grained bird visual classification
Fine-grained bird image recognition plays a critical role in species conservation. However, existing approaches are constrained by complex background interference, insufficient extraction of discriminative features, and limited integration of hierarchical information. While Vision Transformers (ViTs) demonstrate superior performance over CNNs in fine-grained classification tasks, they remain vulnerable to background noise, and their class tokens often fail to capture key regions, overlooking the complementarity between low-level details and high-level semantics. This study proposes DECF-FGVC, a novel model incorporating three modules: Patch Contrast Enhancement (PCE), Contrast Token Refiner (CTR), and Hierarchical Token Synthesizer (HTS). These modules synergistically suppress background noise, emphasize key regions, and integrate multi-layer features through attention-weighted image reconstruction, counterfactual learning-based token refinement, and hierarchical token fusion. In extensive experiments on the CUB-200-2011, NABirds, and iNaturalist2017 datasets, DECF-FGVC achieves classification accuracies of 91.9%, 91.4%, and 77.92%, respectively, consistently outperforming state-of-the-art methods.
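The abstract describes two recurring ideas in this line of work: using class-token attention to down-weight background patches, and fusing class tokens drawn from several transformer layers. The sketch below illustrates both in a minimal, framework-free form; the function names, the top-k selection rule, and the simple averaging fusion are assumptions for illustration, not the paper's actual PCE/CTR/HTS implementations.

```python
import numpy as np

def select_salient_patches(attn, patches, keep_ratio=0.5):
    """Keep the patches receiving the highest class-token attention.

    Illustrative stand-in for attention-guided background suppression:
    attn is a (N,) vector of attention weights over N patch embeddings.
    """
    k = max(1, int(len(patches) * keep_ratio))
    top_idx = np.argsort(attn)[::-1][:k]          # indices of the k most-attended patches
    return patches[np.sort(top_idx)]              # preserve spatial order of kept patches

def fuse_layer_tokens(layer_tokens):
    """Average class tokens taken from several layers.

    Illustrative stand-in for hierarchical token fusion: combines
    low-level and high-level representations into one feature.
    """
    return np.mean(np.stack(layer_tokens), axis=0)

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 768))             # 14x14 grid of 768-d patch embeddings
attn = rng.random(196)                            # mock class-token attention weights
salient = select_salient_patches(attn, patches, keep_ratio=0.25)
fused = fuse_layer_tokens([rng.normal(size=768) for _ in range(3)])
print(salient.shape, fused.shape)                 # (49, 768) (768,)
```

In a real ViT pipeline the attention weights would come from the final attention layer's class-token row, and the fusion step would typically be learned (e.g. a small MLP over concatenated tokens) rather than a plain average.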
Journal introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.