Dynamic attention guider network

Chunguang Yue, Jinbao Li, Qichen Wang, Donghuan Zhang

Computing, vol. 155, no. 1, published 2024-07-30
DOI: 10.1007/s00607-024-01328-4 (https://doi.org/10.1007/s00607-024-01328-4)
Impact Factor: 3.3 · JCR: Q2 (Computer Science, Theory & Methods)
Citations: 0
Abstract
Hybrid networks, benefiting from both CNN and Transformer architectures, exhibit stronger feature extraction capabilities than standalone CNNs or Transformers. However, in hybrid networks, the lack of attention in CNNs or insufficient refinement in attention mechanisms hinders the highlighting of target regions. Additionally, the computational cost of self-attention in Transformers poses a challenge to further improving network performance. To address these issues, we propose a novel point-to-point Dynamic Attention Guider (DAG) that dynamically generates multi-scale, large-receptive-field attention to guide CNN networks to focus on target regions. Building upon DAG, we introduce a new hybrid network called the Dynamic Attention Guider Network (DAGN), which effectively combines Dynamic Attention Guider Block (DAGB) modules with Transformers to alleviate the computational cost of self-attention when processing high-resolution input images. Extensive experiments demonstrate that the proposed network outperforms existing state-of-the-art models across various downstream tasks. Specifically, the network achieves a Top-1 classification accuracy of 88.3% on ImageNet1k. For object detection and instance segmentation on COCO, it surpasses the best FocalNet-T model by 1.6 \(AP^b\) and 1.5 \(AP^m\), respectively, while achieving a top performance of 48.2% in semantic segmentation on ADE20K.
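The core idea the abstract describes — generating multi-scale, large-receptive-field attention maps and applying them point-to-point to gate CNN features — can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's actual DAG module: the box filter here stands in for whatever large-kernel operation the authors use, and the scale set `(3, 7, 11)` and sigmoid gating are illustrative assumptions.

```python
import numpy as np

def box_filter(x, k):
    """Average each spatial position over a k x k window (same padding).
    A stand-in for a large-receptive-field depthwise convolution."""
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(x)
    _, H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[:, i, j] = xp[:, i:i + k, j:j + k].mean(axis=(1, 2))
    return out

def dynamic_attention_guide(features, scales=(3, 7, 11)):
    """Generate multi-scale attention maps from the input features and use
    them to gate (re-weight) the features point-to-point, so that regions
    with strong multi-scale context are emphasized.
    Hypothetical simplification of the DAG idea described in the abstract."""
    attn = np.zeros_like(features)
    for k in scales:
        ctx = box_filter(features, k)       # context at receptive field k
        attn += 1.0 / (1.0 + np.exp(-ctx))  # sigmoid gate for this scale
    attn /= len(scales)                     # average the per-scale gates
    return features * attn                  # point-to-point guidance

# toy input: 1 channel, 8x8 feature map with values in [0, 1)
x = np.random.rand(1, 8, 8).astype(np.float32)
y = dynamic_attention_guide(x)
print(y.shape)  # (1, 8, 8)
```

In the paper's actual network the attention generator feeds full CNN stages and is paired with Transformer blocks; the sketch only shows the gating pattern, where each spatial location is re-weighted by attention aggregated over several receptive-field sizes.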
About the journal
Computing publishes original papers, short communications, and surveys on all fields of computing. Contributions should be written in English and may be of a theoretical or applied nature; the essential criteria are computational relevance and a systematic foundation of results.