{"title":"DAMAF: dual attention network with multi-level adaptive complementary fusion for medical image segmentation","authors":"Yueqian Pan, Qiaohong Chen, Xian Fang","doi":"10.1007/s00371-024-03543-8","DOIUrl":null,"url":null,"abstract":"<p>Transformers have been widely applied in medical image segmentation due to their ability to establish excellent long-distance dependency through self-attention. However, relying solely on self-attention makes it difficult to effectively extract rich spatial and channel information from adjacent levels. To address this issue, we propose a novel dual attention model based on a multi-level adaptive complementary fusion mechanism, namely DAMAF. We first employ efficient attention and transpose attention to synchronously capture robust spatial and channel cures in a lightweight manner. Then, we design a multi-level fusion attention block to expand the complementarity of features at each level and enrich the contextual information, thereby gradually enhancing the correlation between high-level and low-level features. In addition, we develop a multi-level skip attention block to strengthen the adjacent-level information of the model through mutual fusion, which improves the feature expression ability of the model. Extensive experiments on the Synapse, ACDC, and ISIC-2018 datasets demonstrate that the proposed DAMAF achieves significantly superior results compared to competitors. Our code is publicly available at https://github.com/PanYging/DAMAF.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"12 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03543-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Transformers have been widely applied to medical image segmentation due to their ability to model long-range dependencies through self-attention. However, relying solely on self-attention makes it difficult to effectively extract rich spatial and channel information from adjacent levels. To address this issue, we propose a novel dual attention model based on a multi-level adaptive complementary fusion mechanism, named DAMAF. We first employ efficient attention and transpose attention to synchronously capture robust spatial and channel cues in a lightweight manner. Then, we design a multi-level fusion attention block to expand the complementarity of features at each level and enrich the contextual information, thereby gradually enhancing the correlation between high-level and low-level features. In addition, we develop a multi-level skip attention block that strengthens adjacent-level information through mutual fusion, improving the feature representation ability of the model. Extensive experiments on the Synapse, ACDC, and ISIC-2018 datasets demonstrate that the proposed DAMAF achieves significantly better results than competing methods. Our code is publicly available at https://github.com/PanYging/DAMAF.
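To make the dual attention design concrete, below is a minimal PyTorch sketch of the two attention forms the abstract names: efficient attention over spatial positions and transpose attention over channels. The module names, head count, and tensor shapes are illustrative assumptions based on the standard formulations of these mechanisms (efficient attention à la Shen et al., transposed channel attention à la Restormer), not the authors' exact implementation; the linked repository contains the real code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EfficientAttention(nn.Module):
    """Linear-complexity spatial attention: softmax is applied to keys over
    positions and to queries over channels, so the context matrix K^T V is
    (d x d) instead of the quadratic (N x N) affinity of vanilla attention."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, N, C) flattened spatial tokens
        b, n, c = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.heads, c // self.heads).transpose(1, 2)
                   for t in (q, k, v))            # each: (B, H, N, d)
        q = q.softmax(dim=-1)                     # normalize queries over channels
        k = k.softmax(dim=-2)                     # normalize keys over positions
        context = k.transpose(-2, -1) @ v         # (B, H, d, d), O(N d^2)
        out = (q @ context).transpose(1, 2).reshape(b, n, c)
        return self.proj(out)


class TransposeAttention(nn.Module):
    """Channel-wise self-attention: the (d x d) affinity is computed across
    channels rather than positions, capturing inter-channel dependencies."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.temperature = nn.Parameter(torch.ones(heads, 1, 1))
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, N, C)
        b, n, c = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.heads, c // self.heads).permute(0, 2, 3, 1)
                   for t in (q, k, v))            # each: (B, H, d, N)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (B, H, d, d)
        out = attn.softmax(dim=-1) @ v            # (B, H, d, N)
        out = out.permute(0, 3, 1, 2).reshape(b, n, c)
        return self.proj(out)
```

Both sketches avoid the O(N^2) spatial affinity of vanilla self-attention, which is consistent with the abstract's claim that spatial and channel cues are captured "in a lightweight manner"; running the two branches on the same tokens and fusing their outputs would give one plausible reading of the dual attention design.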