{"title":"医学图像分割注意机制研究进展","authors":"Jianpeng Zhang, Xiaomin Chen, Bing Yang, Qingbiao Guan, Qi Chen, Jian Chen, Qi Wu, Yutong Xie, Yong Xia","doi":"10.1016/j.cosrev.2024.100721","DOIUrl":null,"url":null,"abstract":"Medical image segmentation plays an important role in computer-aided diagnosis. Attention mechanisms that distinguish important parts from irrelevant parts have been widely used in medical image segmentation tasks. This paper systematically reviews the basic principles of attention mechanisms and their applications in medical image segmentation. First, we review the basic concepts of attention mechanism and formulation. Second, we surveyed about 200 articles related to medical image segmentation, and divided them into three groups based on their attention mechanisms, Pre-Transformer attention, Transformer attention and Mamba-related attention. In each group, we deeply analyze the attention mechanisms from three aspects based on the current literature work, <mml:math altimg=\"si1.svg\" display=\"inline\"><mml:mrow><mml:mi>i</mml:mi><mml:mo>.</mml:mo><mml:mi>e</mml:mi><mml:mo>.</mml:mo></mml:mrow></mml:math>, the principle of the mechanism (what to use), implementation methods (how to use), and application tasks (where to use). We also thoroughly analyzed the advantages and limitations of their applications to different tasks. Finally, we summarize the current state of research and shortcomings in the field, and discuss the potential challenges in the future, including task specificity, robustness, standard evaluation, <ce:italic>etc</ce:italic>. We hope that this review can showcase the overall research context of traditional, Transformer and Mamba attention methods, provide a clear reference for subsequent research, and inspire more advanced attention research, not only in medical image segmentation, but also in other image analysis scenarios. Finally, we maintain the paper list and open-source code at <ce:inter-ref xlink:href=\"https://github.com/Ammexm/Medical-Image-Segmentation\" xlink:type=\"simple\">here</ce:inter-ref>.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"558 1","pages":""},"PeriodicalIF":13.3000,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Advances in attention mechanisms for medical image segmentation\",\"authors\":\"Jianpeng Zhang, Xiaomin Chen, Bing Yang, Qingbiao Guan, Qi Chen, Jian Chen, Qi Wu, Yutong Xie, Yong Xia\",\"doi\":\"10.1016/j.cosrev.2024.100721\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Medical image segmentation plays an important role in computer-aided diagnosis. Attention mechanisms that distinguish important parts from irrelevant parts have been widely used in medical image segmentation tasks. This paper systematically reviews the basic principles of attention mechanisms and their applications in medical image segmentation. First, we review the basic concepts of attention mechanism and formulation. Second, we surveyed about 200 articles related to medical image segmentation, and divided them into three groups based on their attention mechanisms, Pre-Transformer attention, Transformer attention and Mamba-related attention. 
In each group, we deeply analyze the attention mechanisms from three aspects based on the current literature work, <mml:math altimg=\\\"si1.svg\\\" display=\\\"inline\\\"><mml:mrow><mml:mi>i</mml:mi><mml:mo>.</mml:mo><mml:mi>e</mml:mi><mml:mo>.</mml:mo></mml:mrow></mml:math>, the principle of the mechanism (what to use), implementation methods (how to use), and application tasks (where to use). We also thoroughly analyzed the advantages and limitations of their applications to different tasks. Finally, we summarize the current state of research and shortcomings in the field, and discuss the potential challenges in the future, including task specificity, robustness, standard evaluation, <ce:italic>etc</ce:italic>. We hope that this review can showcase the overall research context of traditional, Transformer and Mamba attention methods, provide a clear reference for subsequent research, and inspire more advanced attention research, not only in medical image segmentation, but also in other image analysis scenarios. Finally, we maintain the paper list and open-source code at <ce:inter-ref xlink:href=\\\"https://github.com/Ammexm/Medical-Image-Segmentation\\\" xlink:type=\\\"simple\\\">here</ce:inter-ref>.\",\"PeriodicalId\":48633,\"journal\":{\"name\":\"Computer Science Review\",\"volume\":\"558 1\",\"pages\":\"\"},\"PeriodicalIF\":13.3000,\"publicationDate\":\"2025-01-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Science Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1016/j.cosrev.2024.100721\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Science Review","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1016/j.cosrev.2024.100721","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Advances in attention mechanisms for medical image segmentation
Medical image segmentation plays an important role in computer-aided diagnosis. Attention mechanisms, which distinguish important parts from irrelevant parts, have been widely used in medical image segmentation tasks. This paper systematically reviews the basic principles of attention mechanisms and their applications in medical image segmentation. First, we review the basic concepts and formulation of the attention mechanism. Second, we survey about 200 articles related to medical image segmentation and divide them into three groups based on their attention mechanisms: Pre-Transformer attention, Transformer attention, and Mamba-related attention. In each group, we analyze the attention mechanisms in depth from three aspects grounded in the current literature, i.e., the principle of the mechanism (what to use), implementation methods (how to use), and application tasks (where to use). We also thoroughly analyze the advantages and limitations of their application to different tasks. Finally, we summarize the current state of research and its shortcomings, and discuss potential future challenges, including task specificity, robustness, and standardized evaluation. We hope that this review can showcase the overall research context of traditional, Transformer, and Mamba attention methods, provide a clear reference for subsequent research, and inspire more advanced attention research, not only in medical image segmentation but also in other image analysis scenarios. We maintain the paper list and open-source code at https://github.com/Ammexm/Medical-Image-Segmentation.
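For readers unfamiliar with the Transformer attention formulation surveyed in the review, the snippet below is a minimal sketch of generic scaled dot-product self-attention applied to a flattened feature map. It is an illustrative example written for this listing, not code from the reviewed paper or its repository.

```python
# Minimal sketch of scaled dot-product (Transformer-style) self-attention.
# Illustration only; not the authors' implementation.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """q, k, v: (batch, num_tokens, dim) tensors, e.g. flattened image patches."""
    d = q.size(-1)
    # Similarity between every query and every key, scaled for stable gradients.
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (batch, tokens, tokens)
    weights = F.softmax(scores, dim=-1)           # attention weights sum to 1 per query
    return weights @ v                            # weighted sum of values

# Toy usage: a 16x16 feature map flattened into 256 tokens of dimension 64.
x = torch.randn(1, 256, 64)
out = scaled_dot_product_attention(x, x, x)       # self-attention
print(out.shape)                                  # torch.Size([1, 256, 64])
```

Pre-Transformer attention (e.g., channel or spatial gating) and Mamba-related attention replace this token-to-token weighting with other ways of emphasizing informative features, as categorized in the review.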
Journal introduction:
Computer Science Review, a publication dedicated to research surveys and expository overviews of open problems in computer science, targets a broad audience within the field seeking comprehensive insights into the latest developments. The journal welcomes articles from various fields as long as their content impacts the advancement of computer science. In particular, articles that review the application of well-known Computer Science methods to other areas are in scope only if these articles advance the fundamental understanding of those methods.