Contrast and Gain-Aware Attention: A Plug-and-Play Feature Fusion Attention Module for Torso Region Fetal Plane Identification

IF 2.6 | CAS Tier 3 (Medicine) | JCR Q2 (Acoustics)
Shengjun Zhu , Jiaxin Cai , Runqing Xiong , Liping Zheng , Yang Chen , Duo Ma
{"title":"对比和增益意识注意:即插即用的躯干区域胎儿平面识别特征融合注意模块。","authors":"Shengjun Zhu ,&nbsp;Jiaxin Cai ,&nbsp;Runqing Xiong ,&nbsp;Liping Zheng ,&nbsp;Yang Chen ,&nbsp;Duo Ma","doi":"10.1016/j.ultrasmedbio.2025.08.014","DOIUrl":null,"url":null,"abstract":"<div><div>Accurate identification of fetal torso ultrasound planes is essential in pre-natal examinations, as it plays a critical role in the early detection of severe fetal malformations and this process is heavily dependent on the clinical expertise of health care providers. However, the limited number of medical professionals skilled at identification and the complexity of fetal plane screening underscore the need for efficient diagnostic support tools. Clinicians often encounter challenges such as image artifacts and the intricate nature of fetal planes, which require adjustments to image gain and contrast to obtain clearer diagnostic information. In response to these challenges, we propose the contrast and gain-aware attention mechanism. This method generates images under varying gain and contrast conditions, and utilizes an attention mechanism to mimic the clinician’s decision-making process. The system dynamically allocates attention to images based on these conditions, integrating feature fusion through a lightweight attention module. Positioned in the first layer of the model, this module operates directly on images with different gain and contrast settings. Here we integrated this attention mechanism into ResNet18 and ResNet34 models to predict key fetal torso planes: the transverse view of the abdomen, the sagittal view of the spine, the transverse view of the kidney and the sagittal view of the kidney. Our experimental results showed that this approach significantly enhances performance compared with traditional models, with minimal addition to model parameters, ensuring both efficiency and effectiveness in fetal torso ultrasound plane identification. Our codes are available at <span><span>https://github.com/sysll/CCGAA</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49399,"journal":{"name":"Ultrasound in Medicine and Biology","volume":"51 12","pages":"Pages 2258-2266"},"PeriodicalIF":2.6000,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Contrast and Gain-Aware Attention: A Plug-and-Play Feature Fusion Attention Module for Torso Region Fetal Plane Identification\",\"authors\":\"Shengjun Zhu ,&nbsp;Jiaxin Cai ,&nbsp;Runqing Xiong ,&nbsp;Liping Zheng ,&nbsp;Yang Chen ,&nbsp;Duo Ma\",\"doi\":\"10.1016/j.ultrasmedbio.2025.08.014\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Accurate identification of fetal torso ultrasound planes is essential in pre-natal examinations, as it plays a critical role in the early detection of severe fetal malformations and this process is heavily dependent on the clinical expertise of health care providers. However, the limited number of medical professionals skilled at identification and the complexity of fetal plane screening underscore the need for efficient diagnostic support tools. Clinicians often encounter challenges such as image artifacts and the intricate nature of fetal planes, which require adjustments to image gain and contrast to obtain clearer diagnostic information. In response to these challenges, we propose the contrast and gain-aware attention mechanism. 
This method generates images under varying gain and contrast conditions, and utilizes an attention mechanism to mimic the clinician’s decision-making process. The system dynamically allocates attention to images based on these conditions, integrating feature fusion through a lightweight attention module. Positioned in the first layer of the model, this module operates directly on images with different gain and contrast settings. Here we integrated this attention mechanism into ResNet18 and ResNet34 models to predict key fetal torso planes: the transverse view of the abdomen, the sagittal view of the spine, the transverse view of the kidney and the sagittal view of the kidney. Our experimental results showed that this approach significantly enhances performance compared with traditional models, with minimal addition to model parameters, ensuring both efficiency and effectiveness in fetal torso ultrasound plane identification. Our codes are available at <span><span>https://github.com/sysll/CCGAA</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":49399,\"journal\":{\"name\":\"Ultrasound in Medicine and Biology\",\"volume\":\"51 12\",\"pages\":\"Pages 2258-2266\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ultrasound in Medicine and Biology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S030156292500328X\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ACOUSTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ultrasound in Medicine and Biology","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S030156292500328X","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ACOUSTICS","Score":null,"Total":0}
Citations: 0

Abstract

Accurate identification of fetal torso ultrasound planes is essential in pre-natal examinations, as it plays a critical role in the early detection of severe fetal malformations and this process is heavily dependent on the clinical expertise of health care providers. However, the limited number of medical professionals skilled at identification and the complexity of fetal plane screening underscore the need for efficient diagnostic support tools. Clinicians often encounter challenges such as image artifacts and the intricate nature of fetal planes, which require adjustments to image gain and contrast to obtain clearer diagnostic information. In response to these challenges, we propose the contrast and gain-aware attention mechanism. This method generates images under varying gain and contrast conditions, and utilizes an attention mechanism to mimic the clinician’s decision-making process. The system dynamically allocates attention to images based on these conditions, integrating feature fusion through a lightweight attention module. Positioned in the first layer of the model, this module operates directly on images with different gain and contrast settings. Here we integrated this attention mechanism into ResNet18 and ResNet34 models to predict key fetal torso planes: the transverse view of the abdomen, the sagittal view of the spine, the transverse view of the kidney and the sagittal view of the kidney. Our experimental results showed that this approach significantly enhances performance compared with traditional models, with minimal addition to model parameters, ensuring both efficiency and effectiveness in fetal torso ultrasound plane identification. Our codes are available at https://github.com/sysll/CCGAA.
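The abstract describes generating copies of each ultrasound image under different gain and contrast settings and fusing them with a lightweight attention module placed in front of a ResNet backbone. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' released implementation (their code is at https://github.com/sysll/CCGAA); the variant-generation transforms, the names `make_variants`, `ContrastGainAttention`, and `CCGAAResNet18`, and the pooling-based scoring head are all illustrative assumptions.

```python
# Hypothetical sketch of a contrast/gain-aware fusion front end.
# Not the authors' code; see https://github.com/sysll/CCGAA for the original.
import torch
import torch.nn as nn
import torchvision


def make_variants(x, gains=(0.8, 1.2), contrasts=(0.8, 1.2)):
    """Generate gain- and contrast-adjusted copies of a batch of images.

    Assumes pixel values in [0, 1]. "Gain" is modeled as a brightness scale
    and "contrast" as scaling around the per-image mean; the paper's exact
    transforms may differ.
    """
    variants = [x]  # keep the original image as one variant
    mean = x.mean(dim=(2, 3), keepdim=True)
    for g in gains:
        variants.append((x * g).clamp(0.0, 1.0))
    for c in contrasts:
        variants.append(((x - mean) * c + mean).clamp(0.0, 1.0))
    return torch.stack(variants, dim=1)  # (B, V, C, H, W)


class ContrastGainAttention(nn.Module):
    """Lightweight attention that scores each gain/contrast variant and
    fuses them into a single image for the backbone."""

    def __init__(self, in_channels=3, hidden=8):
        super().__init__()
        # Tiny scoring head: global average pooling followed by 1x1 convs.
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 1),
        )

    def forward(self, x):                               # x: (B, V, C, H, W)
        b, v, c, h, w = x.shape
        scores = self.score(x.flatten(0, 1)).view(b, v, 1, 1, 1)
        weights = torch.softmax(scores, dim=1)          # attention over variants
        return (weights * x).sum(dim=1)                 # fused image (B, C, H, W)


class CCGAAResNet18(nn.Module):
    """ResNet18 with the fusion module as its first layer; 4 output classes
    correspond to the four fetal torso planes named in the abstract."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.fusion = ContrastGainAttention(in_channels=3)
        self.backbone = torchvision.models.resnet18(num_classes=num_classes)

    def forward(self, x):                               # x: (B, 3, H, W)
        fused = self.fusion(make_variants(x))
        return self.backbone(fused)


if __name__ == "__main__":
    model = CCGAAResNet18()
    logits = model(torch.rand(2, 3, 224, 224))
    print(logits.shape)                                 # torch.Size([2, 4])
```

Fusing at the image level, before the first convolution, keeps the added cost tiny (the scoring head here has only a few dozen parameters), which is consistent with the abstract's claim of minimal parameter overhead on top of the ResNet backbone.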
Source journal: Ultrasound in Medicine and Biology
CiteScore: 6.20
Self-citation rate: 6.90%
Articles per year: 325
Review time: 70 days
Journal description: Ultrasound in Medicine and Biology is the official journal of the World Federation for Ultrasound in Medicine and Biology. The journal publishes original contributions that demonstrate a novel application of an existing ultrasound technology in clinical diagnostic, interventional and therapeutic applications, new and improved clinical techniques, the physics, engineering and technology of ultrasound in medicine and biology, and the interactions between ultrasound and biological systems, including bioeffects. Papers that simply utilize standard diagnostic ultrasound as a measuring tool will be considered out of scope. Extended critical reviews of subjects of contemporary interest in the field are also published, in addition to occasional editorial articles, clinical and technical notes, book reviews, letters to the editor and a calendar of forthcoming meetings. It is the aim of the journal fully to meet the information and publication requirements of the clinicians, scientists, engineers and other professionals who constitute the biomedical ultrasonic community.