Shengjun Zhu, Jiaxin Cai, Runqing Xiong, Liping Zheng, Yang Chen, Duo Ma
{"title":"Contrast and Gain-Aware Attention: A Plug-and-Play Feature Fusion Attention Module for Torso Region Fetal Plane Identification","authors":"Shengjun Zhu , Jiaxin Cai , Runqing Xiong , Liping Zheng , Yang Chen , Duo Ma","doi":"10.1016/j.ultrasmedbio.2025.08.014","DOIUrl":null,"url":null,"abstract":"<div><div>Accurate identification of fetal torso ultrasound planes is essential in pre-natal examinations, as it plays a critical role in the early detection of severe fetal malformations and this process is heavily dependent on the clinical expertise of health care providers. However, the limited number of medical professionals skilled at identification and the complexity of fetal plane screening underscore the need for efficient diagnostic support tools. Clinicians often encounter challenges such as image artifacts and the intricate nature of fetal planes, which require adjustments to image gain and contrast to obtain clearer diagnostic information. In response to these challenges, we propose the contrast and gain-aware attention mechanism. This method generates images under varying gain and contrast conditions, and utilizes an attention mechanism to mimic the clinician’s decision-making process. The system dynamically allocates attention to images based on these conditions, integrating feature fusion through a lightweight attention module. Positioned in the first layer of the model, this module operates directly on images with different gain and contrast settings. Here we integrated this attention mechanism into ResNet18 and ResNet34 models to predict key fetal torso planes: the transverse view of the abdomen, the sagittal view of the spine, the transverse view of the kidney and the sagittal view of the kidney. Our experimental results showed that this approach significantly enhances performance compared with traditional models, with minimal addition to model parameters, ensuring both efficiency and effectiveness in fetal torso ultrasound plane identification. 
Our codes are available at <span><span>https://github.com/sysll/CCGAA</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49399,"journal":{"name":"Ultrasound in Medicine and Biology","volume":"51 12","pages":"Pages 2258-2266"},"PeriodicalIF":2.6000,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ultrasound in Medicine and Biology","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S030156292500328X","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ACOUSTICS","Score":null,"Total":0}
Citations: 0
Abstract
Accurate identification of fetal torso ultrasound planes is essential in prenatal examinations: it plays a critical role in the early detection of severe fetal malformations, and it depends heavily on the clinical expertise of health care providers. However, the limited number of medical professionals skilled at identification, together with the complexity of fetal plane screening, underscores the need for efficient diagnostic support tools. Clinicians often encounter challenges such as image artifacts and the intricate nature of fetal planes, which require adjustments to image gain and contrast to obtain clearer diagnostic information. In response to these challenges, we propose a contrast and gain-aware attention mechanism. This method generates images under varying gain and contrast conditions and uses an attention mechanism to mimic the clinician's decision-making process. The system dynamically allocates attention across these conditions, integrating the resulting features through a lightweight attention module. Positioned in the first layer of the model, this module operates directly on images with different gain and contrast settings. We integrated this attention mechanism into ResNet18 and ResNet34 models to identify key fetal torso planes: the transverse view of the abdomen, the sagittal view of the spine, the transverse view of the kidney, and the sagittal view of the kidney. Our experimental results show that this approach significantly improves performance over the baseline models while adding few parameters, ensuring both efficiency and effectiveness in fetal torso ultrasound plane identification. Our code is available at https://github.com/sysll/CCGAA.
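The authors' implementation is at the repository linked above. Purely as an illustration of the idea the abstract describes (generate variants of an input image under several gain/contrast settings, then fuse them with a lightweight attention weighting), here is a minimal NumPy sketch. It is not the paper's module: the function names, the gain/contrast grid, and the scalar scoring rule are all assumptions made for clarity; in the actual model the scores would come from a small learned layer.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def contrast_gain_variants(img, gains=(0.8, 1.0, 1.2), contrasts=(0.8, 1.0, 1.2)):
    """Generate variants of a [0, 1] grayscale image under different
    gain (brightness) and contrast settings. Returns shape (K, H, W)."""
    mean = img.mean()
    variants = []
    for g in gains:
        for c in contrasts:
            # contrast scales deviation from the mean; gain scales overall level
            v = np.clip(g * (mean + c * (img - mean)), 0.0, 1.0)
            variants.append(v)
    return np.stack(variants)

def cga_fuse(variants, w):
    """Attention-weighted fusion of the variants.
    Each variant gets a global-average descriptor, a score (here a
    hypothetical scalar weight w times the descriptor; a learned layer
    in practice), and the softmax of the scores weights the fusion."""
    k = variants.shape[0]
    desc = variants.reshape(k, -1).mean(axis=1)   # (K,) per-variant descriptor
    scores = softmax(w * desc)                     # (K,) attention weights
    fused = np.tensordot(scores, variants, axes=1) # (H, W) weighted sum
    return fused, scores
```

A usage sketch: `fused, scores = cga_fuse(contrast_gain_variants(img), 5.0)` yields a single fused image of the same spatial size as the input, with `scores` summing to 1, which would then feed the first convolutional layer of the backbone.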
About the journal:
Ultrasound in Medicine and Biology is the official journal of the World Federation for Ultrasound in Medicine and Biology. The journal publishes original contributions that demonstrate a novel application of an existing ultrasound technology in clinical diagnostic, interventional, and therapeutic settings; new and improved clinical techniques; the physics, engineering, and technology of ultrasound in medicine and biology; and the interactions between ultrasound and biological systems, including bioeffects. Papers that simply use standard diagnostic ultrasound as a measuring tool are considered out of scope. Extended critical reviews of subjects of contemporary interest in the field are also published, along with occasional editorial articles, clinical and technical notes, book reviews, letters to the editor, and a calendar of forthcoming meetings. The journal aims to fully meet the information and publication needs of the clinicians, scientists, engineers, and other professionals who constitute the biomedical ultrasound community.