A segmentation network based on CNNs for identifying laryngeal structures in video laryngoscope images
Jinjing Wu, Wenhui Guo, Zhanheng Chen, Huixiu Hu, Houfeng Li, Ying Zhang, Jing Huang, Long Liu, Zhenghao Xu, Tianying Xu, Miao Zhou, Chenglong Zhu, Haipo Cui, Wenyun Xu, Zui Zou
Computerized Medical Imaging and Graphics, Volume 124, Article 102573, published 2025-05-29
DOI: 10.1016/j.compmedimag.2025.102573
Citations: 0
Abstract
Video laryngoscopes have become increasingly vital in tracheal intubation, providing clear imaging that significantly improves success rates, especially for less experienced clinicians. However, accurate recognition of laryngeal structures remains challenging, and it is critical for successful first-attempt intubation in emergency situations. This paper presents MPE-UNet, a deep learning model designed for precise segmentation of laryngeal structures from video laryngoscope images, aiming to assist clinicians in performing tracheal intubation more accurately and efficiently. MPE-UNet follows the classic U-Net architecture, which features an encoder–decoder structure, and enhances it with advanced modules and innovative techniques at every stage. In the encoder, we designed an improved multi-scale feature extraction module, which better processes complex throat images. Additionally, a pyramid fusion attention module was incorporated into the skip connections, enhancing the model’s ability to capture details by dynamically weighting and merging features from different levels. Moreover, a plug-and-play attention mechanism module was integrated into the decoder, further refining the segmentation process by focusing on important features. The experimental results show that the proposed method outperforms state-of-the-art methods.
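The abstract describes dynamically weighting and merging features from different pyramid levels in the skip connections. As a rough illustration of that general idea (not the authors' MPE-UNet implementation, whose details are not given here), the following NumPy sketch upsamples multi-scale feature maps to a common resolution and merges them with per-pixel softmax weights standing in for a learned attention gate:

```python
import numpy as np

def softmax(x, axis=0):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def upsample_nearest(f, factor):
    # nearest-neighbour upsampling of a (H, W) feature map
    return np.repeat(np.repeat(f, factor, axis=0), factor, axis=1)

def pyramid_fuse(features):
    """Merge feature maps from different pyramid levels into one map.

    `features` is a list of (H_i, W_i) arrays, where level i has half the
    resolution of level i-1. Each level is upsampled to the finest
    resolution, then the levels are merged with per-pixel softmax weights
    derived from their own activations (a stand-in for a learned gate).
    """
    target_h = features[0].shape[0]
    ups = [upsample_nearest(f, target_h // f.shape[0]) for f in features]
    stack = np.stack(ups)                 # (levels, H, W)
    weights = softmax(stack, axis=0)      # per-pixel weights over levels
    return (weights * stack).sum(axis=0)  # weighted merge -> (H, W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    levels = [rng.standard_normal((8 // 2**i, 8 // 2**i)) for i in range(3)]
    fused = pyramid_fuse(levels)
    print(fused.shape)  # (8, 8)
```

In a trained network the per-pixel weights would come from learned convolutions rather than the raw activations; the sketch only shows the weight-and-merge mechanics.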
About the journal:
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.