Improving deep learning U-Net++ by discrete wavelet and attention gate mechanisms for effective pathological lung segmentation in chest X-ray imaging.

Impact Factor 2.4 · CAS Category 4 (Medicine) · JCR Q3 (Engineering, Biomedical)
Faiçal Alaoui Abdalaoui Slimani, M'hamed Bentourkia
{"title":"Improving deep learning U-Net++ by discrete wavelet and attention gate mechanisms for effective pathological lung segmentation in chest X-ray imaging.","authors":"Faiçal Alaoui Abdalaoui Slimani, M'hamed Bentourkia","doi":"10.1007/s13246-024-01489-8","DOIUrl":null,"url":null,"abstract":"<p><p>Since its introduction in 2015, the U-Net architecture used in Deep Learning has played a crucial role in medical imaging. Recognized for its ability to accurately discriminate small structures, the U-Net has received more than 2600 citations in academic literature, which motivated continuous enhancements to its architecture. In hospitals, chest radiography is the primary diagnostic method for pulmonary disorders, however, accurate lung segmentation in chest X-ray images remains a challenging task, primarily due to the significant variations in lung shapes and the presence of intense opacities caused by various diseases. This article introduces a new approach for the segmentation of lung X-ray images. Traditional max-pooling operations, commonly employed in conventional U-Net++ models, were replaced with the discrete wavelet transform (DWT), offering a more accurate down-sampling technique that potentially captures detailed features of lung structures. Additionally, we used attention gate (AG) mechanisms that enable the model to focus on specific regions in the input image, which improves the accuracy of the segmentation process. When compared with current techniques like Atrous Convolutions, Improved FCN, Improved SegNet, U-Net, and U-Net++, our method (U-Net++-DWT) showed remarkable efficacy, particularly on the Japanese Society of Radiological Technology dataset, achieving an accuracy of 99.1%, specificity of 98.9%, sensitivity of 97.8%, Dice Coefficient of 97.2%, and Jaccard Index of 96.3%. Its performance on the Montgomery County dataset further demonstrated its consistent effectiveness. Moreover, when applied to additional datasets of Chest X-ray Masks and Labels and COVID-19, our method maintained high performance levels, achieving up to 99.3% accuracy, thereby underscoring its adaptability and potential for broad applications in medical imaging diagnostics.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physical and Engineering Sciences in Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s13246-024-01489-8","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
引用次数: 0

Abstract

Since its introduction in 2015, the U-Net architecture has played a crucial role in deep-learning-based medical imaging. Recognized for its ability to accurately discriminate small structures, the U-Net has received more than 2600 citations in the academic literature, which has motivated continuous enhancements to its architecture. In hospitals, chest radiography is the primary diagnostic method for pulmonary disorders; however, accurate lung segmentation in chest X-ray images remains challenging, primarily because of the significant variation in lung shapes and the intense opacities caused by various diseases. This article introduces a new approach to the segmentation of lung X-ray images. The max-pooling operations commonly employed in conventional U-Net++ models were replaced with the discrete wavelet transform (DWT), a more accurate down-sampling technique that can capture detailed features of lung structures. Additionally, we used attention gate (AG) mechanisms that enable the model to focus on specific regions of the input image, which improves the accuracy of the segmentation process. Compared with current techniques such as Atrous Convolutions, Improved FCN, Improved SegNet, U-Net, and U-Net++, our method (U-Net++-DWT) showed remarkable efficacy, particularly on the Japanese Society of Radiological Technology dataset, achieving an accuracy of 99.1%, specificity of 98.9%, sensitivity of 97.8%, Dice coefficient of 97.2%, and Jaccard index of 96.3%. Its performance on the Montgomery County dataset further demonstrated its consistent effectiveness. Moreover, when applied to the additional Chest X-ray Masks and Labels and COVID-19 datasets, our method maintained high performance, achieving up to 99.3% accuracy, underscoring its adaptability and potential for broad application in medical imaging diagnostics.
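The abstract describes two architectural changes: DWT-based down-sampling in place of max-pooling, and attention gates on the skip connections. The sketch below is a minimal PyTorch illustration of both ideas, not the authors' implementation; the Haar filter bank, the module names, and the channel arrangement (concatenating the four wavelet sub-bands) are assumptions made for the example.

```python
# Minimal sketch (assumed implementation, not the paper's code) of:
#  1) a level-1 Haar DWT used as a down-sampling layer, and
#  2) an additive attention gate for skip connections.
import torch
import torch.nn as nn

class HaarDWTPool(nn.Module):
    """Halves spatial resolution with a Haar DWT instead of max-pooling.

    Returns the low-frequency (LL) band concatenated with the three
    detail bands (LH, HL, HH) along the channel axis, so fine detail
    is retained rather than discarded as in max-pooling.
    """
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x is (N, C, H, W) with even H and W.
        x01 = x[:, :, 0::2, :] / 2   # even rows
        x02 = x[:, :, 1::2, :] / 2   # odd rows
        x1 = x01[:, :, :, 0::2]      # even rows, even cols
        x2 = x02[:, :, :, 0::2]      # odd rows, even cols
        x3 = x01[:, :, :, 1::2]      # even rows, odd cols
        x4 = x02[:, :, :, 1::2]      # odd rows, odd cols
        ll = x1 + x2 + x3 + x4       # approximation band
        lh = -x1 - x2 + x3 + x4      # horizontal detail
        hl = -x1 + x2 - x3 + x4      # vertical detail
        hh = x1 - x2 - x3 + x4       # diagonal detail
        return torch.cat([ll, lh, hl, hh], dim=1)  # (N, 4C, H/2, W/2)

class AttentionGate(nn.Module):
    """Additive attention gate (Attention U-Net style) for a skip connection."""
    def __init__(self, f_g: int, f_l: int, f_int: int):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(f_g, f_int, 1), nn.BatchNorm2d(f_int))
        self.w_x = nn.Sequential(nn.Conv2d(f_l, f_int, 1), nn.BatchNorm2d(f_int))
        self.psi = nn.Sequential(nn.Conv2d(f_int, 1, 1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g: gating signal from the decoder; x: encoder skip feature map,
        # both at the same spatial resolution.
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # (N, 1, H, W)
        return x * alpha  # suppress irrelevant regions, keep lung structures
```

In a U-Net++-style encoder, `HaarDWTPool` would stand in for each `nn.MaxPool2d(2)`; because the DWT output quadruples the channel count, the following convolution's input channels must be adjusted accordingly, and an `AttentionGate` would weight each skip feature map with the corresponding decoder signal before concatenation.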

Source journal metrics: CiteScore 8.40 · Self-citation rate 4.50% · Annual publications 110