Deep learning for accurate B-line detection and localization in lung ultrasound imaging

Nixson Okila, Andrew Katumba, Joyce Nakatumba-Nabende, Cosmas Mwikirize, Sudi Murindanyi, Jonathan Serugunda, Samuel Bugeza, Anthony Oriekot, Juliet Bossa, Eva Nabawanuka

Frontiers in Artificial Intelligence, vol. 8, article 1560523. Published 2025-04-22 (eCollection 2025). DOI: 10.3389/frai.2025.1560523. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12053239/pdf/
Citations: 0
Abstract
Introduction: Lung ultrasound (LUS) has become an essential imaging modality for assessing various pulmonary conditions, including the presence of B-line artifacts. These artifacts are commonly associated with conditions such as increased extravascular lung water, decompensated heart failure, dialysis-related chronic kidney disease, interstitial lung disease, and COVID-19 pneumonia. Accurate detection of B-lines in LUS images is crucial for effective diagnosis and treatment. However, LUS interpretation is subject to observer variability, requires significant expertise, and poses challenges in resource-limited settings with few trained professionals.
Methods: To address these limitations, deep learning models have been developed for automated B-line detection and localization. This study introduces YOLOv5-PBB and YOLOv8-PBB, two modified models based on YOLOv5 and YOLOv8, respectively, designed for precise and interpretable B-line localization using polygonal bounding boxes (PBBs). YOLOv5-PBB was enhanced by modifying the detection head, loss function, non-maximum suppression, and data loader to enable PBB localization. YOLOv8-PBB was customized to convert segmentation masks into polygonal representations, displaying only the polygon boundaries rather than filled masks. Additionally, an image preprocessing technique was incorporated into the models to enhance LUS image quality. The models were trained on a diverse dataset drawn from a publicly available repository and Ugandan health facilities.
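The abstract describes the mask-to-polygon conversion only at a high level. The sketch below is an assumed illustration, not the authors' implementation, of how a binary B-line segmentation mask can be reduced to a displayed polygon boundary using standard OpenCV contour utilities; the function names, the epsilon_frac simplification parameter, and OpenCV 4.x behavior are assumptions introduced here for illustration.

```python
# Illustrative sketch (not from the paper): turn a binary segmentation mask
# into simplified polygon boundaries and draw only the outlines, mirroring
# the mask-to-polygon display described for YOLOv8-PBB. Assumes OpenCV >= 4.
import cv2
import numpy as np

def mask_to_polygons(mask: np.ndarray, epsilon_frac: float = 0.01):
    """Return simplified polygon boundaries (one per connected region) of a binary mask."""
    binary = (mask > 0).astype(np.uint8) * 255          # 8-bit binary image for contour extraction
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for contour in contours:
        # Simplify the contour; the tolerance scales with the contour perimeter.
        epsilon = epsilon_frac * cv2.arcLength(contour, True)
        poly = cv2.approxPolyDP(contour, epsilon, True)
        polygons.append(poly.reshape(-1, 2))             # (N, 2) array of (x, y) vertices
    return polygons

def draw_polygon_boundaries(image: np.ndarray, polygons, color=(0, 255, 0)):
    """Draw only the polygon outlines (no filled mask) onto a BGR image."""
    for poly in polygons:
        cv2.polylines(image, [poly.astype(np.int32)], True, color, 2)
    return image
```

In a pipeline of this kind, each instance mask predicted by the segmentation head would be passed through a conversion step like mask_to_polygons before rendering, so that only the boundary of the B-line region is overlaid on the ultrasound frame.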
Results: Experimental results showed that YOLOv8-PBB achieved the highest precision (0.947), recall (0.926), and mean average precision (mAP: 0.957). YOLOv5-PBB, while slightly lower in performance (precision: 0.931, recall: 0.918, mAP: 0.936), had advantages in model size (14 MB vs. 21 MB) and average inference time (33.1 ms vs. 47.7 ms), making it more suitable for real-time applications in low-resource settings.
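For readers less familiar with the reported quantities, the following minimal sketch (assumed, not taken from the paper) shows how detection precision and recall are computed from true/false positive counts and how an average per-image inference time of the kind compared above can be benchmarked; `model` is a placeholder for any detector callable, not the authors' code.

```python
# Illustrative sketch (assumed): precision/recall from matched detections and
# average per-image latency for a generic detector callable.
import time

def precision_recall(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # fraction of detections that are correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # fraction of ground-truth B-lines found
    return precision, recall

def average_inference_ms(model, images, warmup: int = 5):
    """Average per-image latency in milliseconds; `model` is any callable detector."""
    for img in images[:warmup]:                        # warm-up runs excluded from timing
        model(img)
    start = time.perf_counter()
    for img in images:
        model(img)
    return 1000.0 * (time.perf_counter() - start) / len(images)
```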
Discussion: The integration of these models into a mobile LUS screening tool provides a promising solution for B-line localization in resource-limited settings, where access to trained professionals is often scarce. The YOLOv5-PBB and YOLOv8-PBB models offer high performance while addressing challenges related to inference speed and model size, making them ideal candidates for mobile deployment in such environments.