Estimating Total Lung Volume from Pixel-Level Thickness Maps of Chest Radiographs Using Deep Learning.
Radiology: Artificial Intelligence · IF 8.1 · Q1 (Computer Science, Artificial Intelligence)
Tina Dorosti, Manuel Schultheiss, Philipp Schmette, Jule Heuchert, Johannes Thalhammer, Florian T Gassert, Thorsten Sellerer, Rafael Schick, Kirsten Taphorn, Korbinian Mechlem, Lorenz Birnbacher, Florian Schaff, Franz Pfeiffer, Daniela Pfeiffer
{"title":"Estimating Total Lung Volume from Pixel-Level Thickness Maps of Chest Radiographs Using Deep Learning.","authors":"Tina Dorosti, Manuel Schultheiss, Philipp Schmette, Jule Heuchert, Johannes Thalhammer, Florian T Gassert, Thorsten Sellerer, Rafael Schick, Kirsten Taphorn, Korbinian Mechlem, Lorenz Birnbacher, Florian Schaff, Franz Pfeiffer, Daniela Pfeiffer","doi":"10.1148/ryai.240484","DOIUrl":null,"url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To estimate the total lung volume (TLV) from real and synthetic frontal chest radiographs (CXR) on a pixel level using lung thickness maps generated by a U-Net deep learning model. Materials and Methods This retrospective study included 5,959 chest CT scans from two public datasets: the lung nodule analysis 2016 (<i>n</i> = 656) and the Radiological Society of North America (RSNA) pulmonary embolism detection challenge 2020 (<i>n</i> = 5,303). Additionally, 72 participants were selected from the Klinikum Rechts der Isar dataset (October 2018 to December 2019), each with a corresponding chest radiograph taken within seven days. Synthetic radiographs and lung thickness maps were generated using forward projection of CT scans and their lung segmentations. A U-Net model was trained on synthetic radiographs to predict lung thickness maps and estimate TLV. Model performance was assessed using mean squared error (MSE), Pearson correlation coefficient <b>(r)</b>, and two-sided Student's t-distribution. Results The study included 72 participants (45 male, 27 female, 33 healthy: mean age 62 years [range 34-80]; 39 with chronic obstructive pulmonary disease: mean age 69 years [range 47-91]). TLV predictions showed low error rates (MSEPublic-Synthetic = 0.16 L<sup>2</sup>, MSEKRI-Synthetic = 0.20 L<sup>2</sup>, MSEKRI-Real = 0.35 L<sup>2</sup>) and strong correlations with CT-derived reference standard TLV (nPublic-Synthetic = 1,191, r = 0.99, <i>P</i> < .001; nKRI-Synthetic = 72, r = 0.97, <i>P</i> < .001; nKRI-Real = 72, r = 0.91, <i>P</i> < .001). When evaluated on different datasets, the U-Net model achieved the highest performance for TLV estimation on the Luna16 test dataset, with the lowest mean squared error (MSE = 0.09 L<sup>2</sup>) and strongest correlation (<i>r</i> = 0.99, <i>P</i> <.001) compared with CT-derived TLV. Conclusion The U-Net-generated pixel-level lung thickness maps successfully estimated TLV for both synthetic and real radiographs. ©RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240484"},"PeriodicalIF":8.1000,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology-Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1148/ryai.240484","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence . This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To estimate the total lung volume (TLV) from real and synthetic frontal chest radiographs (CXR) on a pixel level using lung thickness maps generated by a U-Net deep learning model. Materials and Methods This retrospective study included 5,959 chest CT scans from two public datasets: the lung nodule analysis 2016 (n = 656) and the Radiological Society of North America (RSNA) pulmonary embolism detection challenge 2020 (n = 5,303). Additionally, 72 participants were selected from the Klinikum Rechts der Isar dataset (October 2018 to December 2019), each with a corresponding chest radiograph taken within seven days. Synthetic radiographs and lung thickness maps were generated using forward projection of CT scans and their lung segmentations. A U-Net model was trained on synthetic radiographs to predict lung thickness maps and estimate TLV. Model performance was assessed using mean squared error (MSE), Pearson correlation coefficient (r) , and two-sided Student's t-distribution. Results The study included 72 participants (45 male, 27 female, 33 healthy: mean age 62 years [range 34-80]; 39 with chronic obstructive pulmonary disease: mean age 69 years [range 47-91]). TLV predictions showed low error rates (MSEPublic-Synthetic = 0.16 L2 , MSEKRI-Synthetic = 0.20 L2 , MSEKRI-Real = 0.35 L2 ) and strong correlations with CT-derived reference standard TLV (nPublic-Synthetic = 1,191, r = 0.99, P < .001; nKRI-Synthetic = 72, r = 0.97, P < .001; nKRI-Real = 72, r = 0.91, P < .001). When evaluated on different datasets, the U-Net model achieved the highest performance for TLV estimation on the Luna16 test dataset, with the lowest mean squared error (MSE = 0.09 L2 ) and strongest correlation (r = 0.99, P <.001) compared with CT-derived TLV. Conclusion The U-Net-generated pixel-level lung thickness maps successfully estimated TLV for both synthetic and real radiographs. ©RSNA, 2025.