A radiographic, deep transfer learning framework, adapted to estimate lung opacities from chest x-rays.

Avantika Vardhan, Alex Makhnevich, Pravan Omprakash, David Hirschorn, Matthew Barish, Stuart L Cohen, Theodoros P Zanos
{"title":"一种放射学深度转移学习框架,适用于从胸部x光片估计肺部混浊。","authors":"Avantika Vardhan,&nbsp;Alex Makhnevich,&nbsp;Pravan Omprakash,&nbsp;David Hirschorn,&nbsp;Matthew Barish,&nbsp;Stuart L Cohen,&nbsp;Theodoros P Zanos","doi":"10.1186/s42234-022-00103-0","DOIUrl":null,"url":null,"abstract":"<p><p>Chest radiographs (CXRs) are the most widely available radiographic imaging modality used to detect respiratory diseases that result in lung opacities. CXR reports often use non-standardized language that result in subjective, qualitative, and non-reproducible opacity estimates. Our goal was to develop a robust deep transfer learning framework and adapt it to estimate the degree of lung opacity from CXRs. Following CXR data selection based on exclusion criteria, segmentation schemes were used for ROI (Region Of Interest) extraction, and all combinations of segmentation, data balancing, and classification methods were tested to pick the top performing models. Multifold cross validation was used to determine the best model from the initial selected top models, based on appropriate performance metrics, as well as a novel Macro-Averaged Heatmap Concordance Score (MA HCS). Performance of the best model is compared against that of expert physician annotators, and heatmaps were produced. Finally, model performance sensitivity analysis across patient populations of interest was performed. The proposed framework was adapted to the specific use case of estimation of degree of CXR lung opacity using ordinal multiclass classification. Acquired between March 24, 2020, and May 22, 2020, 38,365 prospectively annotated CXRs from 17,418 patients were used. We tested three neural network architectures (ResNet-50, VGG-16, and ChexNet), three segmentation schemes (no segmentation, lung segmentation, and lateral segmentation based on spine detection), and three data balancing strategies (undersampling, double-stage sampling, and synthetic minority oversampling) using 38,079 CXR images for training, and validation with 286 images as the out-of-the-box dataset that underwent expert radiologist adjudication. Based on the results of these experiments, the ResNet-50 model with undersampling and no ROI segmentation is recommended for lung opacity classification, based on optimal values for the MAE metric and HCS (Heatmap Concordance Score). The degree of agreement between the opacity scores predicted by this model with respect to the two sets of radiologist scores (OR or Original Reader and OOBTR or Out Of Box Reader) in terms of performance metrics is superior to the inter-radiologist opacity score agreement.</p>","PeriodicalId":72363,"journal":{"name":"Bioelectronic medicine","volume":"9 1","pages":"1"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9809517/pdf/","citationCount":"3","resultStr":"{\"title\":\"A radiographic, deep transfer learning framework, adapted to estimate lung opacities from chest x-rays.\",\"authors\":\"Avantika Vardhan,&nbsp;Alex Makhnevich,&nbsp;Pravan Omprakash,&nbsp;David Hirschorn,&nbsp;Matthew Barish,&nbsp;Stuart L Cohen,&nbsp;Theodoros P Zanos\",\"doi\":\"10.1186/s42234-022-00103-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Chest radiographs (CXRs) are the most widely available radiographic imaging modality used to detect respiratory diseases that result in lung opacities. 
CXR reports often use non-standardized language that result in subjective, qualitative, and non-reproducible opacity estimates. Our goal was to develop a robust deep transfer learning framework and adapt it to estimate the degree of lung opacity from CXRs. Following CXR data selection based on exclusion criteria, segmentation schemes were used for ROI (Region Of Interest) extraction, and all combinations of segmentation, data balancing, and classification methods were tested to pick the top performing models. Multifold cross validation was used to determine the best model from the initial selected top models, based on appropriate performance metrics, as well as a novel Macro-Averaged Heatmap Concordance Score (MA HCS). Performance of the best model is compared against that of expert physician annotators, and heatmaps were produced. Finally, model performance sensitivity analysis across patient populations of interest was performed. The proposed framework was adapted to the specific use case of estimation of degree of CXR lung opacity using ordinal multiclass classification. Acquired between March 24, 2020, and May 22, 2020, 38,365 prospectively annotated CXRs from 17,418 patients were used. We tested three neural network architectures (ResNet-50, VGG-16, and ChexNet), three segmentation schemes (no segmentation, lung segmentation, and lateral segmentation based on spine detection), and three data balancing strategies (undersampling, double-stage sampling, and synthetic minority oversampling) using 38,079 CXR images for training, and validation with 286 images as the out-of-the-box dataset that underwent expert radiologist adjudication. Based on the results of these experiments, the ResNet-50 model with undersampling and no ROI segmentation is recommended for lung opacity classification, based on optimal values for the MAE metric and HCS (Heatmap Concordance Score). The degree of agreement between the opacity scores predicted by this model with respect to the two sets of radiologist scores (OR or Original Reader and OOBTR or Out Of Box Reader) in terms of performance metrics is superior to the inter-radiologist opacity score agreement.</p>\",\"PeriodicalId\":72363,\"journal\":{\"name\":\"Bioelectronic medicine\",\"volume\":\"9 1\",\"pages\":\"1\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9809517/pdf/\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Bioelectronic medicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s42234-022-00103-0\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bioelectronic medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s42234-022-00103-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract



Chest radiographs (CXRs) are the most widely available radiographic imaging modality used to detect respiratory diseases that result in lung opacities. CXR reports often use non-standardized language that results in subjective, qualitative, and non-reproducible opacity estimates. Our goal was to develop a robust deep transfer learning framework and adapt it to estimate the degree of lung opacity from CXRs. Following CXR data selection based on exclusion criteria, segmentation schemes were used for ROI (Region Of Interest) extraction, and all combinations of segmentation, data balancing, and classification methods were tested to pick the top-performing models. Multifold cross-validation was used to determine the best model from the initially selected top models, based on appropriate performance metrics as well as a novel Macro-Averaged Heatmap Concordance Score (MA HCS). The performance of the best model was compared against that of expert physician annotators, and heatmaps were produced. Finally, model performance sensitivity analysis across patient populations of interest was performed. The proposed framework was adapted to the specific use case of estimating the degree of CXR lung opacity using ordinal multiclass classification. We used 38,365 prospectively annotated CXRs from 17,418 patients, acquired between March 24, 2020, and May 22, 2020. We tested three neural network architectures (ResNet-50, VGG-16, and ChexNet), three segmentation schemes (no segmentation, lung segmentation, and lateral segmentation based on spine detection), and three data balancing strategies (undersampling, double-stage sampling, and synthetic minority oversampling), using 38,079 CXR images for training and 286 images, adjudicated by expert radiologists, as the out-of-box validation dataset. Based on these experiments, the ResNet-50 model with undersampling and no ROI segmentation is recommended for lung opacity classification, as it achieved optimal values for the MAE (Mean Absolute Error) metric and HCS (Heatmap Concordance Score). Across performance metrics, the agreement between the opacity scores predicted by this model and each of the two sets of radiologist scores (OR, Original Reader, and OOBTR, Out Of Box Reader) was superior to the inter-radiologist opacity score agreement.
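The paper itself provides no code; the following is a minimal, hypothetical sketch of the kind of ResNet-50 transfer-learning setup the abstract describes for ordinal multiclass opacity classification. The number of opacity classes, the loss, and the learning rate are illustrative assumptions, not values taken from the study.

```python
# Hypothetical sketch of a ResNet-50 transfer-learning classifier for lung
# opacity grades. Class count and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_OPACITY_CLASSES = 5  # assumed ordinal scale; the paper's scale may differ

def build_opacity_classifier() -> nn.Module:
    # Start from ImageNet-pretrained weights, the standard transfer-learning starting point.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    # Replace the final fully connected layer with one sized to the opacity classes.
    model.fc = nn.Linear(model.fc.in_features, NUM_OPACITY_CLASSES)
    return model

model = build_opacity_classifier()
criterion = nn.CrossEntropyLoss()  # an ordinal-aware or cost-sensitive loss is another option
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```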
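Undersampling, the balancing strategy the study ultimately recommends, can be illustrated with a short sketch: randomly discard majority-class samples until every class matches the size of the smallest one. The function below is an assumed, generic implementation, not the authors' code.

```python
# Assumed, generic majority-class undersampling: each class is randomly
# reduced to the size of the smallest class before training.
import random
from collections import defaultdict

def undersample(samples, labels, seed=0):
    by_class = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_class[label].append(sample)
    n_min = min(len(group) for group in by_class.values())
    rng = random.Random(seed)
    balanced = []
    for label, group in by_class.items():
        balanced.extend((sample, label) for sample in rng.sample(group, n_min))
    rng.shuffle(balanced)
    return balanced  # list of (sample, label) pairs with equal class counts
```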
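MAE is a natural model-selection metric for ordinal labels because it penalizes a prediction in proportion to how many opacity grades it is off by, rather than treating all misclassifications equally. A minimal illustration, assuming integer opacity scores and made-up example values:

```python
# Illustrative MAE between model-predicted ordinal opacity scores and a
# radiologist's reference scores; the example values are fabricated.
import numpy as np

def mean_absolute_error(predicted, reference):
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.abs(predicted - reference).mean())

print(mean_absolute_error([2, 3, 1, 0], [2, 4, 1, 1]))  # 0.5: off by one grade on half the cases
```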
