Deep learning models for radiography body-part classification and chest radiograph projection/orientation classification: a multi-institutional study.

IF 4.7 · Zone 2 (Medicine) · Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Yasuhito Mitsuyama, Hirotaka Takita, Shannon L Walston, Ko Watanabe, Shoya Ishimaru, Yukio Miki, Daiju Ueda
DOI: 10.1007/s00330-025-12053-7
Journal: European Radiology
Published: 2025-10-22 (Journal Article)
Cited by: 0

Abstract

Objectives: Large-scale radiographic datasets often include errors in labels such as body part or projection, which can undermine automated image analysis. We therefore aimed to develop and externally validate two deep-learning models, one for categorising radiographs by body part and another for identifying the projection and rotation of chest radiographs, using large, diverse datasets.

Materials and methods: We retrospectively collected radiographs from multiple institutions and public repositories. For the first model (Xp-Bodypart-Checker), we included seven categories (Head, Neck, Chest, Incomplete Chest, Abdomen, Pelvis, Extremities). For the second model (CXp-Projection-Rotation-Checker), we classified chest radiographs by projection (anterior-posterior, posterior-anterior, lateral) and rotation (upright, inverted, left rotation, right rotation). Both models were trained, tuned, and internally tested on separate data, then externally tested on radiographs from different institutions. Model performance was assessed using overall accuracy (micro, macro, and weighted) as well as one-vs.-all area under the receiver operating characteristic curve (AUC).
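The evaluation scheme described above (micro-, macro-, and weighted-averaged accuracy plus one-vs.-all AUC) can be sketched with scikit-learn on toy data. This is a hypothetical illustration, not the authors' code: the class count, random scores, and the interpretation of "overall accuracy" as averaged per-class recall are assumptions.

```python
# Hypothetical sketch of the reported metrics, computed with scikit-learn
# on randomly generated toy predictions (not the authors' code or data).
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

rng = np.random.default_rng(0)
n_classes = 7  # e.g. the seven body-part categories
y_true = rng.integers(0, n_classes, size=200)

# Toy probability scores; a real model would supply softmax outputs.
scores = rng.random((200, n_classes))
scores /= scores.sum(axis=1, keepdims=True)
y_pred = scores.argmax(axis=1)

# One-vs.-all AUC: binarise each class against the rest and score it.
auc_per_class = [
    roc_auc_score((y_true == k).astype(int), scores[:, k])
    for k in range(n_classes)
]
auc_macro = float(np.mean(auc_per_class))

# "Overall accuracy (micro, macro, weighted)" interpreted here as
# averaged per-class recall; micro-averaged recall equals plain accuracy.
acc_micro = recall_score(y_true, y_pred, average="micro")
acc_macro = recall_score(y_true, y_pred, average="macro")
acc_weighted = recall_score(y_true, y_pred, average="weighted")
```

With random scores these values hover near chance level; on the paper's models they approach 1.00.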

Results: In the Xp-Bodypart-Checker development phase, we included 429,341 radiographs obtained from Institutions A, B, and MURA. In the CXp-Projection-Rotation-Checker development phase, we included 463,728 chest radiographs from CheXpert, PadChest, and Institution A. The Xp-Bodypart-Checker achieved AUC values of 1.00 (99% CI: 1.00-1.00) for all classes other than Incomplete Chest, which had an AUC value of 0.99 (99% CI: 0.98-1.00). The CXp-Projection-Rotation-Checker demonstrated AUC values of 1.00 (99% CI: 1.00-1.00) across all projection and rotation classifications.

Conclusion: These models help automatically verify image labels in large radiographic databases, improving quality control across multiple institutions.

Key points:
Question: Can deep learning accurately classify radiograph body parts and detect chest radiograph projection/orientation in large, multi-institutional datasets, enhancing metadata consistency for clinical and research workflows?
Findings: Xp-Bodypart-Checker classified radiographs into seven categories with AUC values over 0.99 for all classes, while CXp-Projection-Rotation-Checker achieved AUC values of 1.00 across all projection and rotation classifications.
Clinical relevance: Trained on over 860,000 multi-institutional radiographs, our two deep-learning models classify radiograph body part and chest radiograph projection/rotation, identifying mislabeled data and enhancing data integrity, thereby improving reliability for both clinical use and deep-learning research.

Source journal: European Radiology (Medicine - Nuclear Medicine)
CiteScore: 11.60
Self-citation rate: 8.50%
Articles per year: 874
Review time: 2-4 weeks
Journal description: European Radiology (ER) continuously updates scientific knowledge in radiology through publication of strong original articles and state-of-the-art reviews written by leading radiologists. A well-balanced combination of review articles, original papers, short communications from European radiological congresses, and information on society matters makes ER an indispensable source of current information in this field. It is the journal of the European Society of Radiology and the official journal of a number of societies. From 2004 to 2008, supplements to European Radiology were published under its companion, European Radiology Supplements, ISSN 1613-3749.