Deep learning models for radiography body-part classification and chest radiograph projection/orientation classification: a multi-institutional study.
Yasuhito Mitsuyama, Hirotaka Takita, Shannon L Walston, Ko Watanabe, Shoya Ishimaru, Yukio Miki, Daiju Ueda
European Radiology, published 2025-10-22. DOI: 10.1007/s00330-025-12053-7
Objectives: Large-scale radiographic datasets often include errors in labels such as body part or projection, which can undermine automated image analysis. Therefore, we aimed to develop and externally validate two deep-learning models, one for categorising radiographs by body part and another for identifying the projection and rotation of chest radiographs, using large, diverse datasets.
Materials and methods: We retrospectively collected radiographs from multiple institutions and public repositories. For the first model (Xp-Bodypart-Checker), we included seven categories (Head, Neck, Chest, Incomplete Chest, Abdomen, Pelvis, Extremities). For the second model (CXp-Projection-Rotation-Checker), we classified chest radiographs by projection (anterior-posterior, posterior-anterior, lateral) and rotation (upright, inverted, left rotation, right rotation). Both models were trained, tuned, and internally tested on separate data, then externally tested on radiographs from different institutions. Model performance was assessed using overall accuracy (micro, macro, and weighted) as well as one-vs.-all area under the receiver operating characteristic curve (AUC).
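The methods paragraph names two kinds of evaluation metrics: overall accuracy under micro, macro, and weighted averaging, and a one-vs.-all AUC per class. As an illustrative sketch only (this is not the authors' published code; the class names are taken from the abstract and the labels and scores below are random placeholders), these metrics can be computed with scikit-learn as follows:

```python
# Hedged sketch: computing micro/macro/weighted accuracy and one-vs.-all AUC
# for a seven-class body-part classifier. All data here are random dummies.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

classes = ["Head", "Neck", "Chest", "Incomplete Chest",
           "Abdomen", "Pelvis", "Extremities"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, len(classes), size=200)          # ground-truth class indices
y_score = rng.dirichlet(np.ones(len(classes)), size=200)  # per-class probabilities
y_pred = y_score.argmax(axis=1)                           # predicted class indices

micro_acc = accuracy_score(y_true, y_pred)                       # micro: overall accuracy
macro_acc = recall_score(y_true, y_pred, average="macro")        # macro: unweighted mean of per-class accuracy
weighted_acc = recall_score(y_true, y_pred, average="weighted")  # weighted by class support

# One-vs.-all AUC: score each class against the union of all other classes.
ovr_auc = {name: roc_auc_score((y_true == k).astype(int), y_score[:, k])
           for k, name in enumerate(classes)}

print(f"micro={micro_acc:.3f} macro={macro_acc:.3f} weighted={weighted_acc:.3f}")
for name, auc in ovr_auc.items():
    print(f"{name}: AUC={auc:.3f}")
```

Note that for single-label classification, support-weighted per-class recall coincides with overall accuracy; macro averaging, by contrast, gives a rare class such as Incomplete Chest the same weight as the common ones, which is why the three averages are reported separately.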
Results: In the Xp-Bodypart-Checker development phase, we included 429,341 radiographs obtained from Institutions A, B, and MURA. In the CXp-Projection-Rotation-Checker development phase, we included 463,728 chest radiographs from CheXpert, PadChest, and Institution A. The Xp-Bodypart-Checker achieved AUC values of 1.00 (99% CI: 1.00-1.00) for all classes other than Incomplete Chest, which had an AUC value of 0.99 (99% CI: 0.98-1.00). The CXp-Projection-Rotation-Checker demonstrated AUC values of 1.00 (99% CI: 1.00-1.00) across all projection and rotation classifications.
Conclusion: These models help automatically verify image labels in large radiographic databases, improving quality control across multiple institutions.
Key points:
Question: This study examines how deep learning can accurately classify radiograph body parts and detect chest projection/orientation in large, multi-institutional datasets, enhancing metadata consistency for clinical and research workflows.
Findings: Xp-Bodypart-Checker classified radiographs into seven categories with AUC values of over 0.99 for all classes, while CXp-Projection-Rotation-Checker achieved AUC values of 1.00 across all projection and rotation classifications.
Clinical relevance: Trained on over 860,000 multi-institutional radiographs, our two deep-learning models classify radiograph body part and chest radiograph projection/rotation, identifying mislabeled data and enhancing data integrity, thereby improving reliability for both clinical use and deep-learning research.
Journal introduction:
European Radiology (ER) continuously updates scientific knowledge in radiology through the publication of strong original articles and state-of-the-art reviews written by leading radiologists. A well-balanced combination of review articles, original papers, short communications from European radiological congresses, and information on society matters makes ER an indispensable source of current information in this field.
This is the Journal of the European Society of Radiology, and the official journal of a number of societies.
From 2004-2008 supplements to European Radiology were published under its companion, European Radiology Supplements, ISSN 1613-3749.