Radiology-Artificial Intelligence: Latest Articles

Lessons Learned in Building Expertly Annotated Multi-Institution Datasets and Hosting the RSNA AI Challenges.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.230227
Felipe C Kitamura, Luciano M Prevedello, Errol Colak, Safwan S Halabi, Matthew P Lungren, Robyn L Ball, Jayashree Kalpathy-Cramer, Charles E Kahn, Tyler Richards, Jason F Talbott, George Shih, Hui Ming Lin, Katherine P Andriole, Maryam Vazirabad, Bradley J Erickson, Adam E Flanders, John Mongan
Abstract: The Radiological Society of North America (RSNA) has held artificial intelligence competitions to tackle real-world medical imaging problems at least annually since 2017. This article examines the challenges and processes involved in organizing these competitions, with a specific emphasis on the creation and curation of high-quality datasets. The collection of diverse and representative medical imaging data involves dealing with issues of patient privacy and data security. Furthermore, ensuring quality and consistency in data, which includes expert labeling and accounting for various patient and imaging characteristics, necessitates substantial planning and resources. Overcoming these obstacles requires meticulous project management and adherence to strict timelines. The article also highlights the potential of crowdsourced annotation to progress medical imaging research. Through the RSNA competitions, an effective global engagement has been realized, resulting in innovative solutions to complex medical imaging problems, thus potentially transforming health care by enhancing diagnostic accuracy and patient outcomes.
Keywords: Use of AI in Education, Artificial Intelligence. © RSNA, 2024.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140499/pdf/
Citations: 0
When the Student Becomes the Master: Boosting Intracranial Hemorrhage Detection Generalizability with Teacher-Student Learning.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.240126
Nathaniel Swinburne
No abstract available. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140502/pdf/
Citations: 0
Impact of AI for Digital Breast Tomosynthesis on Breast Cancer Detection and Interpretation Time.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.230318
Eun Kyung Park, SooYoung Kwak, Weonsuk Lee, Joon Suk Choi, Thijs Kooi, Eun-Kyung Kim
Purpose: To develop an artificial intelligence (AI) model for the diagnosis of breast cancer on digital breast tomosynthesis (DBT) images and to investigate whether it could improve diagnostic accuracy and reduce radiologist reading time.
Materials and Methods: A deep learning AI algorithm was developed and validated for DBT with retrospectively collected examinations (January 2010 to December 2021) from 14 institutions in the United States and South Korea. A multicenter reader study was performed to compare the performance of 15 radiologists (seven breast specialists, eight general radiologists) in interpreting DBT examinations in 258 women (mean age, 56 years ± 13.41 [SD]), including 65 cancer cases, with and without the use of AI. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were evaluated.
Results: The AUC for stand-alone AI performance was 0.93 (95% CI: 0.92, 0.94). With AI, radiologists' AUC improved from 0.90 (95% CI: 0.86, 0.93) to 0.92 (95% CI: 0.88, 0.96) (P = .003) in the reader study. AI showed higher specificity (89.64% [95% CI: 85.34%, 93.94%]) than radiologists (77.34% [95% CI: 75.82%, 78.87%]) (P < .001). When reading with AI, radiologists' sensitivity increased from 85.44% (95% CI: 83.22%, 87.65%) to 87.69% (95% CI: 85.63%, 89.75%) (P = .04), with no evidence of a difference in specificity. Reading time decreased from 54.41 seconds (95% CI: 52.56, 56.27) without AI to 48.52 seconds (95% CI: 46.79, 50.25) with AI (P < .001). Interreader agreement measured by Fleiss κ increased from 0.59 to 0.62.
Conclusion: The AI model showed better diagnostic accuracy than radiologists in breast cancer detection, as well as reduced reading times. The concurrent use of AI in DBT interpretation could improve both accuracy and efficiency.
Keywords: Breast, Computer-Aided Diagnosis (CAD), Tomosynthesis, Artificial Intelligence, Digital Breast Tomosynthesis, Breast Cancer, Computer-Aided Detection, Screening
Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Bae in this issue.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140510/pdf/
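The reader-study metrics reported above (AUC, sensitivity, specificity at a recall threshold) follow directly from per-case labels and suspicion scores. The sketch below is purely illustrative and is not the study's code: the data are synthetic, and the score arrays, the 0.5 recall threshold, and the scikit-learn usage are all assumptions.

```python
# Illustrative sketch (not from the study): summarizing reader performance
# with and without AI assistance from hypothetical per-case suspicion scores.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=258)                        # 1 = cancer, 0 = no cancer (hypothetical)
scores_without_ai = rng.random(258) * 0.5 + y * 0.4     # hypothetical reader suspicion scores
scores_with_ai = rng.random(258) * 0.4 + y * 0.5

def summarize(y_true, scores, threshold=0.5):
    """Return AUC, sensitivity, and specificity at an assumed recall threshold."""
    auc = roc_auc_score(y_true, scores)
    preds = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, preds).ravel()
    return auc, tp / (tp + fn), tn / (tn + fp)

print("without AI:", summarize(y, scores_without_ai))
print("with AI:   ", summarize(y, scores_with_ai))
```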
Citations: 0
Deep Learning-based Approach for Brainstem and Ventricular MR Planimetry: Application in Patients with Progressive Supranuclear Palsy.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.230151
Salvatore Nigro, Marco Filardi, Benedetta Tafuri, Martina Nicolardi, Roberto De Blasi, Alessia Giugno, Valentina Gnoni, Giammarco Milella, Daniele Urso, Stefano Zoccolella, Giancarlo Logroscino
Purpose: To develop a fast and fully automated deep learning (DL)-based method for the MRI planimetric segmentation and measurement of the brainstem and ventricular structures most affected in patients with progressive supranuclear palsy (PSP).
Materials and Methods: In this retrospective study, T1-weighted MR images in healthy controls (n = 84) were used to train DL models for segmenting the midbrain, pons, middle cerebellar peduncle (MCP), superior cerebellar peduncle (SCP), third ventricle, and frontal horns (FHs). Internal, external, and clinical test datasets (n = 305) were used to assess segmentation model reliability. DL masks from test datasets were used to automatically extract midbrain and pons areas and the width of MCP, SCP, third ventricle, and FHs. Automated measurements were compared with those manually performed by an expert radiologist. Finally, these measures were combined to calculate the midbrain to pons area ratio, MR parkinsonism index (MRPI), and MRPI 2.0, which were used to differentiate patients with PSP (n = 71) from those with Parkinson disease (PD) (n = 129).
Results: Dice coefficients above 0.85 were found for all brain regions when comparing manual and DL-based segmentations. A strong correlation was observed between automated and manual measurements (Spearman ρ > 0.80, P < .001). DL-based measurements showed excellent performance in differentiating patients with PSP from those with PD, with an area under the receiver operating characteristic curve above 0.92.
Conclusion: The automated approach successfully segmented and measured the brainstem and ventricular structures. DL-based models may represent a useful approach to support the diagnosis of PSP and potentially other conditions associated with brainstem and ventricular alterations.
Keywords: MR Imaging, Brain/Brain Stem, Segmentation, Quantification, Diagnosis, Convolutional Neural Network
Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Mohajer in this issue.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140505/pdf/
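As a point of reference for the metrics named above: the Dice coefficient and the MRPI ratios have simple closed forms. MRPI is conventionally computed as (pons area / midbrain area) × (MCP width / SCP width), and MRPI 2.0 additionally multiplies by the third ventricle width over the frontal horn width. The sketch below is illustrative only; it does not reproduce the study's measurement conventions, and the example mask and the numeric values are assumptions.

```python
# Illustrative sketch: Dice overlap between two binary masks and the MRPI /
# MRPI 2.0 ratios. All values are hypothetical; not the study's measurement code.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def mrpi(pons_area, midbrain_area, mcp_width, scp_width):
    # Conventional MR parkinsonism index
    return (pons_area / midbrain_area) * (mcp_width / scp_width)

def mrpi2(pons_area, midbrain_area, mcp_width, scp_width, v3_width, fh_width):
    # MRPI 2.0 adds the third ventricle / frontal horn width ratio
    return mrpi(pons_area, midbrain_area, mcp_width, scp_width) * (v3_width / fh_width)

# Hypothetical example values (mm^2 for areas, mm for widths)
print(mrpi(500.0, 95.0, 8.0, 2.5))
print(mrpi2(500.0, 95.0, 8.0, 2.5, 6.0, 35.0))
```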
Citations: 0
Erratum for: Identification of Precise 3D CT Radiomics for Habitat Computation by Machine Learning in Cancer.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.249001
Olivia Prior, Carlos Macarro, Víctor Navarro, Camilo Monreal, Marta Ligero, Alonso Garcia-Ruiz, Garazi Serna, Sara Simonetti, Irene Braña, Maria Vieito, Manuel Escobar, Jaume Capdevila, Annette T Byrne, Rodrigo Dienstmann, Rodrigo Toledo, Paolo Nuciforo, Elena Garralda, Francesco Grussu, Kinga Bernatowicz, Raquel Perez-Lopez
No abstract available. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140500/pdf/
Citations: 0
AI Improves Cancer Detection and Reading Time of Digital Breast Tomosynthesis.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.240219
Min Sun Bae
No abstract available. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140501/pdf/
Citations: 0
Evaluating the Robustness of a Deep Learning Bone Age Algorithm to Clinical Image Variation Using Computational Stress Testing.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.230240
Samantha M Santomartino, Kristin Putman, Elham Beheshtian, Vishwa S Parekh, Paul H Yi
Purpose: To evaluate the robustness of an award-winning bone age deep learning (DL) model to extensive variations in image appearance.
Materials and Methods: In December 2021, the DL bone age model that won the 2017 RSNA Pediatric Bone Age Challenge was retrospectively evaluated using the RSNA validation set (1425 pediatric hand radiographs; internal test set in this study) and the Digital Hand Atlas (DHA) (1202 pediatric hand radiographs; external test set). Each test image underwent seven types of transformations (rotations, flips, brightness, contrast, inversion, laterality marker, and resolution) to represent a range of image appearances, many of which simulate real-world variations. Computational "stress tests" were performed by comparing the model's predictions on baseline and transformed images. Mean absolute differences (MADs) of predicted bone ages compared with radiologist-determined ground truth on baseline versus transformed images were compared using Wilcoxon signed rank tests. The proportion of clinically significant errors (CSEs) was compared using McNemar tests.
Results: There was no evidence of a difference in MAD of the model on the two baseline test sets (RSNA = 6.8 months, DHA = 6.9 months; P = .05), indicating good model generalization to external data. Except for the RSNA dataset images with an appended radiologic laterality marker (P = .86), there were significant differences in MAD for both the DHA and RSNA datasets among other transformation groups (rotations, flips, brightness, contrast, inversion, and resolution). There were significant differences in proportion of CSEs for 57% of the image transformations (19 of 33) performed on the DHA dataset.
Conclusion: Although an award-winning pediatric bone age DL model generalized well to curated external images, it had inconsistent predictions on images that had undergone simple transformations reflective of several real-world variations in image appearance.
Keywords: Pediatrics, Hand, Convolutional Neural Network, Radiography
Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Faghani and Erickson in this issue.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140516/pdf/
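The computational stress test described above amounts to re-running the model on systematically transformed copies of each image and comparing errors against baseline. A minimal illustrative loop is sketched below; predict_bone_age is a hypothetical stand-in for the actual model, the transformation set is a simplified subset of those listed in the abstract, and all data are synthetic.

```python
# Illustrative stress-test sketch: apply simple image transformations, re-run a
# (hypothetical) bone age model, and compare absolute errors with a Wilcoxon test.
import numpy as np
from scipy import ndimage
from scipy.stats import wilcoxon

def predict_bone_age(image: np.ndarray) -> float:
    """Hypothetical placeholder for the DL model (returns 'months'); stand-in only."""
    weights = np.linspace(0.5, 1.5, image.shape[1])   # orientation-sensitive toy predictor
    return float((image * weights).mean())

transforms = {
    "rotate_10deg": lambda im: ndimage.rotate(im, 10, reshape=False),
    "flip_horizontal": lambda im: np.fliplr(im),
    "brightness_up": lambda im: np.clip(im * 1.2, 0, 255),
    "invert": lambda im: 255 - im,
}

images = [np.random.default_rng(i).random((256, 256)) * 255 for i in range(20)]
ground_truth = np.random.default_rng(42).uniform(12, 216, size=20)   # hypothetical ages

baseline_err = np.array([abs(predict_bone_age(im) - gt)
                         for im, gt in zip(images, ground_truth)])
for name, tf in transforms.items():
    err = np.array([abs(predict_bone_age(tf(im)) - gt)
                    for im, gt in zip(images, ground_truth)])
    stat, p = wilcoxon(baseline_err, err)   # paired test on per-image absolute errors
    print(f"{name}: MAD baseline={baseline_err.mean():.1f}, "
          f"transformed={err.mean():.1f}, P={p:.3f}")
```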
Citations: 0
Assistive AI in Lung Cancer Screening: A Retrospective Multinational Study in the United States and Japan.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.230079
Atilla P Kiraly, Corbin A Cunningham, Ryan Najafi, Zaid Nabulsi, Jie Yang, Charles Lau, Joseph R Ledsam, Wenxing Ye, Diego Ardila, Scott M McKinney, Rory Pilgrim, Yun Liu, Hiroaki Saito, Yasuteru Shimamura, Mozziyar Etemadi, David Melnick, Sunny Jansen, Greg S Corrado, Lily Peng, Daniel Tse, Shravya Shetty, Shruthi Prabhakara, David P Naidich, Neeral Beladia, Krish Eswaran
Purpose: To evaluate the impact of an artificial intelligence (AI) assistant for lung cancer screening on multinational clinical workflows.
Materials and Methods: An AI assistant for lung cancer screening was evaluated on two retrospective randomized multireader multicase studies where 627 (141 cancer-positive cases) low-dose chest CT cases were each read twice (with and without AI assistance) by experienced thoracic radiologists (six U.S.-based or six Japan-based radiologists), resulting in a total of 7524 interpretations. Positive cases were defined as those within 2 years before a pathology-confirmed lung cancer diagnosis. Negative cases were defined as those without any subsequent cancer diagnosis for at least 2 years and were enriched for a spectrum of diverse nodules. The studies measured the readers' level of suspicion (on a 0-100 scale), country-specific screening system scoring categories, and management recommendations. Evaluation metrics included the area under the receiver operating characteristic curve (AUC) for level of suspicion and sensitivity and specificity of recall recommendations.
Results: With AI assistance, the radiologists' AUC increased by 0.023 (0.70 to 0.72; P = .02) for the U.S. study and by 0.023 (0.93 to 0.96; P = .18) for the Japan study. Scoring system specificity for actionable findings increased 5.5% (57% to 63%; P < .001) for the U.S. study and 6.7% (23% to 30%; P < .001) for the Japan study. There was no evidence of a difference in corresponding sensitivity between unassisted and AI-assisted reads for the U.S. (67.3% to 67.5%; P = .88) and Japan (98% to 100%; P > .99) studies. Corresponding stand-alone AI AUC system performance was 0.75 (95% CI: 0.70, 0.81) and 0.88 (95% CI: 0.78, 0.97) for the U.S.- and Japan-based datasets, respectively.
Conclusion: The concurrent AI interface improved lung cancer screening specificity in both U.S.- and Japan-based reader studies, meriting further study in additional international screening environments.
Keywords: Assistive Artificial Intelligence, Lung Cancer Screening, CT
Supplemental material is available for this article. Published under a CC BY 4.0 license.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140517/pdf/
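The specificity comparison in the results above is a paired one: the same cases are read by the same readers with and without AI, so discordant recall decisions can be tested with an exact McNemar test. The sketch below is illustrative only and not the study's analysis code; the case counts, flip rates, and use of scipy.stats.binomtest (SciPy ≥ 1.7) are assumptions.

```python
# Illustrative sketch: paired comparison of recall decisions on cancer-negative
# cases made without vs with AI assistance. Data are hypothetical.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(1)
n_negatives = 486                                      # hypothetical negative-case count
recall_without_ai = rng.random(n_negatives) < 0.43     # hypothetical false-positive recalls
flip = rng.random(n_negatives) < 0.15                  # some decisions change with AI
recall_with_ai = np.where(flip, ~recall_without_ai, recall_without_ai)

spec_without = 1 - recall_without_ai.mean()
spec_with = 1 - recall_with_ai.mean()

# Exact McNemar test on discordant pairs (recalled under exactly one condition)
b = int(np.sum(recall_without_ai & ~recall_with_ai))   # recalled only without AI
c = int(np.sum(~recall_without_ai & recall_with_ai))   # recalled only with AI
p_value = binomtest(min(b, c), b + c, 0.5).pvalue      # two-sided by default

print(f"specificity without AI: {spec_without:.3f}, with AI: {spec_with:.3f}")
print(f"discordant pairs b={b}, c={c}, McNemar exact P={p_value:.4f}")
```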
Citations: 0
AI-assisted Analysis to Facilitate Detection of Humeral Lesions on Chest Radiographs.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.230094
Harim Kim, Kyungsu Kim, Seong Je Oh, Sungjoo Lee, Jung Han Woo, Jong Hee Kim, Yoon Ki Cha, Kyunga Kim, Myung Jin Chung
Purpose: To develop an artificial intelligence (AI) system for humeral tumor detection on chest radiographs (CRs) and evaluate the impact on reader performance.
Materials and Methods: In this retrospective study, 14 709 CRs (January 2000 to December 2021) were collected from 13 468 patients, including CT-proven normal (n = 13 116) and humeral tumor (n = 1593) cases. The data were divided into training and test groups. A novel training method called false-positive activation area reduction (FPAR) was introduced to enhance the diagnostic performance by focusing on the humeral region. The AI program and 10 radiologists were assessed using holdout test set 1, wherein the radiologists were tested twice (with and without AI test results). The performance of the AI system was evaluated using holdout test set 2, comprising 10 497 normal images. Receiver operating characteristic analyses were conducted for evaluating model performance.
Results: FPAR application in the AI program improved its performance compared with a conventional model based on the area under the receiver operating characteristic curve (0.87 vs 0.82, P = .04). The proposed AI system also demonstrated improved tumor localization accuracy (80% vs 57%, P < .001). In holdout test set 2, the proposed AI system exhibited a false-positive rate of 2%. AI assistance improved the radiologists' sensitivity, specificity, and accuracy by 8.9%, 1.2%, and 3.5%, respectively (P < .05 for all).
Conclusion: The proposed AI tool incorporating FPAR improved humeral tumor detection on CRs and reduced false-positive results in tumor visualization. It may serve as a supportive diagnostic tool to alert radiologists about humeral abnormalities.
Keywords: Artificial Intelligence, Conventional Radiography, Humerus, Machine Learning, Shoulder, Tumor
Supplemental material is available for this article. © RSNA, 2024.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140509/pdf/
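FPAR itself is only named, not specified, in this abstract. Purely as a generic illustration of the underlying idea (discouraging model activation outside an anatomic region of interest), one could add a masked-activation penalty to a standard classification loss, as sketched below. This is not the paper's FPAR implementation; the function name, the mask source, and the penalty weight are all assumptions.

```python
# Generic illustration (NOT the paper's FPAR method): penalize activation-map
# energy that falls outside a humeral region mask during training.
import torch
import torch.nn.functional as F

def region_penalized_loss(logits, labels, activation_map, region_mask, penalty_weight=0.1):
    """Cross-entropy loss plus a penalty on activation outside the region mask.

    logits:         (N, num_classes) model outputs
    labels:         (N,) integer class labels
    activation_map: (N, H, W) nonnegative saliency/activation maps
    region_mask:    (N, H, W) binary masks of the anatomic region of interest
    """
    cls_loss = F.cross_entropy(logits, labels)
    outside = activation_map * (1.0 - region_mask)                     # activation outside region
    penalty = outside.sum(dim=(1, 2)) / (activation_map.sum(dim=(1, 2)) + 1e-6)
    return cls_loss + penalty_weight * penalty.mean()

# Hypothetical usage with random tensors
logits = torch.randn(4, 2)
labels = torch.randint(0, 2, (4,))
act = torch.rand(4, 32, 32)
mask = (torch.rand(4, 32, 32) > 0.5).float()
print(region_penalized_loss(logits, labels, act, mask))
```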
Citations: 0
Curated and Annotated Dataset of Lung US Images in Zambian Children with Clinical Pneumonia.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.230147
Lauren Etter, Margrit Betke, Ingrid Y Camelo, Christopher J Gill, Rachel Pieciak, Russell Thompson, Libertario Demi, Umair Khan, Alyse Wheelock, Janet Katanga, Bindu N Setty, Ilse Castro-Aragon
See also the commentary by Sitek in this issue. Supplemental material is available for this article.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10982815/pdf/
Citations: 0