Latest Articles from Radiology: Artificial Intelligence

Semisupervised Learning with Report-guided Pseudo Labels for Deep Learning-based Prostate Cancer Detection Using Biparametric MRI.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-07-26 eCollection Date: 2023-09-01 DOI: 10.1148/ryai.230031
Joeran S Bosma, Anindo Saha, Matin Hosseinzadeh, Ivan Slootweg, Maarten de Rooij, Henkjan Huisman
Purpose: To evaluate a novel method of semisupervised learning (SSL) guided by automated sparse information from diagnostic reports to leverage additional data for deep learning-based malignancy detection in patients with clinically significant prostate cancer.

Materials and Methods: This retrospective study included 7756 prostate MRI examinations (6380 patients) performed between January 2014 and December 2020 for model development. An SSL method, report-guided SSL (RG-SSL), was developed for detection of clinically significant prostate cancer using biparametric MRI. RG-SSL, supervised learning (SL), and state-of-the-art SSL methods were trained using 100, 300, 1000, or 3050 manually annotated examinations. Performance on detection of clinically significant prostate cancer by RG-SSL, SL, and SSL was compared on 300 unseen examinations from an external center with a histopathologically confirmed reference standard. Performance was evaluated using receiver operating characteristic (ROC) and free-response ROC analysis. P values for performance differences were generated with a permutation test.

Results: At 100 manually annotated examinations, mean examination-based diagnostic area under the ROC curve (AUC) values for RG-SSL, SL, and the best SSL were 0.86 ± 0.01 (SD), 0.78 ± 0.03, and 0.81 ± 0.02, respectively. Lesion-based detection partial AUCs were 0.62 ± 0.02, 0.44 ± 0.04, and 0.48 ± 0.09, respectively. Examination-based performance of SL with 3050 examinations was matched by RG-SSL with 169 manually annotated examinations, thus requiring 14 times fewer annotations. Lesion-based performance was matched with 431 manually annotated examinations, requiring six times fewer annotations.

Conclusion: RG-SSL outperformed SSL in clinically significant prostate cancer detection and achieved performance similar to SL even at very low annotation budgets.

Keywords: Annotation Efficiency, Computer-aided Detection and Diagnosis, MRI, Prostate Cancer, Semisupervised Deep Learning

Supplemental material is available for this article. Published under a CC BY 4.0 license.
Citations: 4
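The P values above were generated with a permutation test on AUC differences. As an illustrative sketch only (not the authors' code), a paired permutation test for two models scored on the same cases can randomly swap the models' scores per case and recompute the AUC difference:

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC (equivalent to the Mann-Whitney U statistic),
    assuming no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def permutation_test_auc(labels, scores_a, scores_b, n_perm=10000, seed=0):
    """Two-sided paired permutation test for a difference in AUC:
    under the null, each case's two scores are exchangeable, so we
    randomly swap them and count how often the permuted |AUC difference|
    reaches the observed one."""
    rng = np.random.default_rng(seed)
    observed = abs(auc(labels, scores_a) - auc(labels, scores_b))
    count = 0
    for _ in range(n_perm):
        swap = rng.random(len(labels)) < 0.5
        a = np.where(swap, scores_b, scores_a)  # swapped copy of model A
        b = np.where(swap, scores_a, scores_b)  # swapped copy of model B
        if abs(auc(labels, a) - auc(labels, b)) >= observed:
            count += 1
    # add-one smoothing keeps the estimate strictly positive
    return (count + 1) / (n_perm + 1)
```

This generic recipe matches the hypothesis-testing idea described in the abstract; the function names and smoothing choice are assumptions for illustration.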
External Evaluation of a Mammography-based Deep Learning Model for Predicting Breast Cancer in an Ethnically Diverse Population.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-07-26 eCollection Date: 2023-11-01 DOI: 10.1148/ryai.220299
Olasubomi J Omoleye, Anna E Woodard, Frederick M Howard, Fangyuan Zhao, Toshio F Yoshimatsu, Yonglan Zheng, Alexander T Pearson, Maksim Levental, Benjamin S Aribisala, Kirti Kulkarni, Gregory S Karczmar, Olufunmilayo I Olopade, Hiroyuki Abe, Dezheng Huo
Purpose: To externally evaluate a mammography-based deep learning (DL) model (Mirai) in a high-risk, racially diverse population and compare its performance with other mammographic measures.

Materials and Methods: A total of 6435 screening mammograms in 2096 female patients (median age, 56.4 years ± 11.2 [SD]) enrolled in a hospital-based case-control study from 2006 to 2020 were retrospectively evaluated. Pathologically confirmed breast cancer was the primary outcome. Mirai scores were the primary predictors. Breast density and Breast Imaging Reporting and Data System (BI-RADS) assessment categories were comparative predictors. Performance was evaluated using area under the receiver operating characteristic curve (AUC) and concordance index analyses.

Results: Mirai achieved 1- and 5-year AUCs of 0.71 (95% CI: 0.68, 0.74) and 0.65 (95% CI: 0.64, 0.67), respectively. One-year AUCs for nondense versus dense breasts were 0.72 versus 0.58 (P = .10). There was no evidence of a difference in near-term discrimination performance between BI-RADS and Mirai (1-year AUC, 0.73 vs 0.68; P = .34). For longer-term prediction (2-5 years), Mirai outperformed BI-RADS assessment (5-year AUC, 0.63 vs 0.54; P < .001). Using only images of the unaffected breast reduced the discriminatory performance of the DL model (P < .001 at all time points), suggesting that its predictions are likely dependent on the detection of ipsilateral premalignant patterns.

Conclusion: A mammography DL model showed good performance in a high-risk external dataset enriched for African American patients, benign breast disease, and BRCA mutation carriers, and study findings suggest that the model performance is likely driven by the detection of precancerous changes.

Keywords: Breast, Cancer, Computer Applications, Convolutional Neural Network, Deep Learning Algorithms, Informatics, Epidemiology, Machine Learning, Mammography, Oncology, Radiomics

Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Kontos and Kalpathy-Cramer in this issue.
Citations: 0
Deep Learning-assisted Diagnosis of Breast Lesions on US Images: A Multivendor, Multicenter Study.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-07-12 eCollection Date: 2023-09-01 DOI: 10.1148/ryai.220185
Huiling Xiang, Xi Wang, Min Xu, Yuhua Zhang, Shue Zeng, Chunyan Li, Lixian Liu, Tingting Deng, Guoxue Tang, Cuiju Yan, Jinjing Ou, Qingguang Lin, Jiehua He, Peng Sun, Anhua Li, Hao Chen, Pheng-Ann Heng, Xi Lin
Purpose: To evaluate the diagnostic performance of a deep learning (DL) model for breast US across four hospitals and assess its value to readers with different levels of experience.

Materials and Methods: In this retrospective study, a dual attention-based convolutional neural network was built and validated to discriminate malignant tumors from benign tumors by using B-mode and color Doppler US images (n = 45 909, March 2011-August 2018), acquired with 42 types of US machines, of 9895 pathologic analysis-confirmed breast lesions in 8797 patients (27 men and 8770 women; mean age, 47 years ± 12 [SD]). With and without assistance from the DL model, three novice readers with less than 5 years of US experience and two experienced readers with 8 and 18 years of US experience, respectively, interpreted 1024 randomly selected lesions. Differences in the areas under the receiver operating characteristic curves (AUCs) were tested using the DeLong test.

Results: The DL model using both B-mode and color Doppler US images demonstrated expert-level performance at the lesion level, with an AUC of 0.94 (95% CI: 0.92, 0.95) for the internal set. In external datasets, the AUCs were 0.92 (95% CI: 0.90, 0.94) for hospital 1, 0.91 (95% CI: 0.89, 0.94) for hospital 2, and 0.96 (95% CI: 0.94, 0.98) for hospital 3. DL assistance led to improved AUCs (P < .001) for one experienced and three novice radiologists and improved interobserver agreement. The average false-positive rate was reduced by 7.6% (P = .08).

Conclusion: The DL model may help radiologists, especially novice readers, improve accuracy and interobserver agreement of breast tumor diagnosis using US.

Keywords: Ultrasound, Breast, Diagnosis, Breast Cancer, Deep Learning, Ultrasonography

Supplemental material is available for this article. © RSNA, 2023.
Citations: 0
The Subgroup Imperative: Chest Radiograph Classifier Generalization Gaps in Patient, Setting, and Pathology Subgroups.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-07-12 eCollection Date: 2023-09-01 DOI: 10.1148/ryai.220270
Monish Ahluwalia, Mohamed Abdalla, James Sanayei, Laleh Seyyed-Kalantari, Mohannad Hussain, Amna Ali, Benjamin Fine
Purpose: To externally test four chest radiograph classifiers on a large, diverse, real-world dataset with robust subgroup analysis.

Materials and Methods: In this retrospective study, adult posteroanterior chest radiographs (January 2016-December 2020) and associated radiology reports from Trillium Health Partners in Ontario, Canada, were extracted and de-identified. An open-source natural language processing tool was locally validated and used to generate ground truth labels for the 197 540-image dataset based on the associated radiology report. Four classifiers generated predictions on each chest radiograph. Performance was evaluated using accuracy, positive predictive value, negative predictive value, sensitivity, specificity, F1 score, and Matthews correlation coefficient for the overall dataset and for patient, setting, and pathology subgroups.

Results: Classifiers demonstrated 68%-77% accuracy, 64%-75% sensitivity, and 82%-94% specificity on the external testing dataset. Algorithms showed decreased sensitivity for solitary findings (43%-65%), patients younger than 40 years (27%-39%), and patients in the emergency department (38%-60%), and decreased specificity on normal chest radiographs with support devices (59%-85%). Differences in sex and ancestry represented movements along an algorithm's receiver operating characteristic curve.

Conclusion: Performance of deep learning chest radiograph classifiers was subject to patient, setting, and pathology factors, demonstrating that subgroup analysis is necessary to inform implementation and monitor ongoing performance to ensure optimal quality, safety, and equity.

Keywords: Conventional Radiography, Thorax, Ethics, Supervised Learning, Convolutional Neural Network (CNN), Machine Learning Algorithms

Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Huisman and Hannink in this issue.
Citations: 4
TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-07-05 eCollection Date: 2023-09-01 DOI: 10.1148/ryai.230024
Jakob Wasserthal, Hanns-Christian Breit, Manfred T Meyer, Maurice Pradella, Daniel Hinck, Alexander W Sauter, Tobias Heye, Daniel T Boll, Joshy Cyriac, Shan Yang, Michael Bach, Martin Segeroth
Purpose: To present a deep learning segmentation model that can automatically and robustly segment all major anatomic structures on body CT images.

Materials and Methods: In this retrospective study, 1204 CT examinations (from 2012, 2016, and 2020) were used to segment 104 anatomic structures (27 organs, 59 bones, 10 muscles, and eight vessels) relevant for use cases such as organ volumetry, disease characterization, and surgical or radiation therapy planning. The CT images were randomly sampled from routine clinical studies and thus represent a real-world dataset (different ages, abnormalities, scanners, body parts, sequences, and sites). The authors trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model's performance. The trained algorithm was applied to a second dataset of 4004 whole-body CT examinations to investigate age-dependent volume and attenuation changes.

Results: The proposed model showed a high Dice score (0.943) on the test set, which included a wide range of clinical data with major abnormalities. The model significantly outperformed another publicly available segmentation model on a separate dataset (Dice score, 0.932 vs 0.871; P < .001). The aging study demonstrated significant correlations between age and volume and mean attenuation for a variety of organ groups (eg, age and aortic volume [rs = 0.64; P < .001]; age and mean attenuation of the autochthonous dorsal musculature [rs = -0.74; P < .001]).

Conclusion: The developed model enables robust and accurate segmentation of 104 anatomic structures. The annotated dataset (https://doi.org/10.5281/zenodo.6802613) and toolkit (https://www.github.com/wasserth/TotalSegmentator) are publicly available.

Keywords: CT, Segmentation, Neural Networks

Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Sebro and Mongan in this issue.
Citations: 35
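The Dice similarity coefficient used above to evaluate the segmentations is a standard overlap metric between a predicted and a reference binary mask. A minimal sketch (the eps guard against empty masks is an illustrative choice, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|).
    eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap and 0 means no overlap; in multistructure settings such as this one, the per-structure scores are typically averaged.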
A Guide to Cross-Validation for Artificial Intelligence in Medical Imaging.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-07-01 DOI: 10.1148/ryai.220232
Tyler J Bradshaw, Zachary Huemann, Junjie Hu, Arman Rahmim
Artificial intelligence (AI) is increasingly used to automate and improve technologies within the field of medical imaging. A critical step in the development of an AI algorithm is estimating its prediction error through cross-validation (CV). The use of CV can help prevent overoptimism in AI algorithms and can mitigate certain biases associated with hyperparameter tuning and algorithm selection. This article introduces the principles of CV and provides a practical guide on the use of CV for AI algorithm development in medical imaging. Different CV techniques are described, as well as their advantages and disadvantages under different scenarios. Common pitfalls in prediction error estimation and guidance on how to avoid them are also discussed.

Keywords: Education, Research Design, Technical Aspects, Statistics, Supervised Learning, Convolutional Neural Network (CNN)

Supplemental material is available for this article. © RSNA, 2023.
Citations: 2
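As a companion to the guide's description of CV, a minimal pure-Python sketch of K-fold splitting (an illustrative helper, not code from the article): indices are shuffled, partitioned into k near-equal folds, and each fold is held out exactly once as the validation set.

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Return k (train, val) index-list pairs for K-fold cross-validation.
    Every sample appears in exactly one validation fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)        # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]   # k near-equal shuffled folds
    splits = []
    for i in range(k):
        val = sorted(folds[i])
        train = sorted(j for f in folds[:i] + folds[i + 1:] for j in f)
        splits.append((train, val))
    return splits
```

In medical imaging, as the guide cautions, the split should be done at the patient level rather than the image level so that no patient contributes to both training and validation data.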
Conquering the Cobb Angle: A Deep Learning Algorithm for Automated, Hardware-Invariant Measurement of Cobb Angle on Radiographs in Patients with Scoliosis.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-07-01 DOI: 10.1148/ryai.220158
Abhinav Suri, Sisi Tang, Daniel Kargilis, Elena Taratuta, Bruce J Kneeland, Grace Choi, Alisha Agarwal, Nancy Anabaraonye, Winnie Xu, James B Parente, Ashley Terry, Anita Kalluri, Katie Song, Chamith S Rajapakse
Scoliosis is a disease estimated to affect more than 8% of adults in the United States. It is diagnosed with use of radiography by means of manual measurement of the angle between maximally tilted vertebrae on a radiograph (ie, the Cobb angle). However, these measurements are time-consuming, limiting their use in scoliosis surgical planning and postoperative monitoring. In this retrospective study, a pipeline (using the SpineTK architecture) was developed that was trained, validated, and tested on 1310 anterior-posterior images obtained with a low-dose stereoradiographic scanning system and radiographs obtained in patients with suspected scoliosis to automatically measure Cobb angles. The images were obtained at six centers (2005-2020). The algorithm measured Cobb angles on hold-out internal (n = 460) and external (n = 161) test sets with less than 2° error (intraclass correlation coefficient, 0.96) compared with ground truth measurements by two experienced radiologists. Measurements, produced in less than 0.5 second, did not differ significantly (P = .05 cutoff) from ground truth measurements, regardless of the presence or absence of surgical hardware (P = .80), age (P = .58), sex (P = .83), body mass index (P = .63), scoliosis severity (P = .44), or image type (low-dose stereoradiographic image vs radiograph; P = .51). These findings suggest that the algorithm is highly robust across different clinical characteristics. Given its automated, rapid, and accurate measurements, this network may be used for monitoring scoliosis progression in patients.

Keywords: Cobb Angle, Convolutional Neural Network, Deep Learning Algorithms, Pediatrics, Machine Learning Algorithms, Scoliosis, Spine

Supplemental material is available for this article. © RSNA, 2023.
Citations: 0
AI Reporting Guidelines: How to Select the Best One for Your Research.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-05-01 DOI: 10.1148/ryai.230055
Michail E Klontzas, Anthony A Gatti, Ali S Tejani, Charles E Kahn
Citations: 8
Augmentation of the RSNA Pulmonary Embolism CT Dataset with Bounding Box Annotations and Anatomic Localization of Pulmonary Emboli.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-05-01 DOI: 10.1148/ryai.230001
Matias F Callejas, Hui Ming Lin, Thomas Howard, Matthew Aitken, Marc Napoleone, Laura Jimenez-Juan, Robert Moreland, Shobhit Mathur, Djeven P Deva, Errol Colak
Keywords: CT, Pulmonary Arteries, Embolism/Thrombosis, Feature Detection

Supplemental material is available for this article. © RSNA, 2023.
Citations: 1
Transformer-based Deep Neural Network for Breast Cancer Classification on Digital Breast Tomosynthesis Images.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2023-05-01 DOI: 10.1148/ryai.220159
Weonsuk Lee, Hyeonsoo Lee, Hyunjae Lee, Eun Kyung Park, Hyeonseob Nam, Thijs Kooi
Purpose: To develop an efficient deep neural network model that incorporates context from neighboring image sections to detect breast cancer on digital breast tomosynthesis (DBT) images.

Materials and Methods: The authors adopted a transformer architecture that analyzes neighboring sections of the DBT stack. The proposed method was compared with two baselines: an architecture based on three-dimensional (3D) convolutions and a two-dimensional model that analyzes each section individually. The models were trained with 5174 four-view DBT studies, validated with 1000 four-view DBT studies, and tested on 655 four-view DBT studies, which were retrospectively collected from nine institutions in the United States through an external entity. Methods were compared using area under the receiver operating characteristic curve (AUC), sensitivity at a fixed specificity, and specificity at a fixed sensitivity.

Results: On the test set of 655 DBT studies, both 3D models showed higher classification performance than did the per-section baseline model. The proposed transformer-based model showed a significant increase in AUC (0.88 vs 0.91, P = .002), sensitivity (81.0% vs 87.7%, P = .006), and specificity (80.5% vs 86.4%, P < .001) at clinically relevant operating points when compared with the single-DBT-section baseline. The transformer-based model used only 25% of the number of floating-point operations per second used by the 3D convolution model while demonstrating similar classification performance.

Conclusion: A transformer-based deep neural network using data from neighboring sections improved breast cancer classification performance compared with a per-section baseline model and was more efficient than a model using 3D convolutions.

Keywords: Breast, Tomosynthesis, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN), Digital Breast Tomosynthesis, Breast Cancer, Deep Neural Networks, Transformers

Supplemental material is available for this article. © RSNA, 2023.
Citations: 1