Radiology: Artificial Intelligence — Latest Articles

A Deep Learning Pipeline for Assessing Ventricular Volumes from a Cardiac MRI Registry of Patients with Single Ventricle Physiology.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230132
Tina Yao, Nicole St Clair, Gabriel F Miller, Adam L Dorfman, Mark A Fogel, Sunil Ghelani, Rajesh Krishnamurthy, Christopher Z Lam, Michael Quail, Joshua D Robinson, David Schidlow, Timothy C Slesnick, Justin Weigand, Jennifer A Steeden, Rahul H Rathod, Vivek Muthurangu
Abstract: Purpose To develop an end-to-end deep learning (DL) pipeline for automated ventricular segmentation of cardiac MRI data from a multicenter registry of patients with Fontan circulation (Fontan Outcomes Registry Using CMR Examinations [FORCE]). Materials and Methods This retrospective study used 250 cardiac MRI examinations (November 2007–December 2022) from 13 institutions for training, validation, and testing. The pipeline contained three DL models: a classifier to identify short-axis cine stacks and two U-Net 3+ models for image cropping and segmentation. The automated segmentations were evaluated on the test set (n = 50) by using the Dice score. Volumetric and functional metrics derived from DL and ground truth manual segmentations were compared using Bland-Altman and intraclass correlation analysis. The pipeline was further qualitatively evaluated on 475 unseen examinations. Results There were acceptable limits of agreement (LOA) and minimal biases between the ground truth and DL end-diastolic volume (EDV) (bias: -0.6 mL/m², LOA: -20.6 to 19.5 mL/m²) and end-systolic volume (ESV) (bias: -1.1 mL/m², LOA: -18.1 to 15.9 mL/m²), with high intraclass correlation coefficients (ICCs > 0.97) and Dice scores (EDV, 0.91 and ESV, 0.86). There was moderate agreement for ventricular mass (bias: -1.9 g/m², LOA: -17.3 to 13.5 g/m²) and an ICC of 0.94. There was also acceptable agreement for stroke volume (bias: 0.6 mL/m², LOA: -17.2 to 18.3 mL/m²) and ejection fraction (bias: 0.6%, LOA: -12.2% to 13.4%), with high ICCs (>0.81). The pipeline achieved satisfactory segmentation in 68% of the 475 unseen examinations, while 26% needed minor adjustments, 5% needed major adjustments, and in 0.4%, the cropping model failed. Conclusion The DL pipeline can provide fast standardized segmentation for patients with single ventricle physiology across multiple centers. This pipeline can be applied to all cardiac MRI examinations in the FORCE registry. Keywords: Cardiac, Adults and Pediatrics, MR Imaging, Congenital, Volume Analysis, Segmentation, Quantification. Supplemental material is available for this article. © RSNA, 2023.
Citations: 0
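The two agreement metrics this study reports — Dice overlap between automated and manual masks, and Bland-Altman bias with 95% limits of agreement for the derived volumes — take only a few lines of NumPy. This is a generic sketch; the function names are illustrative and not from the paper's pipeline:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary segmentation masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def bland_altman(auto, manual):
    """Bias and 95% limits of agreement between paired measurements
    (e.g., DL-derived vs manually derived EDV in mL/m2)."""
    diff = np.asarray(auto, float) - np.asarray(manual, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Identical masks give a Dice score of 1.0; a constant offset between the paired measurements shows up entirely in the bias term, with limits of agreement collapsing onto it.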
Performance of the Winning Algorithms of the RSNA 2022 Cervical Spine Fracture Detection Challenge.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230256
Ghee Rye Lee, Adam E Flanders, Tyler Richards, Felipe Kitamura, Errol Colak, Hui Ming Lin, Robyn L Ball, Jason Talbott, Luciano M Prevedello
Abstract: Purpose To evaluate and report the performance of the winning algorithms of the Radiological Society of North America Cervical Spine Fracture AI Challenge. Materials and Methods The competition was open to the public on Kaggle from July 28 to October 27, 2022. A sample of 3112 CT scans with and without cervical spine fractures (CSFx) was assembled from multiple sites (12 institutions across six continents) and prepared for the competition. The test set had 1093 scans (private test set: n = 789; mean age, 53.40 years ± 22.86 [SD]; 509 males; public test set: n = 304; mean age, 52.51 years ± 20.73; 189 males) and 847 fractures. The eight top-performing artificial intelligence (AI) algorithms were retrospectively evaluated, and the area under the receiver operating characteristic curve (AUC) value, F1 score, sensitivity, and specificity were calculated. Results A total of 1108 contestants composing 883 teams worldwide participated in the competition. The top eight AI models showed high performance, with a mean AUC value of 0.96 (95% CI: 0.95, 0.96), mean F1 score of 90% (95% CI: 90%, 91%), mean sensitivity of 88% (95% CI: 86%, 90%), and mean specificity of 94% (95% CI: 93%, 96%). The highest values reported for previous models were an AUC of 0.85, F1 score of 81%, sensitivity of 76%, and specificity of 97%. Conclusion The competition successfully facilitated the development of AI models that could detect and localize CSFx on CT scans with high performance outcomes, which appear to exceed known values of previously reported models. Further study is needed to evaluate the generalizability of these models in a clinical environment. Keywords: Cervical Spine, Fracture Detection, Machine Learning, Artificial Intelligence Algorithms, CT, Head/Neck. Supplemental material is available for this article. © RSNA, 2024.
Citations: 0
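The four evaluation metrics used in this challenge (AUC, F1 score, sensitivity, specificity) can be reproduced from raw scores and binary predictions without any ML library. A minimal NumPy sketch — the AUC uses the rank-based Mann-Whitney formulation, with ties sharing their average rank:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney U statistic (tie-aware average ranks)."""
    y_true = np.asarray(y_true, bool)
    y_score = np.asarray(y_score, float)
    n_pos, n_neg = y_true.sum(), (~y_true).sum()
    order = np.argsort(y_score)
    srt = y_score[order]
    ranks = np.empty(len(srt), float)
    i = 0
    while i < len(srt):
        j = i
        while j + 1 < len(srt) and srt[j + 1] == srt[i]:
            j += 1  # extend over a run of tied scores
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # shared average rank
        i = j + 1
    return (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, and F1 from binary predictions."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp), 2 * tp / (2 * tp + fp + fn)
```

With one misranked positive-negative pair out of four, the AUC drops to 0.75, matching the pairwise-comparison definition.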
Accuracy of Radiomics in Predicting IDH Mutation Status in Diffuse Gliomas: A Bivariate Meta-Analysis.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.220257
Gianfranco Di Salle, Lorenzo Tumminello, Maria Elena Laino, Sherif Shalaby, Gayane Aghakhanyan, Salvatore Claudio Fanni, Maria Febi, Jorge Eduardo Shortrede, Mario Miccoli, Lorenzo Faggioni, Mirco Cosottini, Emanuele Neri
Abstract: Purpose To perform a systematic review and meta-analysis assessing the predictive accuracy of radiomics in the noninvasive determination of isocitrate dehydrogenase (IDH) status in grade 4 and lower-grade diffuse gliomas. Materials and Methods A systematic search was performed in the PubMed, Scopus, Embase, Web of Science, and Cochrane Library databases for relevant articles published between January 1, 2010, and July 7, 2021. Pooled sensitivity and specificity across studies were estimated. Risk of bias was evaluated using Quality Assessment of Diagnostic Accuracy Studies-2, and methods were evaluated using the radiomics quality score (RQS). Additional subgroup analyses were performed according to tumor grade, RQS, and number of sequences used (PROSPERO ID: CRD42021268958). Results Twenty-six studies including 3280 patients were analyzed. The pooled sensitivity and specificity of radiomics for the detection of IDH mutation were 79% (95% CI: 76, 83) and 80% (95% CI: 76, 83), respectively. Low RQS values were found overall for the included works. Subgroup analyses showed lower false-positive rates in very low RQS studies (RQS < 6) (meta-regression, z = -1.9; P = .02) compared with adequate RQS studies. No substantial differences were found in pooled sensitivity and specificity for the pure grade 4 gliomas group compared with the all-grade gliomas group (81% and 86% vs 79% and 79%, respectively) and for studies using single versus multiple sequences (80% and 77% vs 79% and 82%, respectively). Conclusion The pooled data showed that radiomics achieved good accuracy performance in distinguishing IDH mutation status in patients with grade 4 and lower-grade diffuse gliomas. The overall methodologic quality (RQS) was low and introduced potential bias. Keywords: Neuro-Oncology, Radiomics, Integration, Application Domain, Glioblastoma, IDH Mutation, Radiomics Quality Scoring. Supplemental material is available for this article. Published under a CC BY 4.0 license.
Citations: 0
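To see what "pooled sensitivity across studies" means mechanically, here is a deliberately simplified sketch: fixed-effect inverse-variance pooling of per-study proportions on the logit scale, with a standard 0.5 continuity correction. The paper itself used a bivariate random-effects model, which jointly models sensitivity and specificity and is considerably more involved; this toy version only illustrates the pooling idea:

```python
import numpy as np

def pool_proportions(events, totals):
    """Fixed-effect inverse-variance pooling of proportions (e.g.,
    per-study true positives / diseased cases) on the logit scale.
    Simplified stand-in for the bivariate random-effects model."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)          # continuity-corrected
    logit = np.log(p / (1 - p))
    var = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
    w = 1.0 / var                                 # inverse-variance weights
    pooled_logit = (w * logit).sum() / w.sum()
    return 1.0 / (1.0 + np.exp(-pooled_logit))    # back to proportion scale
```

Two identical studies pool back to their (continuity-corrected) common proportion, as expected; heterogeneous studies are weighted toward the more precise ones.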
The LLM Will See You Now: Performance of ChatGPT on the Brazilian Radiology and Diagnostic Imaging and Mammography Board Examinations.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230568
Hari Trivedi, Judy Wawira Gichoya
(No abstract available.)
Citations: 0
The Scottish Medical Imaging Archive: A Unique Resource for Imaging-related Research.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230466
Gary J Whitman, David J Vining
(No abstract available.)
Citations: 0
Weak Supervision, Strong Results: Achieving High Performance in Intracranial Hemorrhage Detection with Fewer Annotation Labels.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230598
Kareem A Wahid, David Fuentes
(No abstract available.)
Citations: 0
Sharing Data Is Essential for the Future of AI in Medical Imaging.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230337
Laura C Bell, Efrat Shimron
(No abstract available.)
Citations: 0
Examination-Level Supervision for Deep Learning-based Intracranial Hemorrhage Detection on Head CT Scans.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230159
Jacopo Teneggi, Paul H Yi, Jeremias Sulam
Abstract: Purpose To compare the effectiveness of weak supervision (ie, with examination-level labels only) and strong supervision (ie, with image-level labels) in training deep learning models for detection of intracranial hemorrhage (ICH) on head CT scans. Materials and Methods In this retrospective study, an attention-based convolutional neural network was trained with either local (ie, image-level) or global (ie, examination-level) binary labels on the Radiological Society of North America (RSNA) 2019 Brain CT Hemorrhage Challenge dataset of 21 736 examinations (8876 [40.8%] ICH) and 752 422 images (107 784 [14.3%] ICH). The CQ500 (436 examinations; 212 [48.6%] ICH) and CT-ICH (75 examinations; 36 [48.0%] ICH) datasets were employed for external testing. Performance in detecting ICH was compared between weak (examination-level labels) and strong (image-level labels) learners as a function of the number of labels available during training. Results On examination-level binary classification, strong and weak learners did not have different area under the receiver operating characteristic curve values on the internal validation split (0.96 vs 0.96; P = .64) and the CQ500 dataset (0.90 vs 0.92; P = .15). Weak learners outperformed strong ones on the CT-ICH dataset (0.95 vs 0.92; P = .03). Weak learners had better section-level ICH detection performance when more than 10 000 labels were available for training (average F1 score, 0.73 vs 0.65; P < .001). Weakly supervised models trained on the entire RSNA dataset required 35 times fewer labels than equivalent strong learners. Conclusion Strongly supervised models did not achieve better performance than weakly supervised ones, which could reduce radiologist labor requirements for prospective dataset curation. Keywords: CT, Head/Neck, Brain/Brain Stem, Hemorrhage. Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Wahid and Fuentes in this issue.
Citations: 0
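The core of the weak-supervision setup above is that only one binary label per examination is needed: an examination is positive if any of its images shows hemorrhage, and per-image scores must be aggregated into one examination-level score. A minimal sketch of both steps, using max-pooling as the aggregator (the paper used a learned attention mechanism, which this simplifies; names are illustrative):

```python
import numpy as np

def exam_labels_from_image_labels(image_labels, exam_ids):
    """Derive weak (examination-level) labels from image-level ones:
    an examination is positive if any of its images is positive."""
    labels = {}
    for lab, eid in zip(image_labels, exam_ids):
        labels[eid] = max(labels.get(eid, 0), int(lab))
    return labels

def exam_level_score(image_scores):
    """Aggregate per-image ICH probabilities into one examination-level
    score via max-pooling (a simple stand-in for learned attention)."""
    return float(np.max(image_scores))
```

Training against these pooled labels is what lets the weak learner skip per-image annotation entirely, which is where the 35-fold label saving reported above comes from.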
Deep Learning-based Identification of Brain MRI Sequences Using a Model Trained on Large Multicentric Study Cohorts.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230095
Mustafa Ahmed Mahmutoglu, Chandrakanth Jayachandran Preetha, Hagen Meredig, Joerg-Christian Tonn, Michael Weller, Wolfgang Wick, Martin Bendszus, Gianluca Brugnara, Philipp Vollmuth
Abstract: Purpose To develop a fully automated device- and sequence-independent convolutional neural network (CNN) for reliable and high-throughput labeling of heterogeneous, unstructured MRI data. Materials and Methods Retrospective, multicentric brain MRI data (2179 patients with glioblastoma, 8544 examinations, 63 327 sequences) from 249 hospitals and 29 scanner types were used to develop a network based on ResNet-18 architecture to differentiate nine MRI sequence types, including T1-weighted, postcontrast T1-weighted, T2-weighted, fluid-attenuated inversion recovery, susceptibility-weighted, apparent diffusion coefficient, diffusion-weighted (low and high b value), and gradient-recalled echo T2*-weighted and dynamic susceptibility contrast-related images. The two-dimensional midsection images from each sequence were allocated to training or validation (approximately 80%) and testing (approximately 20%) using a stratified split to ensure balanced groups across institutions, patients, and MRI sequence types. The prediction accuracy was quantified for each sequence type, and subgroup comparison of model performance was performed using χ² tests. Results On the test set, the overall accuracy of the CNN (ResNet-18) ensemble model among all sequence types was 97.9% (95% CI: 97.6, 98.1), ranging from 84.2% for susceptibility-weighted images (95% CI: 81.8, 86.6) to 99.8% for T2-weighted images (95% CI: 99.7, 99.9). The ResNet-18 model achieved significantly better accuracy compared with ResNet-50 despite its simpler architecture (97.9% vs 97.1%; P ≤ .001). The accuracy of the ResNet-18 model was not affected by the presence versus absence of tumor on the two-dimensional midsection images for any sequence type (P > .05). Conclusion The developed CNN (www.github.com/neuroAI-HD/HD-SEQ-ID) reliably differentiates nine types of MRI sequences within multicenter and large-scale population neuroimaging data and may enhance the speed, accuracy, and efficiency of clinical and research neuroradiologic workflows. Keywords: MR-Imaging, Neural Networks, CNS, Brain/Brain Stem, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. Supplemental material is available for this article. © RSNA, 2023.
Citations: 0
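The stratified ~80/20 split described above — keeping each stratum (institution, patient, sequence type) represented at roughly the same rate in train and test — can be sketched with the standard library alone. This is a generic illustration of the technique, not the authors' allocation code:

```python
import random
from collections import defaultdict

def stratified_split(items, key, test_frac=0.2, seed=0):
    """Split items into (train, test), stratified by key(item) so every
    stratum contributes roughly test_frac of its members to the test set."""
    rng = random.Random(seed)  # fixed seed for a reproducible allocation
    groups = defaultdict(list)
    for it in items:
        groups[key(it)].append(it)
    train, test = [], []
    for members in groups.values():
        rng.shuffle(members)
        n_test = max(1, round(test_frac * len(members)))
        test.extend(members[:n_test])
        train.extend(members[n_test:])
    return train, test
```

Stratifying by sequence type this way keeps rare classes (such as susceptibility-weighted images) from vanishing from the test set, which a purely random split can do.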
Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-01-01 DOI: 10.1148/ryai.230513
Adrian P Brady, Bibb Allen, Jaron Chong, Elmar Kotter, Nina Kottler, John Mongan, Lauren Oakden-Rayner, Daniel Pinto Dos Santos, An Tang, Christoph Wald, John Slavotinek
Abstract: Artificial intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of radiology societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning. Published under a CC BY 4.0 license. © The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.
Citations: 0