Radiology-Artificial Intelligence: Latest Articles

Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.230555
José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou
{"title":"Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI.","authors":"José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou","doi":"10.1148/ryai.230555","DOIUrl":"10.1148/ryai.230555","url":null,"abstract":"<p><p>Purpose To assess the effect of scanner manufacturer and scanning protocol on the performance of deep learning models to classify aggressiveness of prostate cancer (PCa) at biparametric MRI (bpMRI). Materials and Methods In this retrospective study, 5478 cases from ProstateNet, a PCa bpMRI dataset with examinations from 13 centers, were used to develop five deep learning (DL) models to predict PCa aggressiveness with minimal lesion information and test how using data from different subgroups-scanner manufacturers and endorectal coil (ERC) use (Siemens, Philips, GE with and without ERC, and the full dataset)-affects model performance. Performance was assessed using the area under the receiver operating characteristic curve (AUC). The effect of clinical features (age, prostate-specific antigen level, Prostate Imaging Reporting and Data System score) on model performance was also evaluated. Results DL models were trained on 4328 bpMRI cases, and the best model achieved an AUC of 0.73 when trained and tested using data from all manufacturers. Held-out test set performance was higher when models trained with data from a manufacturer were tested on the same manufacturer (within- and between-manufacturer AUC differences of 0.05 on average, <i>P</i> < .001). The addition of clinical features did not improve performance (<i>P</i> = .24). Learning curve analyses showed that performance remained stable as training data increased. Analysis of DL features showed that scanner manufacturer and scanning protocol heavily influenced feature distributions. Conclusion In automated classification of PCa aggressiveness using bpMRI data, scanner manufacturer and ERC use had a major effect on DL model performance and features. <b>Keywords:</b> Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD), Computer Applications-General (Informatics), Oncology <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also commentary by Suri and Hsu in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230555"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning-based Aligned Strain from Cine Cardiac MRI for Detection of Fibrotic Myocardial Tissue in Patients with Duchenne Muscular Dystrophy.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.240303
Sven Koehler, Julian Kuhm, Tyler Huffaker, Daniel Young, Animesh Tandon, Florian André, Norbert Frey, Gerald Greil, Tarique Hussain, Sandy Engelhardt
{"title":"Deep Learning-based Aligned Strain from Cine Cardiac MRI for Detection of Fibrotic Myocardial Tissue in Patients with Duchenne Muscular Dystrophy.","authors":"Sven Koehler, Julian Kuhm, Tyler Huffaker, Daniel Young, Animesh Tandon, Florian André, Norbert Frey, Gerald Greil, Tarique Hussain, Sandy Engelhardt","doi":"10.1148/ryai.240303","DOIUrl":"10.1148/ryai.240303","url":null,"abstract":"<p><p>Purpose To develop a deep learning (DL) model that derives aligned strain values from cine (noncontrast) cardiac MRI and evaluate performance of these values to predict myocardial fibrosis in patients with Duchenne muscular dystrophy (DMD). Materials and Methods This retrospective study included 139 male patients with DMD who underwent cardiac MRI at a single center between February 2018 and April 2023. A DL pipeline was developed to detect five key frames throughout the cardiac cycle and respective dense deformation fields, allowing for phase-specific strain analysis across patients and from one key frame to the next. Effectiveness of these strain values in identifying abnormal deformations associated with fibrotic segments was evaluated in 57 patients (mean age [± SD], 15.2 years ± 3.1), and reproducibility was assessed in 82 patients by comparing the study method with existing feature-tracking and DL-based methods. Statistical analysis compared strain values using <i>t</i> tests, mixed models, and more than 2000 machine learning models; accuracy, F1 score, sensitivity, and specificity are reported. Results DL-based aligned strain identified five times more differences (29 vs five; <i>P</i> < .01) between fibrotic and nonfibrotic segments compared with traditional strain values and identified abnormal diastolic deformation patterns often missed with traditional methods. In addition, aligned strain values enhanced performance of predictive models for myocardial fibrosis detection, improving specificity by 40%, overall accuracy by 17%, and accuracy in patients with preserved ejection fraction by 61%. Conclusion The proposed aligned strain technique enables motion-based detection of myocardial dysfunction at noncontrast cardiac MRI, facilitating detailed interpatient strain analysis and allowing precise tracking of disease progression in DMD. <b>Keywords:</b> Pediatrics, Image Postprocessing, Heart, Cardiac, Convolutional Neural Network (CNN) Duchenne Muscular Dystrophy <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240303"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127955/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143504686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Natural Language Processing for Everyone.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.250218
Quirin D Strotzer
{"title":"Natural Language Processing for Everyone.","authors":"Quirin D Strotzer","doi":"10.1148/ryai.250218","DOIUrl":"10.1148/ryai.250218","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250218"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144053089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Open-Weight Language Models and Retrieval-Augmented Generation for Automated Structured Data Extraction from Diagnostic Reports: Assessment of Approaches and Parameters.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.240551
Mohamed Sobhi Jabal, Pranav Warman, Jikai Zhang, Kartikeye Gupta, Ayush Jain, Maciej Mazurowski, Walter Wiggins, Kirti Magudia, Evan Calabrese
{"title":"Open-Weight Language Models and Retrieval-Augmented Generation for Automated Structured Data Extraction from Diagnostic Reports: Assessment of Approaches and Parameters.","authors":"Mohamed Sobhi Jabal, Pranav Warman, Jikai Zhang, Kartikeye Gupta, Ayush Jain, Maciej Mazurowski, Walter Wiggins, Kirti Magudia, Evan Calabrese","doi":"10.1148/ryai.240551","DOIUrl":"10.1148/ryai.240551","url":null,"abstract":"<p><p>Purpose To develop and evaluate an automated system for extracting structured clinical information from unstructured radiology and pathology reports using open-weight language models (LMs) and retrieval-augmented generation (RAG) and to assess the effects of model configuration variables on extraction performance. Materials and Methods This retrospective study used two datasets: 7294 radiology reports annotated for Brain Tumor Reporting and Data System (BT-RADS) scores and 2154 pathology reports annotated for <i>IDH</i> mutation status (January 2017-July 2021). An automated pipeline was developed to benchmark the performance of various LMs and RAG configurations for accuracy of structured data extraction from reports. The effect of model size, quantization, prompting strategies, output formatting, and inference parameters on model accuracy was systematically evaluated. Results The best-performing models achieved up to 98% accuracy in extracting BT-RADS scores from radiology reports and greater than 90% accuracy for extraction of <i>IDH</i> mutation status from pathology reports. The best model was medical fine-tuned Llama 3. Larger, newer, and domain fine-tuned models consistently outperformed older and smaller models (mean accuracy, 86% vs 75%; <i>P</i> < .001). Model quantization had minimal effect on performance. Few-shot prompting significantly improved accuracy (mean [±SD] increase, 32% ± 32; <i>P</i> = .02). RAG improved performance for complex pathology reports by a mean of 48% ± 11 (<i>P</i> = .001) but not for shorter radiology reports (-8% ± 31; <i>P</i> = .39). Conclusion This study demonstrates the potential of open LMs in automated extraction of structured clinical data from unstructured clinical reports with local privacy-preserving application. Careful model selection, prompt engineering, and semiautomated optimization using annotated data are critical for optimal performance. <b>Keywords:</b> Large Language Models, Retrieval-Augmented Generation, Radiology, Pathology, Health Care Reports <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Tejani and Rauschecker in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240551"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143606547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unsupervised Deep Learning for Blood-Brain Barrier Leakage Detection in Diffuse Glioma Using Dynamic Contrast-enhanced MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.240507
Joon Jang, Kyu Sung Choi, Junhyeok Lee, Hyochul Lee, Inpyeong Hwang, Jung Hyun Park, Jin Wook Chung, Seung Hong Choi, Hyeonjin Kim
{"title":"Unsupervised Deep Learning for Blood-Brain Barrier Leakage Detection in Diffuse Glioma Using Dynamic Contrast-enhanced MRI.","authors":"Joon Jang, Kyu Sung Choi, Junhyeok Lee, Hyochul Lee, Inpyeong Hwang, Jung Hyun Park, Jin Wook Chung, Seung Hong Choi, Hyeonjin Kim","doi":"10.1148/ryai.240507","DOIUrl":"10.1148/ryai.240507","url":null,"abstract":"<p><p>Purpose To develop an unsupervised deep learning framework for generalizable blood-brain barrier leakage detection using dynamic contrast-enhanced MRI, without requiring pharmacokinetic models and arterial input function estimation. Materials and Methods This retrospective study included data from patients who underwent dynamic contrast-enhanced MRI between April 2010 and December 2020. An autoencoder-based anomaly detection approach identified one-dimensional voxel-wise time-series abnormal signals through reconstruction residuals, separating them into residual leakage signals (RLSs) and residual vascular signals. The RLS maps were evaluated and compared with the volume transfer constant (<i>K</i><sup>trans</sup>) using the structural similarity index and correlation coefficient. Generalizability was tested on subsampled data, and isocitrate dehydrogenase (<i>IDH</i>) status classification performance was assessed using area under the receiver operating characteristic curve (AUC). Results A total of 274 patients (mean age, 54.4 years ± 14.6 [SD]; 164 male) were included in the study. RLS showed high structural similarity (structural similarity index, 0.91 ± 0.02) and correlation (<i>r</i> = 0.56; <i>P</i> < .001) with <i>K</i><sup>trans</sup>. On subsampled data, RLS maps showed better correlation with RLS values from the original data (0.89 vs 0.72; <i>P</i> < .001), higher peak signal-to-noise ratio (33.09 dB vs 28.94 dB; <i>P</i> < .001), and higher structural similarity index (0.92 vs 0.87; <i>P</i> < .001) compared with <i>K</i><sup>trans</sup> maps. RLS maps also outperformed <i>K</i><sup>trans</sup> maps in predicting <i>IDH</i> mutation status (AUC, 0.87 [95% CI: 0.83, 0.91] vs 0.81 [95% CI: 0.76, 0.85]; <i>P</i> = .02). Conclusion The unsupervised framework effectively detected blood-brain barrier leakage without pharmacokinetic models and arterial input function. <b>Keywords:</b> Dynamic Contrast-enhanced MRI, Unsupervised Learning, Feature Detection, Blood-Brain Barrier Leakage Detection <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Júdice de Mattos Farina and Kuriki in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240507"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143764378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development and Validation of a Sham-AI Model for Intracranial Aneurysm Detection at CT Angiography.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.240140
Zhao Shi, Bin Hu, Mengjie Lu, Manting Zhang, Haiting Yang, Bo He, Jiyao Ma, Chunfeng Hu, Li Lu, Sheng Li, Shiyu Ren, Yonggao Zhang, Jun Li, Mayidili Nijiati, Jiake Dong, Hao Wang, Zhen Zhou, Fandong Zhang, Chengwei Pan, Yizhou Yu, Zijian Chen, Chang Sheng Zhou, Yongyue Wei, Junlin Zhou, Long Jiang Zhang
{"title":"Development and Validation of a Sham-AI Model for Intracranial Aneurysm Detection at CT Angiography.","authors":"Zhao Shi, Bin Hu, Mengjie Lu, Manting Zhang, Haiting Yang, Bo He, Jiyao Ma, Chunfeng Hu, Li Lu, Sheng Li, Shiyu Ren, Yonggao Zhang, Jun Li, Mayidili Nijiati, Jiake Dong, Hao Wang, Zhen Zhou, Fandong Zhang, Chengwei Pan, Yizhou Yu, Zijian Chen, Chang Sheng Zhou, Yongyue Wei, Junlin Zhou, Long Jiang Zhang","doi":"10.1148/ryai.240140","DOIUrl":"10.1148/ryai.240140","url":null,"abstract":"<p><p>Purpose To evaluate a sham-artificial intelligence (AI) model acting as a placebo control for a standard-AI model for diagnosis of intracranial aneurysm. Materials and Methods This retrospective crossover, blinded, multireader, multicase study was conducted from November 2022 to March 2023. A sham-AI model with near-zero sensitivity and similar specificity to a standard AI model was developed using 16 422 CT angiography examinations. Digital subtraction angiography-verified CT angiographic examinations from four hospitals were collected, half of which were processed by standard AI and the others by sham AI to generate sequence A; sequence B was generated in the reverse order. Twenty-eight radiologists from seven hospitals were randomly assigned to either sequence and then assigned to the other sequence after a washout period. The diagnostic performances of radiologists alone, radiologists with standard-AI assistance, and radiologists with sham-AI assistance were compared using sensitivity and specificity, and radiologists' susceptibility to sham AI suggestions was assessed. Results The testing dataset included 300 patients (median age, 61.0 years [IQR, 52.0-67.0]; 199 male), 50 of whom had aneurysms. Standard AI and sham AI performed as expected (sensitivity, 96.0% vs 0.0%; specificity, 82.0% vs 76.0%). The differences in sensitivity and specificity between standard AI-assisted and sham AI-assisted readings were 20.7% (95% CI: 15.8, 25.5 [superiority]) and 0.0% (95% CI: -2.0, 2.0 [noninferiority]), respectively. The difference between sham AI-assisted readings and radiologists alone was -2.6% (95% CI: -3.8, -1.4 [noninferiority]) for both sensitivity and specificity. After sham-AI suggestions, 5.3% (44 of 823) of true-positive and 1.2% (seven of 577) of false-negative results of radiologists alone were changed. Conclusion Radiologists' diagnostic performance was not compromised when aided by the proposed sham-AI model compared with their unassisted performance. <b>Keywords:</b> CT Angiography, Vascular, Intracranial Aneurysm, Sham AI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also commentary by Mayfield and Romero in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240140"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143658885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial Intelligence Is Brittle: We Need to Do Better.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.250081
Abhinav Suri, William Hsu
{"title":"Artificial Intelligence Is Brittle: We Need to Do Better.","authors":"Abhinav Suri, William Hsu","doi":"10.1148/ryai.250081","DOIUrl":"10.1148/ryai.250081","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250081"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127952/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143812643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance of Two Deep Learning-based AI Models for Breast Cancer Detection and Localization on Screening Mammograms from BreastScreen Norway.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.240039
Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind
{"title":"Performance of Two Deep Learning-based AI Models for Breast Cancer Detection and Localization on Screening Mammograms from BreastScreen Norway.","authors":"Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind","doi":"10.1148/ryai.240039","DOIUrl":"10.1148/ryai.240039","url":null,"abstract":"<p><p>Purpose To evaluate cancer detection and marker placement accuracy of two artificial intelligence (AI) models developed for interpretation of screening mammograms. Materials and Methods This retrospective study included data from 129 434 screening examinations (all female patients; mean age, 59.2 years ± 5.8 [SD]) performed between January 2008 and December 2018 in BreastScreen Norway. Model A was commercially available and model B was an in-house model. Area under the receiver operating characteristic curve (AUC) with 95% CIs were calculated. The study defined 3.2% and 11.1% of the examinations with the highest AI scores as positive, threshold 1 and 2, respectively. A radiologic review assessed location of AI markings and classified interval cancers as true or false negative. Results The AUC value was 0.93 (95% CI: 0.92, 0.94) for model A and B when including screen-detected and interval cancers. Model A identified 82.5% (611 of 741) of the screen-detected cancers at threshold 1 and 92.4% (685 of 741) at threshold 2. Model B identified 81.8% (606 of 741) at threshold 1 and 93.7% (694 of 741) at threshold 2. The AI markings were correctly localized for all screen-detected cancers identified by both models and 82% (56 of 68) of the interval cancers for model A and 79% (54 of 68) for model B. At the review, 21.6% (45 of 208) of the interval cancers were identified at the preceding screening by either or both models, correctly localized and classified as false negative (<i>n</i> = 17) or with minimal signs of malignancy (<i>n</i> = 28). Conclusion Both AI models showed promising performance for cancer detection on screening mammograms. The AI markings corresponded well to the true cancer locations. <b>Keywords:</b> Breast, Mammography, Screening, Computed-aided Diagnosis <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Cadrin-Chênevert in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240039"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143190743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
One System to Rule Them All? Task- and Data-specific Considerations for Automated Data Extraction.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.250175
Ali S Tejani, Andreas M Rauschecker
{"title":"One System to Rule Them All? Task- and Data-specific Considerations for Automated Data Extraction.","authors":"Ali S Tejani, Andreas M Rauschecker","doi":"10.1148/ryai.250175","DOIUrl":"10.1148/ryai.250175","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250175"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144018982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Seeing the Unseen: How Unsupervised Learning Can Predict Genetic Mutations from Radiologic Images.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.250243
Eduardo Moreno Júdice de Mattos Farina, Paulo Eduardo de Aguiar Kuriki
{"title":"Seeing the Unseen: How Unsupervised Learning Can Predict Genetic Mutations from Radiologic Images.","authors":"Eduardo Moreno Júdice de Mattos Farina, Paulo Eduardo de Aguiar Kuriki","doi":"10.1148/ryai.250243","DOIUrl":"10.1148/ryai.250243","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 3","pages":"e250243"},"PeriodicalIF":8.1,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144001837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0