Radiology-Artificial Intelligence: Latest Articles

Development of an Integrated Deep Learning Approach for Detecting Fetal Brain Abnormalities in Routine Second Trimester Ultrasound Scan: A Multicenter Study.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-04-08 DOI: 10.1148/ryai.250737
Ruben Ramirez Zegarra, Alessandra Familiari, Andrea Dall'Asta, Chiara Di Ilio, Beatrice Valentini, Tiziana Fanelli, Paolo Volpe, Monica Minopoli, Basky Thilaganathan, Edwin Quarello, Ricciarda Raffaelli, Julia Binder, Veronica Falcone, Gianpaolo Grisolia, Giuseppe Rizzo, Gianluca Gragnaniello, Huong E Tran, Luca Boldrini, Tullio Ghi
Abstract
Purpose: To develop and validate an anatomy-aware, two-stage, end-to-end deep learning (DL) pipeline for automated detection of fetal brain abnormalities on standardized second-trimester brain US images.
Materials and Methods: This retrospective multicenter study included 319 fetal brain images (218 normal, 101 abnormal) between 19+0 and 23+6 weeks of gestation from nine international fetal medicine centers, each with paired standard transventricular and transcerebellar axial plane images acquired during second-trimester US between January 2010 and December 2022. Abnormalities were confirmed by neonatal imaging or autopsy. Images were annotated for six key brain regions by two experienced fetal medicine specialists. An anatomy-aware, two-stage DL pipeline was developed, consisting of a YOLOv5-based object detector followed by a classification network using a Mini-ResNet feature extractor within a HexaNet architecture. The pipeline classified each image as normal or abnormal. Object detection performance was evaluated using mean average precision at an intersection-over-union threshold of 0.5 (mAP@0.5). Classification performance was assessed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and F1 score.
Results: The object detection model achieved a mAP@0.5 of 0.93 (95% CI: 0.90, 0.96) on the test dataset. The classification model achieved an AUC of 0.96 (95% CI: 0.90, 1.00), a sensitivity of 87% (95% CI: 67, 100) [13/15], a specificity of 91% (95% CI: 79, 100) [29/32], and an F1 score of 0.84 (95% CI: 0.67, 0.96) for distinguishing normal from abnormal fetal brain images.
Conclusion: The developed model achieved high diagnostic performance for the detection of brain anomalies in routine fetal second-trimester US. © RSNA, 2026. (e250737)
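The detector above is scored with mAP@0.5, meaning a predicted box counts as a true positive only when its intersection-over-union (IoU) with the ground-truth box is at least 0.5. As a minimal sketch of that overlap criterion (box coordinates and values below are illustrative, not from the study):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# At mAP@0.5, this detection would NOT count as correct (IoU ≈ 0.33 < 0.5)
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```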
Citations: 0
Teaching MRI to Read the Report: Text-Image Alignment for Brain MRI at Scale.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.250962
Satyam Ghodasara
Radiology-Artificial Intelligence, 8(2): e250962. (No abstract available.)
Citations: 0
Interpretable Machine Learning Model Using Digitized US Features for Classifying Complex Thyroid Nodules.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.250383
Zhuyao Li, Yu Yan, Xiang Li, Kezhou Wu, Xiubo Lu
Abstract
Purpose: To develop a digitized, integrated feature-based, interpretable machine learning classification model that accurately recognizes complex thyroid nodules while efficiently diagnosing conventional thyroid nodules (thyroid nodules with typical benign or malignant US features).
Materials and Methods: Thyroid US images depicting pathologically confirmed nodules were retrospectively collected from seven medical centers in China (January 2011 to December 2021). An interpretable classification model consisting of two independent masks, named "UltraMC," was developed. The front-end network was trained to identify conventional thyroid nodules using four digitized features, and the back-end network collected nodules classified as benign by the front end for secondary analysis to clarify their final diagnosis. UltraMC performance was evaluated using accuracy, sensitivity, specificity, and confusion matrices.
Results: The total dataset included 73 826 patients with thyroid US images (mean age, 45.56 years ± 11.21 [SD]; 54 398 female). Diagnostic accuracy of the front-end network for detecting conventional thyroid nodules was 92.9% (13 718 of 14 765), and accuracy of the back-end network for classifying mummified thyroid nodules (MTNs) was 88.5% (652 of 737). The overall diagnostic accuracy of UltraMC was 91.8% (14 228 of 15 502). The areas under the receiver operating characteristic curve of the front-end network and UltraMC in identifying conventional thyroid nodules were 0.98 (95% CI: 0.98, 0.98) and 0.96 (95% CI: 0.96, 0.97), respectively.
Conclusion: The proposed two-layer interpretable classification model achieved high diagnostic accuracy for both conventional and mummified thyroid nodules. These findings demonstrate that digitized US features integrated into a white-box framework can effectively support classification of complex thyroid nodules.
Keywords: Ultrasound, Head/Neck, Thyroid, Diagnosis, Convolutional Neural Network (CNN), K-Means, Random Forest, Thyroid Nodule, Interpretable, Digital, Mummified Thyroid Nodules
Supplemental material is available for this article. © RSNA, 2026. (e250383)
Citations: 0
Fetal Brain in B-Mode: AIRFRAME Model Takes on Posterior Fossa Malformation Detection.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.260042
Patricia Piazza Rafful
Radiology-Artificial Intelligence, 8(2): e260042. (No abstract available.)
Citations: 0
Editor's Recognition Awards.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.260178
Charles E Kahn
Radiology-Artificial Intelligence, 8(2): e260178. (No abstract available.)
Citations: 0
Development of a Deep Learning Algorithm for Posterior Fossa Abnormality Recognition on First-Trimester US Screening Scans: AIRFRAME Study Part 1.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.250394
Alessandra Familiari, Chiara Di Ilio, Andrea Dall'Asta, Enrico Corno, Ruben Ramirez Zegarra, Elvira Di Pasquo, Tiziana Fanelli, Monica Minopoli, Basky Thilaganathan, Carolina Scala, Federico Prefumo, Ricciarda Raffaelli, Alessandra Bovino, Edwin Quarello, Julia Binder, Veronica Falcone, Gianpaolo Grisolia, Jayshree Ramkrishna, Simon Meagher, Huong Elena Tran, Carlotta Bizzarri, Marica Vagni, Luca Boldrini, Paolo Volpe, Tullio Ghi
Abstract
Purpose: To develop a deep learning algorithm to automatically assess the posterior fossa on first-trimester US screening scans and identify open spina bifida (OSB) and cystic posterior fossa (CPF) anomalies.
Materials and Methods: This was the retrospective part of an international study involving 10 fetal medicine centers. Normal and abnormal (OSB, CPF anomaly) midsagittal fetal brain US images acquired between 11 and 14 weeks of gestation (July 2009 to January 2024), with diagnoses confirmed at follow-up, were evaluated. Images were manually annotated to delineate the posterior fossa. The dataset was split into a training/validation set (70%) and an internal test set (30%). Three convolutional neural networks were trained via threefold cross-validation on the training/validation set, with predictions on the internal test set obtained by ensemble averaging across folds. Model performance in detecting OSB and CPF anomalies was evaluated for the whole cohort and separately for fetuses with OSB or CPF anomalies.
Results: Images from 251 fetuses were analyzed (mean gestational age ± SD, 12.7 weeks ± 0.65; 150 normal and 101 abnormal [43 OSB, 58 CPF anomalies] images). On the internal test set, MobileNetV3 Large achieved the best performance: area under the receiver operating characteristic curve, 0.94 (95% CI: 0.88, 0.99); accuracy, 88% (67 of 76); recall, 81% (25 of 31); specificity, 93% (42 of 45); precision, 89% (25 of 28); negative predictive value, 88% (42 of 48); and F1 score, 0.85. OSB was classified more accurately (93% [52 of 56] vs 88% [57 of 65]) and with higher recall (91% [10 of 11] vs 75% [15 of 20]) than CPF anomalies, although the difference was not statistically significant (P = .38).
Conclusion: MobileNetV3 Large accurately assessed the fetal posterior fossa between 11 and 14 weeks of gestation, distinguishing normal images from those showing OSB or CPF anomalies. Clinical trial registration no. NCT0579047
Keywords: Artificial Intelligence, First Trimester Ultrasound Screening, Fetal Brain Anomalies, Deep Learning
Supplemental material is available for this article. © RSNA, 2026. See also the commentary by Rafful in this issue. (e250394)
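The AIRFRAME study obtains test-set predictions by ensemble averaging across the three cross-validation folds. A minimal sketch of what fold averaging means for a binary classifier (the probabilities and the 0.5 decision threshold below are illustrative assumptions, not the study's actual outputs):

```python
def ensemble_predict(fold_probs):
    """Average per-fold predicted probabilities for each case, then threshold.

    fold_probs: one list of predicted probabilities per trained fold,
    all of the same length (one entry per test case).
    """
    n_cases = len(fold_probs[0])
    mean = [sum(fold[i] for fold in fold_probs) / len(fold_probs)
            for i in range(n_cases)]
    labels = [int(p >= 0.5) for p in mean]  # 1 = abnormal, 0 = normal
    return labels, mean

# Three folds' predicted probabilities of "abnormal" for three test images
folds = [
    [0.9, 0.2, 0.6],
    [0.8, 0.1, 0.5],
    [0.7, 0.3, 0.7],
]
labels, probs = ensemble_predict(folds)
print(labels)  # [1, 0, 1]
```

Averaging the fold probabilities (rather than majority-voting the fold labels) keeps a continuous score per case, which is what allows computing an AUC on the ensembled predictions.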
Citations: 0
Agentic AI in Radiology: Evolution from Large Language Models to Future Clinical Integration.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.250651
Bardia Khosravi, Pouria Rouzrokh, Tugba Akinci D'Antonoli, Mana Moassefi, Shahriar Faghani, Aawez Mansuri, Keno Bressem, Ali Tejani, Judy Gichoya
Abstract
The introduction of foundation models, specifically large language models, has promised a health care transformation. However, the field is rapidly evolving toward autonomous agent systems, defined as artificial intelligence (AI) entities that perceive and react to their environment to achieve specific goals. This represents a paradigm shift from passive information retrieval to proactive, goal-oriented clinical assistance. Agentic AI systems transcend static knowledge limitations through core capabilities, including persistent memory systems that maintain context across patient encounters, knowledge retrieval tools connecting to medical repositories through retrieval-augmented generation techniques, and computer-use functionality enabling navigation of clinical software interfaces. Agentic workflows introduce sophisticated coordination mechanisms, including hierarchical, collaborative, and sequential patterns that demonstrate superior performance compared with single-agent approaches. Multiagent systems can autonomously coordinate clinical workflows across the entire radiology life cycle, from preacquisition protocol optimization through initial image analysis, specialized tool deployment, and preliminary report generation. However, successful clinical deployment requires systematic consideration of complexity thresholds, economic sustainability, cybersecurity frameworks, bias mitigation strategies, and appropriate governance structures. Critical challenges include managing the probabilistic nature of underlying models within deterministic clinical workflows, ensuring adequate human supervision, and preventing overcomplication of established processes. A structured four-phase implementation roadmap addresses these considerations through incremental progression from low-risk automation to comprehensive workflow orchestration while maintaining rigorous safety standards. As foundation models advance and interoperability standards mature, agentic AI will reshape radiology practice paradigms. Success depends on resolving stakeholder responsibility questions while orchestrating technological capabilities with clinical accountability, ensuring that autonomous systems augment rather than replace professional judgment in pursuit of improved patient outcomes.
Keywords: Informatics, Named Entity Recognition, Patient Scheduling/No-Show Prediction, Resource Allocation, Impact of AI on Education, Artificial Intelligence, Large Language Models, Agentic AI, Multi-Agent Systems, Radiology Workflow, Clinical Decision Support, Health Care Automation
© RSNA, 2026. (e250651)
Citations: 0
Turning Routine CT into Reference Values for Quantitative Radiology.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.251130
Yuan Chai, Yu Shi
Radiology-Artificial Intelligence, 8(2): e251130. (No abstract available.)
Citations: 0
Impact of Label Noise from Large Language Model-generated Annotations on Evaluation of Diagnostic Model Performance.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.250477
Mohammadreza Chavoshi, Hari Trivedi, Aawez Mansuri, Janice Newsome, Chiratidzo Rudado Sanyika, Rohan Satya Isaac, Frank Li, Theo Dapamede, Judy Wawira Gichoya
Abstract
Purpose: To systematically examine how label noise from large language model (LLM)-generated annotations affects real-world evaluation of AI binary classification model performance.
Materials and Methods: A simulation framework was developed to evaluate how LLM label errors affect estimated model performance. A synthetic dataset (10 000 cases) was generated across low-prevalence (10% and 30%) and high-prevalence (70% and 90%) conditions. LLM sensitivity and specificity values varied independently from 90% to 100%. AI binary classification models were simulated, with true performance ranging from 90% to 100% for sensitivity and specificity. Apparent performance was calculated with LLM-generated labels as the reference standard. Best- and worst-case performance bounds were calculated analytically, and empirical uncertainty distributions were obtained via Monte Carlo trials.
Results: Apparent performance was highly sensitive to LLM label quality, with estimation bias strongly modulated by disease prevalence. In low-prevalence settings, small reductions in LLM specificity substantially underestimated model sensitivity: at 10% prevalence, an LLM with 90% specificity yielded an apparent sensitivity of approximately 53% despite a perfect model. In high-prevalence conditions, reduced LLM sensitivity led to underestimation of model specificity: at 90% prevalence, lowering LLM sensitivity from 100% to 90% reduced apparent specificity from 100% to approximately 53%, despite perfect true specificity. Monte Carlo simulations revealed consistent downward bias, with apparent values often falling below true model performance even when within theoretical error bounds.
Conclusion: LLM-generated labels can introduce systematic, prevalence-dependent bias into model evaluation. In low-prevalence tasks, ensuring high LLM specificity during label extraction is critical, because false-positive labels disproportionately bias estimated sensitivity and lead to underestimation of model performance.
Keywords: Large Language Models, Report Labeling, Model Deployment, Diagnostic Performance, Observer Performance, Outcomes Analysis
Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Maiter and Zapaishchykova in this issue.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13019331/pdf/ (e250477)
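The prevalence-dependent bias described above can be reproduced with a short conditional-probability calculation. Assuming model errors and LLM labeling errors are independent (an assumption of this sketch, made for simplicity), apparent sensitivity is P(model positive | LLM label positive). The function name and parameters below are illustrative, not from the paper's code; the example recovers the reported ~53% apparent sensitivity for a perfect model at 10% prevalence and 90% LLM specificity:

```python
def apparent_sensitivity(prev, llm_sens, llm_spec, model_sens=1.0, model_spec=1.0):
    """Apparent sensitivity of a model scored against noisy LLM reference labels.

    Splits P(LLM+ and model+) over truly positive and truly negative cases,
    then divides by P(LLM+). Assumes independent model and LLM errors.
    """
    tp_agree = prev * llm_sens * model_sens                      # true positives: LLM+ and model+
    fp_agree = (1 - prev) * (1 - llm_spec) * (1 - model_spec)    # true negatives: both wrongly positive
    llm_pos = prev * llm_sens + (1 - prev) * (1 - llm_spec)      # all cases the LLM labels positive
    return (tp_agree + fp_agree) / llm_pos

# Perfect model, 10% prevalence, LLM with 100% sensitivity / 90% specificity:
# LLM-positive fraction = 0.10 + 0.09 = 0.19, but the model agrees only on the 0.10
print(round(apparent_sensitivity(0.10, 1.0, 0.90), 2))  # 0.53
```

The same arithmetic mirrored for apparent specificity (conditioning on LLM-negative labels) reproduces the high-prevalence result, where LLM false negatives dilute the apparent specificity instead.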
Citations: 0
The Cost of Convenience: How Labeling Errors by Large Language Models Can Distort Evaluations of AI Performance.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2026-03-01 DOI: 10.1148/ryai.251133
Ahmed Maiter, Anna Zapaishchykova
Radiology-Artificial Intelligence, 8(2): e251133. (No abstract available.)
Citations: 0