Latest Articles in Radiology: Artificial Intelligence

Development and Validation of a Sham-AI Model for Intracranial Aneurysm Detection at CT Angiography.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-19 DOI: 10.1148/ryai.240140
Zhao Shi, Bin Hu, Mengjie Lu, Manting Zhang, Haiting Yang, Bo He, Jiyao Ma, Chunfeng Hu, Li Lu, Sheng Li, Shiyu Ren, Yonggao Zhang, Jun Li, Mayidili Nijiati, Jia-Ke Dong, Hao Wang, Zhen Zhou, Fan Dong Zhang, Chengwei Pan, Yizhou Yu, Zijian Chen, Chang Sheng Zhou, Yongyue Wei, Junlin Zhou, Long Jiang Zhang
Note: "Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. Articles undergo copyediting, layout, and proof review before final publication, and errors affecting content may be discovered during production.

Purpose: To evaluate a Sham-AI model acting as a placebo control for a Standard-AI model for intracranial aneurysm diagnosis.

Materials and Methods: This retrospective, crossover, blinded, multireader multicase study was conducted from November 2022 to March 2023. A Sham-AI model with near-zero sensitivity and specificity similar to that of a Standard-AI model was developed using 16,422 CT angiography (CTA) examinations. Digital subtraction angiography-verified CTA examinations from four hospitals were collected; half were processed by the Standard-AI model and the other half by the Sham-AI model to generate sequence A, and sequence B was generated with the assignments reversed. Twenty-eight radiologists from seven hospitals were randomly assigned one sequence and then, after a washout period, read the other. The diagnostic performance of radiologists alone, radiologists with Standard-AI assistance, and radiologists with Sham-AI assistance was compared using sensitivity and specificity, and radiologists' susceptibility to Sham-AI suggestions was assessed.

Results: The testing dataset included 300 patients (median age, 61 years [IQR, 52.0-67.0]; 199 male), 50 of whom had aneurysms. The Standard-AI and Sham-AI models performed as expected (sensitivity, 96.0% vs 0.0%; specificity, 82.0% vs 76.0%). The differences in sensitivity and specificity between Standard-AI-assisted and Sham-AI-assisted readings were +20.7% (95% CI: 15.8%, 25.5%; superiority) and 0.0% (95% CI: -2.0%, 2.0%; noninferiority), respectively. The difference between Sham-AI-assisted readings and radiologists alone was -2.6% (95% CI: -3.8%, -1.4%; noninferiority) for both sensitivity and specificity. Of radiologists' unassisted results, 5.3% (44/823) of true-positive and 1.2% (7/577) of false-negative findings were changed following Sham-AI suggestions.

Conclusion: Radiologists' diagnostic performance was not compromised when aided by the proposed Sham-AI model compared with their unassisted performance.

Published under a CC BY 4.0 license.
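The sensitivity and specificity comparisons above reduce to simple proportions over the confusion matrix. A minimal sketch; the raw true/false counts below are back-calculated from the reported percentages and the 50 aneurysm / 250 aneurysm-free split, for illustration only:

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of aneurysm cases flagged positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of aneurysm-free cases flagged negative."""
    return tn / (tn + fp)

# Counts back-calculated from the reported percentages (illustrative).
standard_ai = (sensitivity(tp=48, fn=2), specificity(tn=205, fp=45))  # (0.96, 0.82)
sham_ai = (sensitivity(tp=0, fn=50), specificity(tn=190, fp=60))      # (0.0, 0.76)
```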
Citations: 0
Open-Weight Language Models and Retrieval Augmented Generation for Automated Structured Data Extraction from Diagnostic Reports: Assessment of Approaches and Parameters.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-12 DOI: 10.1148/ryai.240551
Mohamed Sobhi Jabal, Pranav Warman, Jikai Zhang, Kartikeye Gupta, Ayush Jain, Maciej Mazurowski, Walter Wiggins, Kirti Magudia, Evan Calabrese
Purpose: To develop and evaluate an automated system for extracting structured clinical information from unstructured radiology and pathology reports using open-weights language models (LMs) and retrieval-augmented generation (RAG), and to assess the effects of model configuration variables on extraction performance.

Materials and Methods: This retrospective study used two datasets: 7,294 radiology reports annotated with Brain Tumor Reporting and Data System (BT-RADS) scores and 2,154 pathology reports annotated with IDH mutation status (January 2017 to July 2021). An automated pipeline was developed to benchmark the structured data extraction accuracy of various LM and RAG configurations. The impact of model size, quantization, prompting strategy, output formatting, and inference parameters on accuracy was systematically evaluated.

Results: The best-performing models achieved up to 98% accuracy in extracting BT-RADS scores from radiology reports and over 90% accuracy in extracting IDH mutation status from pathology reports; the best model was a medically fine-tuned Llama 3. Larger, newer, and domain fine-tuned models consistently outperformed older and smaller models (mean accuracy, 86% vs 75%; P < .001). Model quantization had minimal impact on performance. Few-shot prompting significantly improved accuracy (mean increase, 32% ± 32%; P = .02). RAG improved performance on complex pathology reports (+48% ± 11%; P = .001) but not on shorter radiology reports (-8% ± 31%; P = .39).

Conclusion: This study demonstrates the potential of open LMs for automated extraction of structured clinical data from unstructured clinical reports in local, privacy-preserving applications. Careful model selection, prompt engineering, and semiautomated optimization using annotated data are critical for optimal performance.

©RSNA, 2025.
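Few-shot prompting, which the study found to significantly improve accuracy, amounts to prepending labeled report/answer pairs to the extraction query. A minimal sketch of such a prompt builder; the example report snippets, labels, and exact prompt wording are invented for illustration and are not from the paper:

```python
# Hypothetical few-shot examples; snippets and scores are invented.
FEW_SHOT_EXAMPLES = [
    ("Interval decrease in enhancing tumor burden.", "2"),
    ("New nodular enhancement concerning for progression.", "4"),
]

def build_prompt(report_text: str) -> str:
    """Assemble a few-shot extraction prompt that asks the LM
    to answer with the BT-RADS score only."""
    lines = [
        "Extract the BT-RADS score from the radiology report.",
        "Answer with the score only.",
        "",
    ]
    for example, label in FEW_SHOT_EXAMPLES:
        lines += [f"Report: {example}", f"Score: {label}", ""]
    lines += [f"Report: {report_text}", "Score:"]
    return "\n".join(lines)
```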
Citations: 0
Enhancing Large Language Models with Retrieval-augmented Generation: A Radiology-specific Approach.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-12 DOI: 10.1148/ryai.240313
Dane A Weinert, Andreas M Rauschecker
Retrieval-augmented generation (RAG) is a strategy for improving the performance of large language models (LLMs) by supplying the model, at answer-generation time, with an up-to-date corpus of knowledge. RAG may improve LLM performance and clinical applicability in radiology by providing citable, current information without requiring model fine-tuning. In this retrospective study, a radiology-specific RAG system was developed using a vector database of 3,689 RadioGraphics articles published from January 1999 to December 2023. The performance of five LLMs with and without RAG on a 192-question radiology examination was compared. RAG significantly improved examination scores for GPT-4 (81.2% vs 75.5%; P = .04) and Command R+ (70.3% vs 62.0%; P = .02) but not for Claude Opus, Mixtral, or Gemini 1.5 Pro. The RAG system performed significantly better than the pure LLMs on a 24-question subset sourced directly from RadioGraphics (85% vs 76%; P = .03). It retrieved 21 of 24 (87.5%; P < .001) relevant RadioGraphics references cited in the examination's answer explanations and successfully cited them in 18 of 21 (85.7%; P < .001) outputs. The results suggest that RAG is a promising approach for enhancing LLM capabilities on radiology knowledge tasks, providing transparent, domain-specific information retrieval. ©RSNA, 2025.
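At its core, the retrieval step of a RAG system like the one described embeds the query and returns the nearest articles by vector similarity. A toy sketch assuming precomputed embeddings; the actual system's embedding model and vector database are not specified here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def retrieve(query_embedding, corpus, k=3):
    """Return the k articles whose embeddings are most similar to the query."""
    ranked = sorted(corpus,
                    key=lambda doc: cosine(query_embedding, doc["embedding"]),
                    reverse=True)
    return ranked[:k]
```

The retrieved article text would then be prepended to the LLM prompt so the model can ground and cite its answer.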
Citations: 0
External Testing of a Commercial AI Algorithm for Breast Cancer Detection at Screening Mammography.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-12 DOI: 10.1148/ryai.240287
John Brandon Graham-Knight, Pengkun Liang, Wenna Lin, Quinn Wright, Hua Shen, Colin Mar, Janette Sam, Rasika Rajapakshe
Purpose: To test a commercial artificial intelligence (AI) system for breast cancer detection in the BC Cancer Breast Screening Program.

Materials and Methods: In this retrospective study of 136,700 women (mean age, 58.8 years ± 9.4 [SD]; median, 59.0 years; IQR, 14.0 years) who underwent digital mammography screening in British Columbia, Canada, between February 2019 and January 2020, the breast cancer detection performance of a commercial AI algorithm was stratified by demographic, clinical, and imaging features and evaluated using the area under the receiver operating characteristic curve (AUC), and AI performance was compared with that of radiologists using sensitivity and specificity.

Results: At 1-year follow-up, the AUC of the AI algorithm for breast cancer detection was 0.93 (95% CI: 0.92, 0.94). Statistically significant differences were found across radiologist-assigned BI-RADS breast density categories: A, 0.96 (0.94-0.91); B, 0.94 (0.92-0.95); C, 0.93 (0.91-0.95); and D, 0.84 (0.76-0.91) (A vs D, P = .002; B vs D, P = .009; C vs D, P = .02). The AI algorithm showed higher performance on mammograms with architectural distortion (0.96; 0.94-0.98) than without (0.92; 0.90-0.93; P = .003) and lower performance on mammograms with calcifications (0.87; 0.85-0.90) than without (0.92; 0.91-0.94; P < .001). Radiologists' sensitivity (92.6% ± 1.0) exceeded that of the AI algorithm (89.4% ± 1.1; P = .01), but there was no evidence of a difference at 2-year follow-up (83.5% ± 1.2 vs 84.3% ± 1.2; P = .69).

Conclusion: The tested commercial AI algorithm generalized to a large external breast cancer screening cohort from Canada but showed different performance for some subgroups, including mammograms with architectural distortion or calcifications.

©RSNA, 2025.
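The AUC reported above can be interpreted, via the Mann-Whitney U statistic, as the probability that a randomly chosen cancer case receives a higher AI score than a randomly chosen noncancer case. A minimal sketch of that rank-based computation (illustrative only, not the study's evaluation code):

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) score pairs in which the positive case
    scores higher, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```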
Citations: 0
A Pipeline for Automated Quality Control of Chest Radiographs.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-05 DOI: 10.1148/ryai.240003
Ian A Selby, Eduardo González Solares, Anna Breger, Michael Roberts, Lorena Escudero Sánchez, Judith Babar, James H F Rudd, Nicholas A Walton, Evis Sala, Carola-Bibiane Schönlieb, Jonathan R Weir-McCall
This article presents a suite of quality control tools for chest radiographs based on traditional and artificial intelligence methods, developed and tested with data from 39 centers in 7 countries. Published under a CC BY 4.0 license.
Citations: 0
Evaluating Skellytour for Automated Skeleton Segmentation from Whole-Body CT Images.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-01 DOI: 10.1148/ryai.240050
Daniel C Mann, Michael W Rutherford, Phillip Farmer, Joshua M Eichhorn, Fathima Fijula Palot Manzil, Christopher P Wardell
Purpose: To construct and evaluate the performance of a machine learning model for bone segmentation using whole-body CT images.

Materials and Methods: In this retrospective study, whole-body CT scans (June 2010 to January 2018) from 90 patients (mean age, 61 years ± 9 [SD]; 45 male, 45 female) with multiple myeloma were manually segmented using 60 labels and subsegmented into cortical and trabecular bone. Segmentations were verified by board-certified radiology and nuclear medicine physicians. The impacts of isotropy, resolution, multiple labeling schemes, and postprocessing were assessed. Model performance was assessed on internal and external test datasets (362 scans) and benchmarked against the TotalSegmentator segmentation model using the Dice similarity coefficient (DSC), normalized surface distance (NSD), and manual inspection.

Results: Skellytour achieved consistently high segmentation performance on the internal dataset (DSC, 0.94; NSD, 0.99) and two external datasets (DSC, 0.94 and 0.96; NSD, 0.999 and 1.0), outperforming TotalSegmentator on the first two datasets. Subsegmentation performance was also high (DSC, 0.95; NSD, 0.995). Skellytour produced finely detailed segmentations, even in low-density bones.

Conclusion: The study demonstrates that Skellytour is an accurate and generalizable bone segmentation and subsegmentation model for CT data; it is available as a Python package via GitHub (https://github.com/cpwardell/Skellytour).

Keywords: CT, Informatics, Skeletal-Axial, Demineralization-Bone, Comparative Studies, Segmentation, Supervised Learning, Convolutional Neural Network (CNN). Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Khosravi and Rouzrokh in this issue.
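The Dice similarity coefficient used to benchmark Skellytour measures volumetric overlap between a predicted and a reference mask. A minimal sketch over flattened binary masks (illustrative only, not the study's evaluation code):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks, given as
    flat sequences of 0/1 voxel labels: 2|A intersect B| / (|A| + |B|)."""
    assert len(pred) == len(truth), "masks must cover the same voxels"
    intersection = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 1.0 if size == 0 else 2.0 * intersection / size
```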
Citations: 0
Erratum for: CMRxRecon2024: A Multimodality, Multiview k-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-01 DOI: 10.1148/ryai.259001
Zi Wang, Fanwen Wang, Chen Qin, Jun Lyu, Cheng Ouyang, Shuo Wang, Yan Li, Mengyao Yu, Haoyu Zhang, Kunyuan Guo, Zhang Shi, Qirong Li, Ziqiang Xu, Yajing Zhang, Hao Li, Sha Hua, Binghua Chen, Longyu Sun, Mengting Sun, Qing Li, Ying-Hua Chu, Wenjia Bai, Jing Qin, Xiahai Zhuang, Claudia Prieto, Alistair Young, Michael Markl, He Wang, Lian-Ming Wu, Guang Yang, Xiaobo Qu, Chengyan Wang
Citations: 0
Bone Appetit: Skellytour Sets the Table for Robust Skeletal Segmentation.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-01 DOI: 10.1148/ryai.250057
Bardia Khosravi, Pouria Rouzrokh
Citations: 0
CMRxRecon2024: A Multimodality, Multiview k-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-01 DOI: 10.1148/ryai.240443
Zi Wang, Fanwen Wang, Chen Qin, Jun Lyu, Cheng Ouyang, Shuo Wang, Yan Li, Mengyao Yu, Haoyu Zhang, Kunyuan Guo, Zhang Shi, Qirong Li, Ziqiang Xu, Yajing Zhang, Hao Li, Sha Hua, Binghua Chen, Longyu Sun, Mengting Sun, Qing Li, Ying-Hua Chu, Wenjia Bai, Jing Qin, Xiahai Zhuang, Claudia Prieto, Alistair Young, Michael Markl, He Wang, Lian-Ming Wu, Guang Yang, Xiaobo Qu, Chengyan Wang
Citations: 0
Deep Learning-based Brain Age Prediction Using MRI to Identify Fetuses with Cerebral Ventriculomegaly.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-03-01 DOI: 10.1148/ryai.240115
Hyuk Jin Yun, Han-Jui Lee, Sungmin You, Joo Young Lee, Jerjes Aguirre-Chavez, Lana Vasung, Hyun Ju Lee, Tomo Tarui, Henry A Feldman, P Ellen Grant, Kiho Im
Fetal ventriculomegaly (VM), its severity, and its associated central nervous system (CNS) abnormalities are important indicators of high risk for impaired neurodevelopmental outcomes. Recently, a fetal brain age prediction method using a two-dimensional (2D) single-channel convolutional neural network (CNN) with multiplanar MRI sections showed potential to detect fetuses with VM. This study examined the diagnostic performance of a deep learning-based fetal brain age prediction model in distinguishing fetuses with VM (n = 317) from typically developing fetuses (n = 183), by VM severity, and by the presence of associated CNS abnormalities. The predicted age difference (PAD) was measured by subtracting the predicted brain age from the gestational age in fetuses with VM and typical development. PAD and the absolute value of PAD (AAD) were compared between fetuses with VM and typically developing fetuses, as well as between subgroups defined by VM severity and by the presence of associated CNS abnormalities. Fetuses with VM showed significantly larger AAD than typically developing fetuses (P < .001), and fetuses with severe VM showed larger AAD than those with moderate VM (P = .004). Fetuses with VM and associated CNS abnormalities had significantly lower PAD than fetuses with isolated VM (P = .005). These findings suggest that fetal brain age prediction using the 2D single-channel CNN method can assist in identifying not only ventricular enlargement but also associated CNS abnormalities.

Keywords: MR-Fetal (Fetal MRI), Brain/Brain Stem, Fetus, Supervised Learning, Machine Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms. Supplemental material is available for this article. ©RSNA, 2025.
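PAD and AAD as defined in the abstract are simple differences; a minimal sketch (ages in weeks, function names invented for illustration):

```python
def predicted_age_difference(gestational_age_wk, predicted_brain_age_wk):
    """PAD: gestational age minus model-predicted brain age, per the
    abstract's definition. Negative values mean the model predicts an
    older-appearing brain than gestational age suggests."""
    return gestational_age_wk - predicted_brain_age_wk

def absolute_age_difference(gestational_age_wk, predicted_brain_age_wk):
    """AAD: absolute value of PAD, measuring deviation in either direction."""
    return abs(predicted_age_difference(gestational_age_wk, predicted_brain_age_wk))
```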
Citations: 0