Radiology-Artificial Intelligence: Latest Articles

Evaluating Skellytour for Automated Skeleton Segmentation from Whole-Body CT Images.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-19 DOI: 10.1148/ryai.240050
Daniel C Mann, Michael W Rutherford, Phillip Farmer, Joshua M Eichhorn, Fathima Fijula Palot Manzil, Christopher P Wardell
{"title":"Evaluating Skellytour for Automated Skeleton Segmentation from Whole-Body CT Images.","authors":"Daniel C Mann, Michael W Rutherford, Phillip Farmer, Joshua M Eichhorn, Fathima Fijula Palot Manzil, Christopher P Wardell","doi":"10.1148/ryai.240050","DOIUrl":"https://doi.org/10.1148/ryai.240050","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To construct and evaluate the performance of a machine learning model for bone segmentation using whole-body CT images. Materials and Methods In this retrospective study, whole-body CT scans (June 2010 to January 2018) from 90 patients (mean age, 61 ± [SD] 9 years; 45 male, 45 female) with multiple myeloma were manually segmented using 60 labels and subsegmented into cortical and trabecular bone. Segmentations were verified by board-certified radiology and nuclear medicine physicians. The impacts of isotropy, resolution, multiple labeling schemes, and postprocessing were assessed. Model performance was assessed on internal and external test datasets (<i>n</i> = 362 scans) and benchmarked against the TotalSegmentator segmentation model. Performance was assessed using Dice similarity coefficient (DSC), normalized surface distance (NSD), and manual inspection. Results Skellytour achieved consistently high segmentation performance on the internal dataset (DSC: 0.94, NSD: 0.99) and two external datasets (DSC: 0.94, 0.96, NSD: 0.999, 1.0), outperforming TotalSegmentator on the first two datasets. Subsegmentation performance was also high (DSC: 0.95, NSD: 0.995). Skellytour produced finely detailed segmentations, even in low density bones. Conclusion The study demonstrates that Skellytour is an accurate and generalizable bone segmentation and subsegmentation model for CT data and is available as a Python package via GitHub (https://github.com/cpwardell/Skellytour). Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240050"},"PeriodicalIF":8.1,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
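For reference, the two headline metrics above are straightforward to compute for binary 3D masks. Below is a minimal NumPy/SciPy sketch of DSC and NSD (symmetric surface dice at a voxel tolerance); it is illustrative only, not code from the Skellytour package:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: mask minus its erosion."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def nsd(pred: np.ndarray, gt: np.ndarray, tol_voxels: float = 1.0) -> float:
    """Normalized surface distance: fraction of surface voxels of each mask
    lying within `tol_voxels` of the other mask's surface."""
    sp, sg = surface(pred), surface(gt)
    dist_to_gt = distance_transform_edt(~sg)    # distance to nearest gt surface voxel
    dist_to_pred = distance_transform_edt(~sp)  # distance to nearest pred surface voxel
    total = sp.sum() + sg.sum()
    if total == 0:
        return 1.0
    ok = (dist_to_gt[sp] <= tol_voxels).sum() + (dist_to_pred[sg] <= tol_voxels).sum()
    return ok / total

# Toy usage on a small synthetic volume:
gt = np.zeros((32, 32, 32), bool); gt[8:24, 8:24, 8:24] = True
pred = np.zeros_like(gt); pred[9:24, 8:24, 8:24] = True
print(f"DSC={dice(pred, gt):.3f}, NSD={nsd(pred, gt, tol_voxels=1.0):.3f}")
```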
Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Using a Longitudinally Aware Segmentation Network.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-19 DOI: 10.1148/ryai.240229
Xin Tie, Muheon Shin, Changhee Lee, Scott B Perlman, Zachary Huemann, Amy J Weisman, Sharon M Castellino, Kara M Kelly, Kathleen M McCarten, Adina L Alazraki, Junjie Hu, Steve Y Cho, Tyler J Bradshaw
{"title":"Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Using a Longitudinally Aware Segmentation Network.","authors":"Xin Tie, Muheon Shin, Changhee Lee, Scott B Perlman, Zachary Huemann, Amy J Weisman, Sharon M Castellino, Kara M Kelly, Kathleen M McCarten, Adina L Alazraki, Junjie Hu, Steve Y Cho, Tyler J Bradshaw","doi":"10.1148/ryai.240229","DOIUrl":"https://doi.org/10.1148/ryai.240229","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a longitudinally-aware segmentation network (LAS-Net) that can quantify serial PET/CT images for pediatric patients with Hodgkin lymphoma. Materials and Methods This retrospective study included baseline (PET1) and interim (PET2) PET/CT images from 297 pediatric patients enrolled in two Children's Oncology Group clinical trials (AHOD1331 and AHOD0831). The internal dataset included 200 patients (enrolled between March 2015-August 2019; median age, 15.4 [IQR: 5.6, 22.0] years; 107 male), and the external testing dataset included 97 patients (enrolled between December 2009-January 2012; median age, 15.8 [IQR: 5.2, 21.4] years; 59 male). LAS-Net incorporates longitudinal cross-attention, allowing relevant features from PET1 to inform the analysis of PET2. The model's lesion segmentation performance on PET1 images was evaluated using Dice coefficients and lesion detection performance on PET2 images was evaluated using F1 scores. Additionally, quantitative PET metrics, including metabolic tumor volume (MTV) and total lesion glycolysis (TLG) in PET1, as well as qPET and ∆SUVmax in PET2, were extracted and compared against physician-derived measurements. Agreement between model and physician-derived measurements was quantified using Spearman correlation, and bootstrap resampling was employed for statistical analysis. Results LAS-Net detected residual lymphoma on PET2 scans with an F1 score of 0.61 (precision/recall: 0.62/0.60), outperforming all comparator methods (<i>P</i> < .01). For baseline segmentation, LAS-Net achieved a mean Dice score of 0.77. In PET quantification, LAS-Net's measurements of qPET, ∆SUVmax, MTV and TLG were strongly correlated with physician measurements, with Spearman's ρ values of 0.78, 0.80, 0.93 and 0.96, respectively. The quantification performance remained high, with a slight decrease, in an external testing cohort. Conclusion LAS-Net demonstrated significant improvements in quantifying PET metrics across serial scans in pediatric patients with Hodgkin lymphoma, highlighting the value of longitudinal awareness in evaluating multitime-point imaging datasets. 
©RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240229"},"PeriodicalIF":8.1,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
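The architectural idea named in this abstract, interim-scan features attending to baseline-scan features, can be sketched with standard attention primitives. A minimal PyTorch illustration follows; the module name, token layout, and dimensions are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class LongitudinalCrossAttention(nn.Module):
    """Sketch: PET2 (interim) tokens query PET1 (baseline) tokens."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pet2_tokens: torch.Tensor, pet1_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the interim scan; keys/values from the baseline,
        # so baseline findings can inform interim-lesion analysis.
        attended, _ = self.attn(pet2_tokens, pet1_tokens, pet1_tokens)
        return self.norm(pet2_tokens + attended)  # residual + layer norm

# Toy usage: batch of 2 studies, 64 feature tokens per scan, 128 channels.
pet1 = torch.randn(2, 64, 128)
pet2 = torch.randn(2, 64, 128)
out = LongitudinalCrossAttention()(pet2, pet1)
print(out.shape)  # torch.Size([2, 64, 128])
```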
Deep Learning-based Brain Age Prediction Using MRI to Identify Fetuses with Cerebral Ventriculomegaly.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-19 DOI: 10.1148/ryai.240115
Hyuk Jin Yun, Han-Jui Lee, Sungmin You, Joo Young Lee, Jerjes Aguirre-Chavez, Lana Vasung, Hyun Ju Lee, Tomo Tarui, Henry A Feldman, P Ellen Grant, Kiho Im
{"title":"Deep Learning-based Brain Age Prediction Using MRI to Identify Fetuses with Cerebral Ventriculomegaly.","authors":"Hyuk Jin Yun, Han-Jui Lee, Sungmin You, Joo Young Lee, Jerjes Aguirre-Chavez, Lana Vasung, Hyun Ju Lee, Tomo Tarui, Henry A Feldman, P Ellen Grant, Kiho Im","doi":"10.1148/ryai.240115","DOIUrl":"https://doi.org/10.1148/ryai.240115","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Fetal ventriculomegaly (VM) and its severity and associated central nervous system (CNS) abnormalities are important indicators of high risk for impaired neurodevelopmental outcomes. Recently, a novel fetal brain age prediction method using a 2D single-channel convolutional neural network (CNN) with multiplanar MRI slices showed the potential to detect fetuses with VM. The purpose of this study examines the diagnostic performance of deep learning-based fetal brain age prediction model to distinguish fetuses with VM (<i>n</i> = 317) from typically developing fetuses (<i>n</i> = 183), the severity of VM, and the presence of associated CNS abnormalities. The predicted age difference (PAD) was measured by subtracting predicted brain age from gestational age in fetuses with VM and typically development. PAD and absolute value of PAD (AAD) were compared between VM and typically developing fetuses. In addition, PAD and AAD were compared between subgroups by VM severity and the presence of associated CNS abnormalities in VM. Fetuses with VM showed significantly larger AAD than typically developing (<i>P</i> < .001), and fetuses with severe VM showed larger AAD than those with moderate VM (<i>P</i> = .004). Fetuses with VM and associated CNS abnormalities had significantly lower PAD than fetuses with isolated VM (<i>P</i> = .005). These findings suggest that fetal brain age prediction using the 2D single-channel CNN method has the clinical ability to assist in identifying not only the enlargement of the ventricles but also the presence of associated CNS abnormalities. ©RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240115"},"PeriodicalIF":8.1,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
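The PAD/AAD analysis reduces to simple arithmetic on paired age estimates followed by a group comparison. A short sketch with synthetic data follows; the definitions mirror the abstract (PAD = gestational age minus predicted brain age), but every number is invented:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Illustrative cohorts: gestational age (GA) and model-predicted brain age, in weeks.
ga_vm = rng.uniform(20, 38, 317)
pred_vm = ga_vm + rng.normal(0, 2.0, 317)    # larger prediction errors in VM
ga_typ = rng.uniform(20, 38, 183)
pred_typ = ga_typ + rng.normal(0, 0.8, 183)  # tighter errors in typical development

# PAD = GA - predicted brain age (as defined in the abstract); AAD = |PAD|.
aad_vm = np.abs(ga_vm - pred_vm)
aad_typ = np.abs(ga_typ - pred_typ)

stat, p = mannwhitneyu(aad_vm, aad_typ, alternative="two-sided")
print(f"median AAD (VM) = {np.median(aad_vm):.2f} wk, "
      f"median AAD (typical) = {np.median(aad_typ):.2f} wk, P = {p:.2g}")
```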
Machine Learning and Deep Learning Models for Automated Protocoling of Emergency Brain MRI Using Text from Clinical Referrals.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-19 DOI: 10.1148/ryai.230620
Heidi J Huhtanen, Mikko J Nyman, Antti Karlsson, Jussi Hirvonen
{"title":"Machine Learning and Deep Learning Models for Automated Protocoling of Emergency Brain MRI Using Text from Clinical Referrals.","authors":"Heidi J Huhtanen, Mikko J Nyman, Antti Karlsson, Jussi Hirvonen","doi":"10.1148/ryai.230620","DOIUrl":"https://doi.org/10.1148/ryai.230620","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and evaluate machine learning and deep learning-based models for automated protocoling of emergency brain MRI scans based on clinical referral text. Materials and Methods In this single-institution, retrospective study of 1953 emergency brain MRI referrals from January 2016 to January 2019, two neuroradiologists labeled the imaging protocol and use of contrast agent as the reference standard. Three machine learning algorithms (Naïve Bayes, support vector machine, and XGBoost) and two pretrained deep learning models (Finnish BERT and GPT-3.5) were developed to predict the MRI protocol and need for contrast agent. Each model was trained with three datasets (100% of training data, 50% of training data, and 50% + augmented training data). Prediction accuracy was assessed with test set. Results The GPT-3.5 models trained with 100% of the training data performed best in both tasks, achieving accuracy of 84% (95% CI: 80%-88%) for the correct protocol and 91% (95% CI: 88%-94%) for contrast. BERT had an accuracy of 78% (95% CI: 74%-82%) for the protocol and 89% (95% CI: 86%-92%) for contrast. The best machine learning model in the protocol task was XGBoost (accuracy 78% [95% CI: 73%-82%]), and in the contrast agent task support vector machine and XGBoost (accuracy 88% [95% CI: 84%-91%] for both). The accuracies of two nonneuroradiologists were 80%-83% in the protocol task and 89%-91% in the contrast medium task. Conclusion Machine learning and deep learning models demonstrated high performance in automatic protocoling emergency brain MRI scans based on text from clinical referrals. Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230620"},"PeriodicalIF":8.1,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
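The classical machine learning baselines described here are typically built as a text-vectorizer-plus-classifier pipeline. A minimal scikit-learn sketch follows; the referral texts, protocol labels, and pipeline choices are illustrative assumptions, not the study's implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented example referrals (English stand-ins for clinical free text).
referrals = [
    "sudden headache, suspected hemorrhage, rule out aneurysm",
    "first seizure, evaluate for epileptogenic lesion",
    "suspected MS relapse, new optic neuritis",
    "head trauma after fall, exclude contusion",
]
protocols = ["stroke", "epilepsy", "demyelination", "trauma"]

# TF-IDF over unigrams and bigrams feeding a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(referrals, protocols)
print(model.predict(["new seizure episode, query epilepsy protocol"]))
```

The same pipeline shape works for the binary contrast-agent task by swapping the label set.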
Performance of Two Deep Learning-based AI Models for Breast Cancer Detection and Localization on Screening Mammograms from BreastScreen Norway.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-05 DOI: 10.1148/ryai.240039
Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind
{"title":"Performance of Two Deep Learning-based AI Models for Breast Cancer Detection and Localization on Screening Mammograms from BreastScreen Norway.","authors":"Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind","doi":"10.1148/ryai.240039","DOIUrl":"https://doi.org/10.1148/ryai.240039","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate cancer detection and marker placement accuracy of two artificial intelligence (AI) models developed for interpretation of screening mammograms. Materials and Methods This retrospective study included data from 129 434 screening examinations (all female, mean age 59.2, SD = 5.8) performed between January 2008 and December 2018 in BreastScreen Norway. Model A was commercially available and B was an in-house model. Area under the receiver operating characteristic curve (AUC) with 95% confidence interval (CIs) were calculated. The study defined 3.2% and 11.1% of the examinations with the highest AI scores as positive, threshold 1 and 2, respectively. A radiologic review assessed location of AI markings and classified interval cancers as true or false negative. Results The AUC was 0.93 (95% CI: 0.92-0.94) for model A and B when including screen-detected and interval cancers. Model A identified 82.5% (611/741) of the screen-detected cancers at threshold 1 and 92.4% (685/741) at threshold 2. For model B, the numbers were 81.8% (606/741) and 93.7% (694/741), respectively. The AI markings were correctly localized for all screen-detected cancers identified by both models and 82% (56/68) of the interval cancers for model A and 79% (54/68) for B. At the review, 21.6% (45/208) of the interval cancers were identified at the preceding screening by either or both models, correctly localized and classified as false negative (<i>n</i> = 17) or with minimal signs of malignancy (<i>n</i> = 28). Conclusion Both AI models showed promising performance for cancer detection on screening mammograms. The AI markings corresponded well to the true cancer locations. ©RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240039"},"PeriodicalIF":8.1,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143190743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
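The two operating points in this study are percentile thresholds on the AI score distribution. The sketch below shows how such thresholds and the resulting sensitivity among cancers can be derived; scores and labels are simulated, and only the cohort sizes echo the abstract:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 129_434
cancer = np.zeros(n, dtype=bool)
cancer[:741] = True  # screen-detected cancer count from the abstract
# Toy AI scores in [0, 1]: cancers skew high, normals skew low.
scores = np.where(cancer, rng.beta(5, 2, n), rng.beta(2, 5, n))

print(f"AUC = {roc_auc_score(cancer, scores):.2f}")

for pct in (3.2, 11.1):  # thresholds 1 and 2 from the abstract
    thr = np.percentile(scores, 100 - pct)
    flagged = scores >= thr
    sens = (flagged & cancer).sum() / cancer.sum()
    print(f"top {pct}% threshold: sensitivity among cancers = {sens:.1%}")
```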
Performance of Lung Cancer Prediction Models for Screening-detected, Incidental, and Biopsied Pulmonary Nodules.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-05 DOI: 10.1148/ryai.230506
Thomas Z Li, Kaiwen Xu, Aravind Krishnan, Riqiang Gao, Michael N Kammer, Sanja Antic, David Xiao, Michael Knight, Yency Martinez, Rafael Paez, Robert J Lentz, Stephen Deppen, Eric L Grogan, Thomas A Lasko, Kim L Sandler, Fabien Maldonado, Bennett A Landman
{"title":"Performance of Lung Cancer Prediction Models for Screening-detected, Incidental, and Biopsied Pulmonary Nodules.","authors":"Thomas Z Li, Kaiwen Xu, Aravind Krishnan, Riqiang Gao, Michael N Kammer, Sanja Antic, David Xiao, Michael Knight, Yency Martinez, Rafael Paez, Robert J Lentz, Stephen Deppen, Eric L Grogan, Thomas A Lasko, Kim L Sandler, Fabien Maldonado, Bennett A Landman","doi":"10.1148/ryai.230506","DOIUrl":"https://doi.org/10.1148/ryai.230506","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate the performance of eight lung cancer prediction models on patient cohorts with screening-detected, incidentally-detected, and bronchoscopically-biopsied pulmonary nodules. Materials and Methods This study retrospectively evaluated promising predictive models for lung cancer prediction in three clinical settings: lung cancer screening with low-dose CT, incidentally, detected pulmonary nodules, and nodules deemed suspicious enough to warrant a biopsy. The area under the receiver operating characteristic curve (AUC) of eight validated models including logistic regressions on clinical variables and radiologist nodule characterizations, artificial intelligence (AI) on chest CTs, longitudinal imaging AI, and multimodal approaches for prediction of lung cancer risk was assessed in 9 cohorts (<i>n</i> = 898, 896, 882, 219, 364, 117, 131, 115, 373) from multiple institutions. Each model was implemented from their published literature, and each cohort was curated from primary data sources collected over periods within 2002 to 2021. Results No single predictive model emerged as the highest-performing model across all cohorts, but certain models performed better in specific clinical contexts. Single timepoint chest CT AI performed well for screening-detected nodules but did not generalize well to other clinical settings. Longitudinal imaging and multimodal models demonstrated comparatively good performance on incidentally-detected nodules. When applied to biopsied nodules, all models showed low performance. Conclusion Eight lung cancer prediction models failed to generalize well across clinical settings and sites outside of their training distributions. ©RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230506"},"PeriodicalIF":8.1,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143190741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
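Evaluating generalization in this design means scoring each model on each cohort and reporting AUC with uncertainty. A minimal bootstrap-CI sketch follows; cohort names, prevalences, and model scores are all simulated, with a few cohort sizes borrowed from the abstract for flavor:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_ci(y, p, n_boot=2000, seed=0):
    """Point AUC plus bootstrap 95% CI for one cohort."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(y))
    aucs = []
    for _ in range(n_boot):
        s = rng.choice(idx, size=len(idx), replace=True)
        if y[s].min() == y[s].max():  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y[s], p[s]))
    return roc_auc_score(y, p), np.percentile(aucs, [2.5, 97.5])

rng = np.random.default_rng(2)
for name, n in [("screening", 898), ("incidental", 364), ("biopsied", 117)]:
    y = rng.random(n) < 0.3                              # toy malignancy labels
    p = np.clip(y * 0.25 + rng.random(n) * 0.75, 0, 1)   # toy model scores
    auc, (lo, hi) = auc_ci(y, p)
    print(f"{name}: AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```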
Physics-Informed Autoencoder for Prostate Tissue Microstructure Profiling with Hybrid Multidimensional MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-05 DOI: 10.1148/ryai.240167
Batuhan Gundogdu, Aritrick Chatterjee, Milica Medved, Ulas Bagci, Gregory S Karczmar, Aytekin Oto
{"title":"Physics-Informed Autoencoder for Prostate Tissue Microstructure Profiling with Hybrid Multidimensional MRI.","authors":"Batuhan Gundogdu, Aritrick Chatterjee, Milica Medved, Ulas Bagci, Gregory S Karczmar, Aytekin Oto","doi":"10.1148/ryai.240167","DOIUrl":"https://doi.org/10.1148/ryai.240167","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate the performance of Physics-Informed Autoencoder (PIA), a self-supervised deep learning model, in measuring tissue-based biomarkers for prostate cancer (PCa) using hybrid multidimensional MRI. Materials and Methods This retrospective study introduces PIA, a novel self-supervised deep learning model that integrates a three-compartment diffusion-relaxation model with hybrid multidimensional MRI. PIA was trained to encode the biophysical model into a deep neural network to predict measurements of tissue-specific biomarkers for PCa without extensive training data requirements. Comprehensive <i>in-silico</i> and <i>in-vivo</i> experiments, using histopathology measurements as the reference standard, were conducted to validate the model's efficacy in comparison to the traditional Non-Linear Least Squares (NLLS) algorithm. PIA's robustness to noise was tested in <i>in-silico</i> experiments with varying signal-to-noise ratio (SNR) conditions, and <i>in-vivo</i> performance for estimating volume fractions was evaluated in 21 patients (mean age 60 (SD:6.6) years; all male) with PCa (<i>n</i> = 71 regions of interest). Evaluation metrics included the intraclass correlation coefficient (ICC) and Pearson correlation coefficient. Results PIA predicted the reference standard tissue parameters with high accuracy, outperforming conventional NLLS methods, especially under noisy conditions (rs = 0.80 versus 0.65, <i>P</i> < .001 for epithelium volume at SNR = 20:1). In <i>in-vivo</i> validation, PIA's noninvasive volume fraction estimates matched quantitative histology (ICC = 0.94, 0.85 and 0.92 for epithelium, stroma, and lumen compartments, respectively, <i>P</i> < .001 for all). PIA's measurements strongly correlated with PCa aggressiveness (r = 0.75, <i>P</i> < .001). Furthermore, PIA ran 10,000 faster than NLLS (0.18 seconds versus 40 minutes per image). Conclusion PIA provided accurate prostate tissue biomarker measurements from MRI data with better robustness to noise and computational efficiency compared with the NLLS algorithm. The results demonstrate the potential of PIA as an accurate, noninvasive, and explainable AI method for PCa detection. ©RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240167"},"PeriodicalIF":8.1,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143190758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
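A physics-informed autoencoder couples a learned encoder to a fixed biophysical decoder, so the reconstruction loss enforces the physics. The sketch below shows only the decoder side: a plausible three-compartment (stroma, epithelium, lumen) diffusion-relaxation forward model for hybrid multidimensional MRI. All b-values, echo times, and compartment D/T2 constants are invented for illustration and are not the study's values:

```python
import numpy as np

# Acquisition grid for hybrid multidimensional MRI: b-values (s/mm^2) and TEs (ms).
b_vals = np.array([0.0, 750.0, 1500.0])
te_vals = np.array([57.0, 100.0, 150.0])

# Hypothetical per-compartment diffusivities (mm^2/s) and T2 values (ms).
D = {"stroma": 1.2e-3, "epithelium": 0.6e-3, "lumen": 2.8e-3}
T2 = {"stroma": 70.0, "epithelium": 45.0, "lumen": 400.0}

def signal(fractions):
    """Forward model: S(b, TE) = sum_i f_i * exp(-b * D_i) * exp(-TE / T2_i)."""
    s = np.zeros((len(b_vals), len(te_vals)))
    for name, f in fractions.items():
        decay_b = np.exp(-b_vals * D[name])[:, None]
        decay_te = np.exp(-te_vals / T2[name])[None, :]
        s += f * decay_b * decay_te
    return s

# In a PIA, an encoder network predicts the volume fractions, this fixed forward
# model reconstructs the measured signal, and reconstruction error is the loss.
print(signal({"stroma": 0.45, "epithelium": 0.40, "lumen": 0.15}).round(3))
```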
A Serial MRI-based Deep Learning Model to Predict Survival in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-01 DOI: 10.1148/ryai.230544
Jia Kou, Jun-Yi Peng, Wen-Bing Lv, Chen-Fei Wu, Zi-Hang Chen, Guan-Qun Zhou, Ya-Qin Wang, Li Lin, Li-Jun Lu, Ying Sun
{"title":"A Serial MRI-based Deep Learning Model to Predict Survival in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.","authors":"Jia Kou, Jun-Yi Peng, Wen-Bing Lv, Chen-Fei Wu, Zi-Hang Chen, Guan-Qun Zhou, Ya-Qin Wang, Li Lin, Li-Jun Lu, Ying Sun","doi":"10.1148/ryai.230544","DOIUrl":"10.1148/ryai.230544","url":null,"abstract":"<p><p>Purpose To develop and evaluate a deep learning-based prognostic model for predicting survival in locoregionally advanced nasopharyngeal carcinoma (LA-NPC) using serial MRI before and after induction chemotherapy (IC). Materials and Methods This multicenter retrospective study included 1039 patients with LA-NPC (779 male and 260 female patients; mean age, 44 years ± 11 [SD]) diagnosed between December 2011 and January 2016. A radiomics-clinical prognostic model (model RC) was developed using pre- and post-IC MRI acquisitions and other clinical factors using graph convolutional neural networks. The concordance index (C-index) was used to evaluate model performance in predicting disease-free survival (DFS). The survival benefits of concurrent chemoradiation therapy (CCRT) were analyzed in model-defined risk groups. Results The C-indexes of model RC for predicting DFS were significantly higher than those of TNM staging in the internal (0.79 vs 0.53) and external (0.79 vs 0.62, both <i>P</i> < .001) testing cohorts. The 5-year DFS for the model RC-defined low-risk group was significantly better than that of the high-risk group (90.6% vs 58.9%, <i>P</i> < .001). In high-risk patients, those who underwent CCRT had a higher 5-year DFS rate than those who did not (58.7% vs 28.6%, <i>P</i> = .03). There was no evidence of a difference in 5-year DFS rate in low-risk patients who did or did not undergo CCRT (91.9% vs 81.3%, <i>P</i> = .19). Conclusion Serial MRI before and after IC can effectively help predict survival in LA-NPC. The radiomics-clinical prognostic model developed using a graph convolutional network-based deep learning method showed good risk discrimination capabilities and may facilitate risk-adapted therapy. <b>Keywords:</b> Nasopharyngeal Carcinoma, Deep Learning, Induction Chemotherapy, Serial MRI, MR Imaging, Radiomics, Prognosis, Radiation Therapy/Oncology, Head/Neck <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230544"},"PeriodicalIF":8.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142984834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
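The C-index reported above is the fraction of comparable patient pairs in which the model ranks risk consistently with the observed disease-free survival times. A minimal pairwise implementation with right censoring follows (toy inputs, not study data):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs where higher predicted risk corresponds
    to earlier observed event; ties in risk count as 0.5."""
    time, event, risk = map(np.asarray, (time, event, risk))
    num = den = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue                # patient i must have an observed event
        for j in range(len(time)):
            if time[j] > time[i]:   # j outlived i: a comparable pair
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# Toy example: months to event/censoring, event indicator, model risk score.
t = [5, 8, 12, 20, 30]
e = [1, 1, 0, 1, 0]
r = [0.9, 0.4, 0.7, 0.5, 0.1]
print(f"C-index = {concordance_index(t, e, r):.2f}")  # 0.75
```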
Evaluating the Impact of Changes in Artificial Intelligence-derived Case Scores over Time on Digital Breast Tomosynthesis Screening Outcomes.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-02-01 DOI: 10.1148/ryai.230597
Samantha P Zuckerman, Senthil Periaswamy, Julie L Shisler, Ameena Elahi, Christine E Edmonds, Jeffrey Hoffmeister, Emily F Conant
{"title":"Evaluating the Impact of Changes in Artificial Intelligence-derived Case Scores over Time on Digital Breast Tomosynthesis Screening Outcomes.","authors":"Samantha P Zuckerman, Senthil Periaswamy, Julie L Shisler, Ameena Elahi, Christine E Edmonds, Jeffrey Hoffmeister, Emily F Conant","doi":"10.1148/ryai.230597","DOIUrl":"10.1148/ryai.230597","url":null,"abstract":"<p><p>Purpose To evaluate the change in digital breast tomosynthesis artificial intelligence (DBT-AI) case scores over sequential screenings. Materials and Methods This retrospective review included 21 108 female patients (mean age ± SD, 58.1 years ± 11.5) with 31 741 DBT screening examinations performed at a single site from February 3, 2020, to September 12, 2022. Among 7000 patients with two or more DBT-AI screenings, 1799 had a 1-year follow-up and were included in the analysis. DBT-AI case scores and differences in case score over time were determined. Case scores ranged from 0 to 100. For each screening outcome (true positive [TP], false positive [FP], true negative [TN], false negative [FN]), mean and median case score change was calculated. Results The highest average case score was seen in TP examinations (average, 75; range, 7-100; <i>n</i> = 41), and the lowest average case score was seen in TN examinations (average, 34; range, 0-100; <i>n</i> = 1640). The largest positive case score change was seen in TP examinations (mean case score change, 21.1; median case score change, 17). FN examinations included mammographically occult cancers diagnosed following supplemental screening and those found at symptomatic diagnostic imaging. Differences between TP and TN mean case score change (<i>P</i> < .001) and between TP and FP mean case score change (<i>P</i> = .02) were statistically significant. Conclusion Using the combination of DBT AI case score with change in case score over time may help radiologists make recall decisions in DBT screening. All studies with high case score and/or case score changes should be carefully scrutinized to maximize screening performance. <b>Keywords:</b> Mammography, Breast, Computer Aided Diagnosis (CAD) <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230597"},"PeriodicalIF":8.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142984862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
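The core analysis here is a per-outcome summary of the case score change between sequential screenings. A minimal pandas sketch follows; every row is invented:

```python
import pandas as pd

# Invented paired screenings: AI case score (0-100) at prior and current rounds.
df = pd.DataFrame({
    "outcome": ["TP", "TP", "FP", "TN", "TN", "FN"],
    "prior_score": [54, 70, 40, 35, 22, 30],
    "current_score": [80, 86, 52, 33, 25, 41],
})
df["score_change"] = df["current_score"] - df["prior_score"]

# Mean and median case score change per screening outcome, as in the abstract.
summary = df.groupby("outcome")["score_change"].agg(["mean", "median", "count"])
print(summary)
```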
Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-01-22 DOI: 10.1148/ryai.230555
José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou
{"title":"Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI.","authors":"José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou","doi":"10.1148/ryai.230555","DOIUrl":"https://doi.org/10.1148/ryai.230555","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To assess the impact of scanner manufacturer and scan protocol on the performance of deep learning models to classify prostate cancer (PCa) aggressiveness on biparametric MRI (bpMRI). Materials and Methods In this retrospective study, 5,478 cases from ProstateNet, a PCa bpMRI dataset with examinations from 13 centers, were used to develop five deep learning (DL) models to predict PCa aggressiveness with minimal lesion information and test how using data from different subgroups-scanner manufacturers and endorectal coil (ERC) use (Siemens, Philips, GE with and without ERC and the full dataset)-impacts model performance. Performance was assessed using the area under the receiver operating characteristic curve (AUC). The impact of clinical features (age, prostate-specific antigen level, Prostate Imaging Reporting and Data System [PI-RADS] score) on model performance was also evaluated. Results DL models were trained on 4,328 bpMRI cases, and the best model achieved AUC = 0.73 when trained and tested using data from all manufacturers. Hold-out test set performance was higher when models trained with data from a manufacturer were tested on the same manufacturer (within-and between-manufacturer AUC differences of 0.05 on average, <i>P</i> < .001). The addition of clinical features did not improve performance (<i>P</i> = .24). Learning curve analyses showed that performance remained stable as training data increased. Analysis of DL features showed that scanner manufacturer and scan protocol heavily influenced feature distributions. Conclusion In automated classification of PCa aggressiveness using bpMRI data, scanner manufacturer and endorectal coil use had a major impact on DL model performance and features. Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230555"},"PeriodicalIF":8.1,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
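The subgroup analysis amounts to a train-on-one-subgroup, test-on-all grid. The sketch below simulates manufacturer-dependent feature shifts so that the within- versus between-subgroup AUC gap appears; subgroup names follow the abstract, a logistic regression stands in for the DL models, and all data are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

subgroups = ["Siemens", "Philips", "GE+ERC", "GE-noERC"]

def make_cohort(seed: int, n: int = 400):
    """Toy features: each subgroup separates aggressive cases along its own
    direction, mimicking scanner-dependent feature distributions."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0, 1, 8)          # subgroup-specific discriminative direction
    y = rng.random(n) < 0.4          # toy aggressiveness labels
    X = rng.normal(0, 1, (n, 8))
    X[y] += 0.6 * w                  # cases shift along this subgroup's direction
    return X, y

cohorts = {name: make_cohort(seed=i) for i, name in enumerate(subgroups)}

# Train on each subgroup, test on every subgroup: diagonal (within-manufacturer)
# AUCs should exceed off-diagonal (between-manufacturer) AUCs.
for train in subgroups:
    Xtr, ytr = cohorts[train]
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    row = {t: roc_auc_score(cohorts[t][1],
                            clf.predict_proba(cohorts[t][0])[:, 1])
           for t in subgroups}
    print(train, {k: round(v, 2) for k, v in row.items()})
```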