Radiology-Artificial Intelligence: Latest Articles

Performance of Lung Cancer Prediction Models for Screening-detected, Incidental, and Biopsied Pulmonary Nodules.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.230506
Thomas Z Li, Kaiwen Xu, Aravind Krishnan, Riqiang Gao, Michael N Kammer, Sanja Antic, David Xiao, Michael Knight, Yency Martinez, Rafael Paez, Robert J Lentz, Stephen Deppen, Eric L Grogan, Thomas A Lasko, Kim L Sandler, Fabien Maldonado, Bennett A Landman
Purpose: To evaluate the performance of eight lung cancer prediction models on patient cohorts with screening-detected, incidentally detected, and bronchoscopically biopsied pulmonary nodules.
Materials and Methods: This study retrospectively evaluated promising predictive models for lung cancer prediction in three clinical settings: lung cancer screening with low-dose CT, incidentally detected pulmonary nodules, and nodules deemed suspicious enough to warrant a biopsy. The area under the receiver operating characteristic curve of eight validated models (logistic regressions on clinical variables and radiologist nodule characterizations, artificial intelligence [AI] on chest CT scans, longitudinal imaging AI, and multimodal approaches) was assessed in nine cohorts (n = 898, 896, 882, 219, 364, 117, 131, 115, 373) from multiple institutions. Each model was implemented from its published literature, and each cohort was curated from primary data sources collected between 2002 and 2021.
Results: No single predictive model emerged as the highest performing across all cohorts, but certain models performed better in specific clinical contexts. Single-time-point chest CT AI performed well for screening-detected nodules but did not generalize well to other clinical settings. Longitudinal imaging and multimodal models demonstrated comparatively good performance on incidentally detected nodules. When applied to biopsied nodules, all models showed low performance.
Conclusion: The eight lung cancer prediction models failed to generalize well across clinical settings and sites outside their training distributions.
Keywords: Diagnosis, Classification, Application Domain, Lung. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Shao and Niu in this issue.
Pages: e230506. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950892/pdf/
Citations: 0
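The study above ranks models by area under the ROC curve (AUC). As a stdlib-only illustration (toy labels and scores, not study data), the AUC can be computed as the Mann-Whitney probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one:

```python
def roc_auc(labels, scores):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie), over all pos-neg pairs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = ties = 0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy "cohort": malignant nodules (label 1) tend to receive higher risk scores.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(round(roc_auc(labels, scores), 3))
```

Computing this per cohort, as the study does, makes the cross-setting comparison explicit: the same model can score well on a screening cohort and poorly on a biopsied-nodule cohort.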
Bridging Artificial Intelligence Models to Clinical Practice: Challenges in Lung Cancer Prediction.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.250080
Xiaonan Shao, Rong Niu
Radiology-Artificial Intelligence, 7(2): e250080.
Citations: 0
Physics-Informed Autoencoder for Prostate Tissue Microstructure Profiling with Hybrid Multidimensional MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.240167
Batuhan Gundogdu, Aritrick Chatterjee, Milica Medved, Ulas Bagci, Gregory S Karczmar, Aytekin Oto
Purpose: To evaluate the performance of Physics-Informed Autoencoder (PIA), a self-supervised deep learning model, in measuring tissue-based biomarkers for prostate cancer (PCa) using hybrid multidimensional MRI.
Materials and Methods: This retrospective study introduces PIA, an emerging self-supervised deep learning model that integrates a three-compartment diffusion-relaxation model with hybrid multidimensional MRI. PIA was trained to encode the biophysical model into a deep neural network to predict measurements of tissue-specific biomarkers for PCa without extensive training data requirements. Comprehensive in silico and in vivo experiments, using histopathology measurements as the reference standard, were conducted to validate the model's efficacy against the traditional nonlinear least squares (NLLS) algorithm. PIA's robustness to noise was tested in silico under varying signal-to-noise ratio (SNR) conditions, and in vivo performance for estimating volume fractions was evaluated in 21 patients (mean age, 60 years ± 6.6 [SD]; all male) with PCa (71 regions of interest). Evaluation metrics included the intraclass correlation coefficient (ICC) and Pearson correlation coefficient.
Results: PIA predicted the reference standard tissue parameters with high accuracy, outperforming conventional NLLS methods, especially under noisy conditions (r_s = 0.80 vs 0.65, P < .001 for epithelium volume at an SNR of 20:1). In in vivo validation, PIA's noninvasive volume fraction estimates matched quantitative histology (ICC, 0.94, 0.85, and 0.92 for the epithelium, stroma, and lumen compartments, respectively; P < .001 for all). PIA's measurements correlated strongly with PCa aggressiveness (r = 0.75, P < .001). Furthermore, PIA ran roughly 10,000 times faster than NLLS (0.18 second vs 40 minutes per image).
Conclusion: PIA provided accurate prostate tissue biomarker measurements from MRI data with better robustness to noise and computational efficiency than the NLLS algorithm. The results demonstrate the potential of PIA as an accurate, noninvasive, and explainable artificial intelligence method for PCa detection.
Keywords: Prostate, Stacked Auto-Encoders, Tissue Characterization, MR-Diffusion-weighted Imaging. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Adams and Bressem in this issue.
Pages: e240167. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950878/pdf/
Citations: 0
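As a hedged sketch of the kind of physics PIA encodes in its decoder: a three-compartment diffusion-relaxation model predicts the normalized hybrid multidimensional MRI signal as a function of b value and echo time, with each compartment contributing a volume fraction weighted by diffusion and T2 decay. The compartment parameter values below are illustrative assumptions, not values from the paper:

```python
import math

# Assumed (not study-reported) parameters per compartment:
#   v: volume fraction, ADC: apparent diffusion coefficient (mm^2/s), T2 (ms)
COMPARTMENTS = {
    "stroma":     (0.45, 1.4e-3,  50.0),
    "epithelium": (0.40, 0.6e-3,  70.0),
    "lumen":      (0.15, 2.5e-3, 400.0),
}

def signal(b, te):
    """Normalized signal S(b, TE)/S0 = sum_i v_i * exp(-b*ADC_i) * exp(-TE/T2_i)."""
    return sum(v * math.exp(-b * adc) * math.exp(-te / t2)
               for v, adc, t2 in COMPARTMENTS.values())

# At b = 0 and TE = 0 the signal reduces to the sum of volume fractions (1.0);
# diffusion weighting attenuates the fast-diffusing lumen compartment most.
print(round(signal(0, 0), 6))       # -> 1.0
print(signal(1500, 0) < signal(0, 0))  # -> True
```

PIA's contribution is learning to invert this forward model from data; a conventional NLLS fit would instead optimize the nine parameters per voxel, which is what makes it orders of magnitude slower.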
2024 Manuscript Reviewers: A Note of Thanks.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.250163
Umar Mahmood, Charles E Kahn
Radiology-Artificial Intelligence, 7(2): e250163.
Citations: 0
A Serial MRI-based Deep Learning Model to Predict Survival in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-02-01 DOI: 10.1148/ryai.230544
Jia Kou, Jun-Yi Peng, Wen-Bing Lv, Chen-Fei Wu, Zi-Hang Chen, Guan-Qun Zhou, Ya-Qin Wang, Li Lin, Li-Jun Lu, Ying Sun
Purpose: To develop and evaluate a deep learning-based prognostic model for predicting survival in locoregionally advanced nasopharyngeal carcinoma (LA-NPC) using serial MRI before and after induction chemotherapy (IC).
Materials and Methods: This multicenter retrospective study included 1039 patients with LA-NPC (779 male and 260 female patients; mean age, 44 years ± 11 [SD]) diagnosed between December 2011 and January 2016. A radiomics-clinical prognostic model (model RC) was developed from pre- and post-IC MRI acquisitions and other clinical factors using graph convolutional neural networks. The concordance index (C-index) was used to evaluate model performance in predicting disease-free survival (DFS). The survival benefits of concurrent chemoradiation therapy (CCRT) were analyzed in model-defined risk groups.
Results: The C-indexes of model RC for predicting DFS were significantly higher than those of TNM staging in the internal (0.79 vs 0.53) and external (0.79 vs 0.62; both P < .001) testing cohorts. The 5-year DFS of the model RC-defined low-risk group was significantly better than that of the high-risk group (90.6% vs 58.9%, P < .001). Among high-risk patients, those who underwent CCRT had a higher 5-year DFS rate than those who did not (58.7% vs 28.6%, P = .03). There was no evidence of a difference in 5-year DFS rate between low-risk patients who did or did not undergo CCRT (91.9% vs 81.3%, P = .19).
Conclusion: Serial MRI before and after IC can effectively help predict survival in LA-NPC. The radiomics-clinical prognostic model developed using a graph convolutional network-based deep learning method showed good risk discrimination and may facilitate risk-adapted therapy.
Keywords: Nasopharyngeal Carcinoma, Deep Learning, Induction Chemotherapy, Serial MRI, MR Imaging, Radiomics, Prognosis, Radiation Therapy/Oncology, Head/Neck. Supplemental material is available for this article. © RSNA, 2025.
Pages: e230544.
Citations: 0
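The C-index used above to compare model RC with TNM staging is the fraction of usable patient pairs in which the higher predicted risk goes to the patient who progressed earlier; a pair is usable only if the patient with the shorter follow-up actually had an event. A stdlib-only sketch with invented data (not study data):

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data."""
    conc = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so subject a has the shorter follow-up time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # unusable: tied times, or the earlier subject was censored
            usable += 1
            if risks[a] > risks[b]:
                conc += 1
            elif risks[a] == risks[b]:
                ties += 1
    return (conc + 0.5 * ties) / usable

# Toy data: a perfectly discriminating model gives higher risk to earlier events.
times  = [2, 5, 7, 9]       # years of follow-up
events = [1, 1, 0, 1]       # 1 = disease progression observed, 0 = censored
risks  = [0.9, 0.7, 0.2, 0.4]
print(round(c_index(times, events, risks), 3))
```

A C-index of 0.5 corresponds to random ranking, which is why the reported 0.79 vs 0.53 gap between model RC and TNM staging in the internal cohort is substantial.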
Evaluating the Impact of Changes in Artificial Intelligence-derived Case Scores over Time on Digital Breast Tomosynthesis Screening Outcomes.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-02-01 DOI: 10.1148/ryai.230597
Samantha P Zuckerman, Senthil Periaswamy, Julie L Shisler, Ameena Elahi, Christine E Edmonds, Jeffrey Hoffmeister, Emily F Conant
Purpose: To evaluate the change in digital breast tomosynthesis artificial intelligence (DBT-AI) case scores over sequential screenings.
Materials and Methods: This retrospective review included 21,108 female patients (mean age ± SD, 58.1 years ± 11.5) with 31,741 DBT screening examinations performed at a single site from February 3, 2020, to September 12, 2022. Among 7000 patients with two or more DBT-AI screenings, 1799 had a 1-year follow-up and were included in the analysis. DBT-AI case scores (range, 0-100) and differences in case score over time were determined. For each screening outcome (true positive [TP], false positive [FP], true negative [TN], false negative [FN]), the mean and median case score changes were calculated.
Results: The highest average case score was seen in TP examinations (average, 75; range, 7-100; n = 41), and the lowest in TN examinations (average, 34; range, 0-100; n = 1640). The largest positive case score change was seen in TP examinations (mean change, 21.1; median change, 17). FN examinations included mammographically occult cancers diagnosed following supplemental screening and those found at symptomatic diagnostic imaging. Differences between TP and TN mean case score changes (P < .001) and between TP and FP mean case score changes (P = .02) were statistically significant.
Conclusion: Combining the DBT-AI case score with its change over time may help radiologists make recall decisions in DBT screening. All studies with a high case score and/or case score change should be carefully scrutinized to maximize screening performance.
Keywords: Mammography, Breast, Computer Aided Diagnosis (CAD). Supplemental material is available for this article. © RSNA, 2025.
Pages: e230597. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950889/pdf/
Citations: 0
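The per-outcome summary reported above (mean and median case score change by TP/FP/TN/FN) can be reproduced with a short stdlib-only sketch; the screening records below are invented for illustration, not study data:

```python
from collections import defaultdict
from statistics import mean, median

# Invented records: (outcome, prior case score, current case score), scores 0-100.
records = [
    ("TP", 55, 80), ("TP", 60, 75),
    ("TN", 30, 28), ("TN", 25, 31), ("TN", 40, 35),
    ("FP", 45, 62),
]

# Group the change in case score (current minus prior screening) by outcome.
changes = defaultdict(list)
for outcome, prior, current in records:
    changes[outcome].append(current - prior)

for outcome in sorted(changes):
    deltas = changes[outcome]
    print(outcome, "mean:", mean(deltas), "median:", median(deltas))
```

On real data this grouping is what surfaces the study's finding: TP examinations show a large positive mean change while TN examinations cluster near zero.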
RSNA 2023 Abdominal Trauma AI Challenge: Review and Outcomes.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240334
Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak
Purpose: To evaluate the performance of the winning machine learning models from the 2023 RSNA Abdominal Trauma Detection AI Challenge.
Materials and Methods: The competition was hosted on Kaggle and took place between July 26 and October 15, 2023. The multicenter competition dataset consisted of 4274 abdominal trauma CT scans in which solid organs (liver, spleen, and kidneys) were annotated as healthy, low-grade injury, or high-grade injury. Studies were labeled as positive or negative for the presence of bowel and mesenteric injury and active extravasation. In this study, the performances of the eight award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging performance across all models for each injury type.
Results: The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range, 0.90-0.94) for liver, 0.91 (range, 0.87-0.93) for splenic, and 0.94 (range, 0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range, 0.96-0.98) for high-grade liver, 0.98 (range, 0.97-0.99) for high-grade splenic, and 0.98 (range, 0.97-0.98) for high-grade kidney injuries. For the detection of bowel and mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range, 0.74-0.93) and 0.85 (range, 0.79-0.89), respectively.
Conclusion: The award-winning models from the artificial intelligence challenge demonstrated strong performance in detecting traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms.
Keywords: Abdominal Trauma, CT, American Association for the Surgery of Trauma, Machine Learning, Artificial Intelligence. Supplemental material is available for this article. © RSNA, 2024.
Pages: e240334.
Citations: 0
Deep Learning Applied to Diffusion-weighted Imaging for Differentiating Malignant from Benign Breast Tumors without Lesion Segmentation.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240206
Mami Iima, Ryosuke Mizuno, Masako Kataoka, Kazuki Tsuji, Toshiki Yamazaki, Akihiko Minami, Maya Honda, Keiho Imanishi, Masahiro Takada, Yuji Nakamoto
Purpose: To evaluate and compare the performance of different artificial intelligence (AI) models in differentiating between benign and malignant breast tumors at diffusion-weighted imaging (DWI), including comparison with radiologist assessments.
Materials and Methods: In this retrospective study, patients with breast lesions underwent 3-T breast MRI from May 2019 to March 2022. In addition to T1-weighted, T2-weighted, and contrast-enhanced imaging, DWI was performed with five b values (0, 200, 800, 1000, and 1500 sec/mm²). DWI data, split into training, tuning, and test sets, were used to develop and assess AI models, including a small two-dimensional (2D) convolutional neural network (CNN), ResNet-18, EfficientNet-B0, and a three-dimensional (3D) CNN. The performance of the DWI-based models in differentiating between benign and malignant breast tumors was compared with that of radiologists assessing standard breast MR images, with diagnostic performance assessed using receiver operating characteristic analysis. The study also examined the effect of data augmentation on model performance (augmentation A: random elastic deformation; augmentation B: random affine transformation and random noise; augmentation C: mixup).
Results: A total of 334 breast lesions in 293 patients (mean age, 54.9 years ± 14.3 [SD]; all female) were analyzed. The 2D CNN models outperformed the 3D CNN on the test dataset (area under the receiver operating characteristic curve [AUC] across data augmentation methods: range, 0.83-0.88 vs 0.75-0.76). There was no evidence of a difference in performance between the small 2D CNN with augmentations A and B (AUC, 0.88) and the radiologists (AUC, 0.86) on the test dataset (P = .64). Comparing the small 2D CNN with the radiologists, there was no evidence of a difference in specificity (81.4% vs 72.1%, P = .64) or sensitivity (85.9% vs 98.8%, P = .64).
Conclusion: AI models, particularly a small 2D CNN, showed good performance in differentiating between malignant and benign breast tumors using DWI, without needing manual segmentation.
Keywords: MR Imaging, Breast, Comparative Studies, Feature Detection, Diagnosis. Supplemental material is available for this article. © RSNA, 2024.
Pages: e240206.
Citations: 0
Combining Biology-based and MRI Data-driven Modeling to Predict Response to Neoadjuvant Chemotherapy in Patients with Triple-Negative Breast Cancer.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240124
Casey E Stowers, Chengyue Wu, Zhan Xu, Sidharth Kumar, Clinton Yam, Jong Bum Son, Jingfei Ma, Jonathan I Tamir, Gaiane M Rauch, Thomas E Yankeelov
Purpose: To combine deep learning and biology-based modeling to predict the response of locally advanced, triple-negative breast cancer before initiating neoadjuvant chemotherapy (NAC).
Materials and Methods: In this retrospective study, a biology-based mathematical model of tumor response to NAC was constructed and calibrated on a patient-specific basis using imaging data from patients enrolled in the MD Anderson A Robust TNBC Evaluation FraMework to Improve Survival trial (ARTEMIS; ClinicalTrials.gov registration no. NCT02276443) between April 2018 and May 2021. A convolutional neural network (CNN) was employed to relate the calibrated parameters of the biology-based model to pretreatment MRI data. The CNN predictions of the calibrated model parameters were used to estimate tumor response at the end of NAC. CNN performance in estimating total tumor volume (TTV), total tumor cellularity (TTC), and tumor status was evaluated. Model-predicted TTC and TTV measurements were compared with MRI-based measurements using the concordance correlation coefficient and the area under the receiver operating characteristic curve (for predicting pathologic complete response at the end of NAC).
Results: The study included 118 female patients (median age, 51 years [range, 29-78 years]). Comparing CNN-predicted with measured change in TTC and TTV over the course of NAC, the concordance correlation coefficient values were 0.95 (95% CI: 0.90, 0.98) and 0.94 (95% CI: 0.87, 0.97), respectively. CNN-predicted TTC and TTV had areas under the receiver operating characteristic curve of 0.72 (95% CI: 0.34, 0.94) and 0.72 (95% CI: 0.40, 0.95), respectively, for predicting tumor status at the time of surgery.
Conclusion: Deep learning integrated with a biology-based mathematical model showed good performance in predicting the spatial and temporal evolution of a patient's tumor during NAC using only pre-NAC MRI data.
Keywords: Triple-Negative Breast Cancer, Neoadjuvant Chemotherapy, Convolutional Neural Network, Biology-based Mathematical Model. Supplemental material is available for this article. Clinical trial registration no. NCT02276443. © RSNA, 2024. See also the commentary by Mei and Huang in this issue.
Pages: e240124. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11791743/pdf/
Citations: 0
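The agreement metric used above, Lin's concordance correlation coefficient, penalizes both scatter and systematic offset between predicted and measured values, unlike plain Pearson correlation. A stdlib-only sketch with invented tumor-volume pairs (not study data):

```python
from statistics import mean

def ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (1/n) variance and covariance."""
    n = len(x)
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Invented predicted-vs-measured tumor volumes; perfect agreement gives 1.
pred     = [10.0, 12.0, 8.0, 15.0]
measured = [10.5, 11.5, 8.5, 14.5]
print(round(ccc(pred, measured), 3))
```

A systematic bias (e.g., predictions uniformly 20% high) lowers the CCC even when the Pearson correlation stays at 1, which is why it is the appropriate choice for the TTV and TTC comparisons above.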
Using Artificial Intelligence to Improve Diagnosis of Unruptured Intracranial Aneurysms.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240696
Shuncong Wang
Radiology-Artificial Intelligence, 7(1): e240696.
Citations: 0
Contact: info@booksci.cn. Book学术 (booksci.cn) provides free academic search services, helping scholars in China and abroad find Chinese- and English-language literature. Copyright © 2023 Book学术. All rights reserved.