Latest Articles from Radiology: Artificial Intelligence (IF 8.1)

AI-integrated Screening to Replace Double Reading of Mammograms: A Population-wide Accuracy and Feasibility Study.
Pub Date: 2024-09-04 | DOI: 10.1148/ryai.230529
Mohammad T Elhakim, Sarah W Stougaard, Ole Graumann, Mads Nielsen, Oke Gerke, Lisbet B Larsen, Benjamin S B Rasmussen
{"title":"AI-integrated Screening to Replace Double Reading of Mammograms: A Population-wide Accuracy and Feasibility Study.","authors":"Mohammad T Elhakim, Sarah W Stougaard, Ole Graumann, Mads Nielsen, Oke Gerke, Lisbet B Larsen, Benjamin S B Rasmussen","doi":"10.1148/ryai.230529","DOIUrl":"https://doi.org/10.1148/ryai.230529","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Mammography screening supported by deep learning-based artificial intelligence (AI) solutions can potentially reduce workload without compromising breast cancer detection accuracy, but the site of deployment in the workflow might be crucial. This retrospective study compared three simulated AI-integrated screening scenarios with standard double reading with arbitration in a sample of 249,402 mammograms from a representative screening population. A commercial AI system replaced the first reader (Scenario 1: Integrated AI<sub>first</sub>), the second reader (Scenario 2: Integrated AI<sub>second</sub>), or both readers for triaging of low- and high-risk cases (Integrated AI<sub>triage</sub>). AI threshold values were partly chosen based on previous validation and fixing screen-read volume reduction at approximately 50% across scenarios. Detection accuracy measures were calculated. Compared with standard double reading, Integrated AI<sub>first</sub> showed no evidence of a difference in accuracy metrics except for a higher arbitration rate (+0.99%; <i>P</i> < .001). Integrated AI<sub>second</sub> had lower sensitivity (-1.58%; <i>P</i> < 0.001), negative predictive value (NPV) (- 0.01%; <i>P</i> < .001) and recall rate (< 0.06%; <i>P</i> = 0.04), but a higher positive predictive value (PPV) (+0.03%; <i>P</i> < .001) and arbitration rate (+1.22%; <i>P</i> < .001). Integrated AI<sub>triage</sub> achieved higher sensitivity (+1.33%; <i>P</i> < .001), PPV (+0.36%; <i>P</i> = .03), and NPV (+0.01%; <i>P</i> < .001) but lower arbitration rate (-0.88%; <i>P</i> < .001). Replacing one or both readers with AI seems feasible, however, the site of application in the workflow can have clinically relevant effects on accuracy and workload. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142126863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Precise Image-level Localization of Intracranial Hemorrhage on Head CT Scans with Deep Learning Models Trained on Study-level Labels.
Pub Date: 2024-08-28 | DOI: 10.1148/ryai.230296
Yunan Wu, Michael Iorga, Suvarna Badhe, James Zhang, Donald R Cantrell, Elaine J Tanhehco, Nicholas Szrama, Andrew M Naidech, Michael Drakopoulos, Shamis T Hasan, Kunal M Patel, Tarek A Hijaz, Eric J Russell, Shamal Lalvani, Amit Adate, Todd B Parrish, Aggelos K Katsaggelos, Virginia B Hill
{"title":"Precise Image-level Localization of Intracranial Hemorrhage on Head CT Scans with Deep Learning Models Trained on Study-level Labels.","authors":"Yunan Wu, Michael Iorga, Suvarna Badhe, James Zhang, Donald R Cantrell, Elaine J Tanhehco, Nicholas Szrama, Andrew M Naidech, Michael Drakopoulos, Shamis T Hasan, Kunal M Patel, Tarek A Hijaz, Eric J Russell, Shamal Lalvani, Amit Adate, Todd B Parrish, Aggelos K Katsaggelos, Virginia B Hill","doi":"10.1148/ryai.230296","DOIUrl":"https://doi.org/10.1148/ryai.230296","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a highly generalizable weakly supervised model to automatically detect and localize image- level intracranial hemorrhage (ICH) using study-level labels. Materials and Methods In this retrospective study, the proposed model was pretrained on the image-level RSNA dataset and fine-tuned on a local dataset using attention-based bidirectional long-short-term memory networks. This local training dataset included 10,699 noncontrast head CT scans from 7469 patients with ICH study-level labels extracted from radiology reports. Model performance was compared with that of two senior neuroradiologists on 100 random test scans using the McNemar test, and its generalizability was evaluated on an external independent dataset. Results The model achieved a positive predictive value (PPV) of 85.7% (95% CI: [84.0%, 87.4%]) and an AUC of 0.96 (95% CI: [0.96, 0.97]) on the held-out local test set (<i>n</i> = 7243, 3721 female) and 89.3% (95% CI: [87.8%, 90.7%]) and 0.96 (95% CI: [0.96, 0.97]), respectively, on the external test set (<i>n</i> = 491, 178 female). For 100 randomly selected samples, the model achieved performance on par with two neuroradiologists, but with a significantly faster (<i>P</i> < .05) diagnostic time of 5.04 seconds per scan (versus 86 seconds and 22.2 seconds for the two neuroradiologists, respectively). The model's attention weights and heatmaps visually aligned with neuroradiologists' interpretations. Conclusion The proposed model demonstrated high generalizability and high PPVs, offering a valuable tool for expedited ICH detection and prioritization while reducing false-positive interruptions in radiologists' workflows. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142081915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

nnU-Net-based Segmentation of Tumor Subcompartments in Pediatric Medulloblastoma Using Multiparametric MRI: A Multi-institutional Study.
Pub Date: 2024-08-21 | DOI: 10.1148/ryai.230115
Rohan Bareja, Marwa Ismail, Douglas Martin, Ameya Nayate, Ipsa Yadav, Murad Labbad, Prateek Dullur, Sanya Garg, Benita Tamrazi, Ralph Salloum, Ashley Margol, Alexander Judkins, Sukanya Raj Iyer, Peter de Blank, Pallavi Tiwari
{"title":"Nn-Unet-based Segmentation of Tumor Subcompartments in Pediatric Medulloblastoma Using Multiparametric MRI: A Multi-institutional Study.","authors":"Rohan Bareja, Marwa Ismail, Douglas Martin, Ameya Nayate, Ipsa Yadav, Murad Labbad, Prateek Dullur, Sanya Garg, Benita Tamrazi, Ralph Salloum, Ashley Margol, Alexander Judkins, Sukanya Raj Iyer, Peter de Blank, Pallavi Tiwari","doi":"10.1148/ryai.230115","DOIUrl":"https://doi.org/10.1148/ryai.230115","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To evaluate nn-Unet-based segmentation models for automated delineation of medulloblastoma (MB) tumors on multi-institutional MRI scans. Materials and Methods This retrospective study included 78 pediatric patients (52 male, 26 female), with ages ranging from 2-18 years, with MB tumors from three different sites (28 from Hospital A, 18 from Hospital B, 32 from Hospital C), who had data from three clinical MRI protocols (gadolinium-enhanced T1-weighted, T2-weighted, FLAIR) available. The scans were retrospectively collected from the year 2000 until May 2019. Reference standard annotations of the tumor habitat, including enhancing tumor, edema, and cystic core + nonenhancing tumor subcompartments, were performed by two experienced neuroradiologists. Preprocessing included registration to age-appropriate atlases, skull stripping, bias correction, and intensity matching. The two models were trained as follows: (1) transfer learning nn-Unet model was pretrained on an adult glioma cohort (<i>n</i> = 484) and fine-tuned on MB studies using Models Genesis, and (2) direct deep learning nn-Unet model was trained directly on the MB datasets, across five-fold cross-validation. Model robustness was evaluated on the three datasets when using different combinations of training and test sets, with data from 2 sites at a time used for training and data from the third site used for testing. Results Analysis on the 3 test sites yielded Dice scores of 0.81, 0.86, 0.86 and 0.80, 0.86, 0.85 for tumor habitat; 0.68, 0.84, 0.77 and 0.67, 0.83, 0.76 for enhancing tumor; 0.56, 0.71, 0.69 and 0.56, 0.71, 0.70 for edema; and 0.32, 0.48, 0.43 and 0.29, 0.44, 0.41 for cystic core + nonenhancing tumor for the transfer learning-and direct-nn-Unet models, respectively. The models were largely robust to site-specific variations. Conclusion nn-Unet segmentation models hold promise for accurate, robust automated delineation of MB tumor subcompartments, potentially leading to more effective radiation therapy planning in pediatric MB. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets.
Pub Date: 2024-08-21 | DOI: 10.1148/ryai.230521
Hao Li, Han Liu, Heinrich von Busch, Robert Grimm, Henkjan Huisman, Angela Tong, David Winkel, Tobias Penzkofer, Ivan Shabunin, Moon Hyung Choi, Qingsong Yang, Dieter Szolar, Steven Shea, Fergus Coakley, Mukesh Harisinghani, Ipek Oguz, Dorin Comaniciu, Ali Kamen, Bin Lou
{"title":"Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets.","authors":"Hao Li, Han Liu, Heinrich von Busch, Robert Grimm, Henkjan Huisman, Angela Tong, David Winkel, Tobias Penzkofer, Ivan Shabunin, Moon Hyung Choi, Qingsong Yang, Dieter Szolar, Steven Shea, Fergus Coakley, Mukesh Harisinghani, Ipek Oguz, Dorin Comaniciu, Ali Kamen, Bin Lou","doi":"10.1148/ryai.230521","DOIUrl":"https://doi.org/10.1148/ryai.230521","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To determine whether the unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite bp-MRI datasets. Materials and Methods This retrospective study included data from 5,150 patients (14,191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bp-MRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual DW images acquired using various b-values, to align with the style of images acquired using b-values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1,692 test cases (2,393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for baseline SL and UDA methods were 0.73 and 0.79 (<i>P</i> < .001), respectively, for PI-RADS ≥ 3, and 0.77 and 0.80 (<i>P</i> < .001) for PI-RADS ≥ 4 PCa lesions. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (<i>P</i> < .001) for PI-RADS ≥ 3, and 0.50 and 0.77 (<i>P</i> < .001) for PI-RADS ≥ 4 PCa lesions. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS recommended DWI protocol (eg, with an extremely high b-value). ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Improving Fairness of Automated Chest Radiograph Diagnosis by Contrastive Learning.
Pub Date: 2024-08-21 | DOI: 10.1148/ryai.230342
Mingquan Lin, Tianhao Li, Zhaoyi Sun, Gregory Holste, Ying Ding, Fei Wang, George Shih, Yifan Peng
{"title":"Improving Fairness of Automated Chest Radiograph Diagnosis by Contrastive Learning.","authors":"Mingquan Lin, Tianhao Li, Zhaoyi Sun, Gregory Holste, Ying Ding, Fei Wang, George Shih, Yifan Peng","doi":"10.1148/ryai.230342","DOIUrl":"https://doi.org/10.1148/ryai.230342","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop an artificial intelligence model that utilizes supervised contrastive learning to minimize bias in chest radiograph (CXR) diagnosis. Materials and Methods In this retrospective study, the proposed method was evaluated on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77,887 CXRs from 27,796 patients collected as of April 20, 2023 for COVID-19 diagnosis, and the NIH Chest x-ray 14 (NIH-CXR) dataset with 112,120 CXRs from 30,805 patients collected between 1992 and 2015. In the NIH-CXR dataset, thoracic abnormalities included atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, or hernia. The proposed method utilized supervised contrastive learning with carefully selected positive and negative samples to generate fair image embeddings, which were fine-tuned for subsequent tasks to reduce bias in CXR diagnosis. The method was evaluated using the marginal area under the receiver operating characteristic curve (AUC) difference (ΔmAUC). Results The proposed model showed a significant decrease in bias across all subgroups compared with the baseline models, as evidenced by a paired T-test (<i>P</i> < .001). The ΔmAUCs obtained by the proposed method were 0.01 (95% CI, 0.01-0.01), 0.21 (95% CI, 0.21-0.21), and 0.10 (95% CI, 0.10-0.10) for sex, race, and age subgroups, respectively, on MIDRC, and 0.01 (95% CI, 0.01-0.01) and 0.05 (95% CI, 0.05-0.05) for sex and age subgroups, respectively, on NIH-CXR. Conclusion Employing supervised contrastive learning can mitigate bias in CXR diagnosis, addressing concerns of fairness and reliability in deep learning-based diagnostic methods. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Deep Learning Segmentation of Infiltrative and Enhancing Cellular Tumor on Pre- and Posttreatment Multishell Diffusion MRI of Glioblastoma.
Pub Date: 2024-08-21 | DOI: 10.1148/ryai.230489
Louis Gagnon, Diviya Gupta, George Mastorakos, Nathan White, Vanessa Goodwill, Carrie R McDonald, Thomas Beaumont, Christopher Conlin, Tyler M Seibert, Uyen Nguyen, Jona Hattangadi-Gluth, Santosh Kesari, Jessica D Schulte, David Piccioni, Kathleen M Schmainda, Nikdokht Farid, Anders M Dale, Jeffrey D Rudie
{"title":"Deep Learning Segmentation of Infiltrative and Enhancing Cellular Tumor on Pre- and Posttreatment Multishell Diffusion MRI of Glioblastoma.","authors":"Louis Gagnon, Diviya Gupta, George Mastorakos, Nathan White, Vanessa Goodwill, Carrie R McDonald, Thomas Beaumont, Christopher Conlin, Tyler M Seibert, Uyen Nguyen, Jona Hattangadi-Gluth, Santosh Kesari, Jessica D Schulte, David Piccioni, Kathleen M Schmainda, Nikdokht Farid, Anders M Dale, Jeffrey D Rudie","doi":"10.1148/ryai.230489","DOIUrl":"10.1148/ryai.230489","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To develop and validate a deep learning (DL) method to detect and segment enhancing and nonenhancing cellular tumor on pre- and posttreatment MRI scans of patients with glioblastoma and to predict overall survival (OS) and progression-free survival (PFS). Materials and Methods This retrospective study included 1397 MRIs in 1297 patients with glioblastoma, including an internal cohort of 243 MRIs (January 2010-June 2022) for model training and cross-validation and four external test cohorts. Cellular tumor maps were segmented by two radiologists based on imaging, clinical history, and pathology. Multimodal MRI with perfusion and multishell diffusion imaging were inputted into a nnU-Net DL model to segment cellular tumor. Segmentation performance (Dice score) and performance in detecting recurrent tumor from posttreatment changes (area under the receiver operating characteristic curve [AUC]) were quantified. Model performance in predicting OS and PFS was assessed using Cox multivariable analysis. Results A cohort of 178 patients (mean age, 56 years ± [SD]13; 121 male, 57 female) with 243 MRI timepoints, as well as four external datasets with 55, 70, 610 and 419 MRI timepoints, respectively, were evaluated. The median Dice score was 0.79 (IQR:0.53-0.89) and the AUC for detecting residual/recurrent tumor was 0.84 (95% CI:0.79- 0.89). In the internal test set, estimated cellular tumor volume was significantly associated with OS (hazard ratio [HR] = 1.04/mL, <i>P</i> < .001) and PFS (HR = 1.04/mL, <i>P</i> < .001) when adjusting for age, sex and gross total resection status. In the external test sets, estimated cellular tumor volume was significantly associated with OS (HR = 1.01/mL, <i>P</i> < .001) when adjusting for age, sex and gross total resection status. Conclusion A DL model incorporating advanced imaging could accurately segment enhancing and nonenhancing cellular tumor, classify recurrent/residual tumor from posttreatment changes, and predict OS and PFS in patients with glioblastoma. 
©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142018897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
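
The survival analysis above is a multivariable Cox proportional hazards model with tumor volume adjusted for age, sex, and gross total resection status. A minimal sketch with the lifelines package on synthetic data; column names and values are illustrative, not the study's data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
tumor_ml = rng.gamma(2.0, 8.0, n)  # synthetic cellular tumor volumes (mL)
df = pd.DataFrame({
    "tumor_ml": tumor_ml,
    "age": rng.normal(56, 13, n),
    "male": rng.integers(0, 2, n),
    "gtr": rng.integers(0, 2, n),  # gross total resection (0/1)
    # synthetic survival that worsens with volume, for illustration only
    "os_months": rng.exponential(24 * np.exp(-0.03 * tumor_ml)),
    "os_event": rng.integers(0, 2, n),  # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="os_event")
print(cph.summary[["exp(coef)", "p"]])  # exp(coef) for tumor_ml is the per-mL HR
```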

Improving Computer-aided Detection for Digital Breast Tomosynthesis by Incorporating Temporal Change.
Pub Date: 2024-08-14 | DOI: 10.1148/ryai.230391
Yinhao Ren, Zisheng Liang, Jun Ge, Xiaoming Xu, Jonathan Go, Derek L Nguyen, Joseph Y Lo, Lars J Grimm
{"title":"Improving Computer-aided Detection for Digital Breast Tomosynthesis by Incorporating Temporal Change.","authors":"Yinhao Ren, Zisheng Liang, Jun Ge, Xiaoming Xu, Jonathan Go, Derek L Nguyen, Joseph Y Lo, Lars J Grimm","doi":"10.1148/ryai.230391","DOIUrl":"https://doi.org/10.1148/ryai.230391","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a deep learning algorithm that uses temporal information to improve the performance of a previously published framework of cancer lesion detection for digital breast tomosynthesis (DBT). Materials and Methods This retrospective study analyzed the current and the 1-year prior Hologic DBT screening examinations from 8 different institutions between 2016 to 2020. The dataset contained 973 cancer and 7123 noncancer cases. The front-end of this algorithm was an existing deep learning framework that performed singleview lesion detection followed by ipsilateral view matching. For this study, PriorNet was implemented as a cascaded deep learning module that used the additional growth information to refine the final probability of malignancy. Data from seven of the eight sites were used for training and validation, while the eighth site was reserved for external testing. Model performance was evaluated using localization receiver operating characteristic (ROC) curves. Results On the validation set, PriorNet showed an area under the ROC curve (AUC) of 0.931 (95% CI 0.930- 0.931), which outperformed both baseline models using single-view detection (AUC, 0.892 (95% CI 0.891-0.892), <i>P</i> < .001) and ipsilateral matching (AUC, 0.915 (95% CI 0.914-0.915), <i>P</i> < .001). On the external test set, PriorNet achieved an AUC of 0.896 (95% CI 0.885-0.896), outperforming both baselines (AUCs, 0.846 (95% CI 0.846-0.847, <i>P</i> < .001) and 0.865 (95% CI 0.865-0.866) <i>P</i> < .001, respectively). In the high sensitivity range of 0.9 to 1.0, the partial AUC of PriorNet was significantly higher (<i>P</i> < .001) relative to both baselines. Conclusion PriorNet using temporal information further improved the breast cancer detection performance of an existing DBT cancer detection framework. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141976812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports.
Pub Date: 2024-07-01 | DOI: 10.1148/ryai.230364
Bastien Le Guellec, Alexandre Lefèvre, Charlotte Geay, Lucas Shorten, Cyril Bruge, Lotfi Hacein-Bey, Philippe Amouyel, Jean-Pierre Pruvo, Gregory Kuchcinski, Aghiles Hamroun
{"title":"Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports.","authors":"Bastien Le Guellec, Alexandre Lefèvre, Charlotte Geay, Lucas Shorten, Cyril Bruge, Lotfi Hacein-Bey, Philippe Amouyel, Jean-Pierre Pruvo, Gregory Kuchcinski, Aghiles Hamroun","doi":"10.1148/ryai.230364","DOIUrl":"10.1148/ryai.230364","url":null,"abstract":"<p><p>Purpose To assess the performance of a local open-source large language model (LLM) in various information extraction tasks from real-life emergency brain MRI reports. Materials and Methods All consecutive emergency brain MRI reports written in 2022 from a French quaternary center were retrospectively reviewed. Two radiologists identified MRI scans that were performed in the emergency department for headaches. Four radiologists scored the reports' conclusions as either normal or abnormal. Abnormalities were labeled as either headache-causing or incidental. Vicuna (LMSYS Org), an open-source LLM, performed the same tasks. Vicuna's performance metrics were evaluated using the radiologists' consensus as the reference standard. Results Among the 2398 reports during the study period, radiologists identified 595 that included headaches in the indication (median age of patients, 35 years [IQR, 26-51 years]; 68% [403 of 595] women). A positive finding was reported in 227 of 595 (38%) cases, 136 of which could explain the headache. The LLM had a sensitivity of 98.0% (95% CI: 96.5, 99.0) and specificity of 99.3% (95% CI: 98.8, 99.7) for detecting the presence of headache in the clinical context, a sensitivity of 99.4% (95% CI: 98.3, 99.9) and specificity of 98.6% (95% CI: 92.2, 100.0) for the use of contrast medium injection, a sensitivity of 96.0% (95% CI: 92.5, 98.2) and specificity of 98.9% (95% CI: 97.2, 99.7) for study categorization as either normal or abnormal, and a sensitivity of 88.2% (95% CI: 81.6, 93.1) and specificity of 73% (95% CI: 62, 81) for causal inference between MRI findings and headache. Conclusion An open-source LLM was able to extract information from free-text radiology reports with excellent accuracy without requiring further training. <b>Keywords:</b> Large Language Model (LLM), Generative Pretrained Transformers (GPT), Open Source, Information Extraction, Report, Brain, MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also the commentary by Akinci D'Antonoli and Bluethgen in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294959/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Deep Learning for Breast Cancer Risk Prediction: Application to a Large Representative UK Screening Cohort.
Pub Date: 2024-07-01 | DOI: 10.1148/ryai.230431
Sam Ellis, Sandra Gomes, Matthew Trumble, Mark D Halling-Brown, Kenneth C Young, Nouman S Chaudhry, Peter Harris, Lucy M Warren
{"title":"Deep Learning for Breast Cancer Risk Prediction: Application to a Large Representative UK Screening Cohort.","authors":"Sam Ellis, Sandra Gomes, Matthew Trumble, Mark D Halling-Brown, Kenneth C Young, Nouman S Chaudhry, Peter Harris, Lucy M Warren","doi":"10.1148/ryai.230431","DOIUrl":"10.1148/ryai.230431","url":null,"abstract":"<p><p>Purpose To develop an artificial intelligence (AI) deep learning tool capable of predicting future breast cancer risk from a current negative screening mammographic examination and to evaluate the model on data from the UK National Health Service Breast Screening Program. Materials and Methods The OPTIMAM Mammography Imaging Database contains screening data, including mammograms and information on interval cancers, for more than 300 000 female patients who attended screening at three different sites in the United Kingdom from 2012 onward. Cancer-free screening examinations from women aged 50-70 years were performed and classified as risk-positive or risk-negative based on the occurrence of cancer within 3 years of the original examination. Examinations with confirmed cancer and images containing implants were excluded. From the resulting 5264 risk-positive and 191 488 risk-negative examinations, training (<i>n</i> = 89 285), validation (<i>n</i> = 2106), and test (<i>n</i> = 39 351) datasets were produced for model development and evaluation. The AI model was trained to predict future cancer occurrence based on screening mammograms and patient age. Performance was evaluated on the test dataset using the area under the receiver operating characteristic curve (AUC) and compared across subpopulations to assess potential biases. Interpretability of the model was explored, including with saliency maps. Results On the hold-out test set, the AI model achieved an overall AUC of 0.70 (95% CI: 0.69, 0.72). There was no evidence of a difference in performance across the three sites, between patient ethnicities, or across age groups. Visualization of saliency maps and sample images provided insights into the mammographic features associated with AI-predicted cancer risk. Conclusion The developed AI tool showed good performance on a multisite, United Kingdom-specific dataset. <b>Keywords:</b> Deep Learning, Artificial Intelligence, Breast Cancer, Screening, Risk Prediction <i>Supplemental material is available for this article.</i> ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294956/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141074674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update.
Pub Date: 2024-07-01 | DOI: 10.1148/ryai.240300
Ali S Tejani, Michail E Klontzas, Anthony A Gatti, John T Mongan, Linda Moy, Seong Ho Park, Charles E Kahn
{"title":"Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update.","authors":"Ali S Tejani, Michail E Klontzas, Anthony A Gatti, John T Mongan, Linda Moy, Seong Ho Park, Charles E Kahn","doi":"10.1148/ryai.240300","DOIUrl":"10.1148/ryai.240300","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11304031/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141162489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}