Radiology: Artificial Intelligence — Latest Articles

Assistive AI in Lung Cancer Screening: A Retrospective Multinational Study in the United States and Japan.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-05-01 DOI: 10.1148/ryai.230079
Atilla P Kiraly, Corbin A Cunningham, Ryan Najafi, Zaid Nabulsi, Jie Yang, Charles Lau, Joseph R Ledsam, Wenxing Ye, Diego Ardila, Scott M McKinney, Rory Pilgrim, Yun Liu, Hiroaki Saito, Yasuteru Shimamura, Mozziyar Etemadi, David Melnick, Sunny Jansen, Greg S Corrado, Lily Peng, Daniel Tse, Shravya Shetty, Shruthi Prabhakara, David P Naidich, Neeral Beladia, Krish Eswaran
Purpose: To evaluate the impact of an artificial intelligence (AI) assistant for lung cancer screening on multinational clinical workflows.

Materials and Methods: An AI assistant for lung cancer screening was evaluated in two retrospective randomized multireader, multicase studies in which 627 low-dose chest CT cases (141 cancer positive) were each read twice (with and without AI assistance) by experienced thoracic radiologists (six U.S.-based or six Japan-based), for a total of 7524 interpretations. Positive cases were defined as those acquired within 2 years before a pathology-confirmed lung cancer diagnosis. Negative cases were defined as those without any subsequent cancer diagnosis for at least 2 years and were enriched for a spectrum of diverse nodules. The studies measured the readers' level of suspicion (on a 0-100 scale), country-specific screening system scoring categories, and management recommendations. Evaluation metrics included the area under the receiver operating characteristic curve (AUC) for level of suspicion and the sensitivity and specificity of recall recommendations.

Results: With AI assistance, the radiologists' AUC increased by 0.023 (0.70 to 0.72; P = .02) in the U.S. study and by 0.023 (0.93 to 0.96; P = .18) in the Japan study. Scoring system specificity for actionable findings increased 5.5% (57% to 63%; P < .001) in the U.S. study and 6.7% (23% to 30%; P < .001) in the Japan study. There was no evidence of a difference in sensitivity between unassisted and AI-assisted reads in the U.S. (67.3% to 67.5%; P = .88) and Japan (98% to 100%; P > .99) studies. Stand-alone AI AUC was 0.75 (95% CI: 0.70, 0.81) and 0.88 (95% CI: 0.78, 0.97) on the U.S.- and Japan-based datasets, respectively.

Conclusion: The concurrent AI interface improved lung cancer screening specificity in both the U.S.- and Japan-based reader studies, meriting further study in additional international screening environments.

Keywords: Assistive Artificial Intelligence, Lung Cancer Screening, CT. Supplemental material is available for this article. Published under a CC BY 4.0 license. (Radiology: Artificial Intelligence; e230079)

Citations: 0
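As a rough illustration of the evaluation metrics named in this abstract (this is not the study's code; the function names and the tiny data below are hypothetical), the AUC of a 0-100 suspicion score can be computed directly from the Mann-Whitney U statistic, and the sensitivity/specificity of recall recommendations from binary labels:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))


def recall_metrics(recommend, truth):
    """Sensitivity and specificity of binary recall recommendations.

    recommend, truth: parallel lists of booleans
    (case recalled?, case cancer positive?).
    """
    tp = sum(r and t for r, t in zip(recommend, truth))
    tn = sum((not r) and (not t) for r, t in zip(recommend, truth))
    fp = sum(r and (not t) for r, t in zip(recommend, truth))
    fn = sum((not r) and t for r, t in zip(recommend, truth))
    return tp / (tp + fn), tn / (tn + fp)
```

In the studies these metrics would be computed per reader, per condition (assisted vs unassisted), over the 627 cases; the sketch only shows the definitions.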
Curated and Annotated Dataset of Lung US Images in Zambian Children with Clinical Pneumonia.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.230147
Lauren Etter, Margrit Betke, Ingrid Y Camelo, Christopher J Gill, Rachel Pieciak, Russell Thompson, Libertario Demi, Umair Khan, Alyse Wheelock, Janet Katanga, Bindu N Setty, Ilse Castro-Aragon
See also the commentary by Sitek in this issue. Supplemental material is available for this article. (Radiology: Artificial Intelligence; e230147)

Citations: 0
Generative Large Language Models for Detection of Speech Recognition Errors in Radiology Reports.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.230205
Reuben A Schmidt, Jarrel C Y Seah, Ke Cao, Lincoln Lim, Wei Lim, Justin Yeung
This study evaluated the ability of generative large language models (LLMs) to detect speech recognition errors in radiology reports. A dataset of 3233 CT and MRI reports was assessed by radiologists for speech recognition errors, which were categorized as clinically significant or not clinically significant. The performance of five generative LLMs (GPT-3.5-turbo, GPT-4, text-davinci-003, Llama-v2-70B-chat, and Bard) in detecting these errors was compared, using manual error detection as the reference standard. Prompt engineering was used to optimize model performance. GPT-4 demonstrated high accuracy in detecting clinically significant errors (precision, 76.9%; recall, 100%; F1 score, 86.9%) and not clinically significant errors (precision, 93.9%; recall, 94.7%; F1 score, 94.3%). Text-davinci-003 achieved F1 scores of 72% and 46.6% for clinically significant and not clinically significant errors, respectively; GPT-3.5-turbo obtained 59.1% and 32.2%, and Llama-v2-70B-chat 72.8% and 47.7%. Bard showed the lowest accuracy, with F1 scores of 47.5% and 20.9%. GPT-4 effectively identified challenging errors such as nonsense phrases and internally inconsistent statements. Longer reports, resident dictation, and overnight shifts were associated with higher error rates. In conclusion, advanced generative LLMs show potential for automatic detection of speech recognition errors in radiology reports.

Keywords: CT, Large Language Model, Machine Learning, MRI, Natural Language Processing, Radiology Reports, Speech, Unsupervised Learning. Supplemental material is available for this article. (Radiology: Artificial Intelligence; e230205)

Citations: 0
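The precision, recall, and F1 figures above follow the standard definitions over true positives (TP), false positives (FP), and false negatives (FN). A minimal sketch (illustrative only; the error counts below are hypothetical, chosen so the ratios match the reported GPT-4 clinically significant figures):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from raw error-detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1


# Hypothetical counts: 10 true detections, 3 false alarms, 0 misses
# gives precision 10/13 ~ 76.9%, recall 100%, F1 ~ 86.9% -- the same
# ratios as the GPT-4 clinically significant results quoted above.
p, r, f = prf(tp=10, fp=3, fn=0)
```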
2023 Manuscript Reviewers: A Note of Thanks.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.240138
Curtis P Langlotz, Charles E Kahn
(Radiology: Artificial Intelligence, vol. 6, no. 2; e240138)

Citations: 0
Editor's Recognition Awards.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.240139
Charles E Kahn
(Radiology: Artificial Intelligence, vol. 6, no. 2; e240139)

Citations: 0
Multicenter Evaluation of a Weakly Supervised Deep Learning Model for Lymph Node Diagnosis in Rectal Cancer at MRI.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.230152
Wei Xia, Dandan Li, Wenguang He, Perry J Pickhardt, Junming Jian, Rui Zhang, Junjie Zhang, Ruirui Song, Tong Tong, Xiaotang Yang, Xin Gao, Yanfen Cui
Purpose: To develop a Weakly supervISed model DevelOpment fraMework (WISDOM) to construct a lymph node (LN) diagnosis model for patients with rectal cancer (RC) that uses preoperative MRI coupled with postoperative patient-level pathologic information.

Materials and Methods: In this retrospective study, the WISDOM model was built from MRI (T2-weighted and diffusion-weighted imaging) and patient-level pathologic information (the numbers of postoperatively confirmed metastatic and resected LNs) using data from patients with RC collected between January 2016 and November 2017. The incremental value of the model in assisting radiologists was also investigated. Performance in binary and ternary N staging was evaluated using the area under the receiver operating characteristic curve (AUC) and the concordance index (C index), respectively.

Results: A total of 1014 patients (median age, 62 years; IQR, 54-68 years; 590 male) were analyzed, comprising a training cohort (n = 589) and an internal test cohort (n = 146) from center 1 and two external test cohorts (n = 117 and n = 162) from centers 2 and 3. The WISDOM model yielded an overall AUC of 0.81 and a C index of 0.765, significantly outperforming junior radiologists (AUC = 0.69, P < .001; C index = 0.689, P < .001) and performing comparably with senior radiologists (AUC = 0.79, P = .21; C index = 0.788, P = .22). Moreover, the model significantly improved the performance of junior radiologists (AUC = 0.80, P < .001; C index = 0.798, P < .001) and senior radiologists (AUC = 0.88, P < .001; C index = 0.869, P < .001).

Conclusion: This study demonstrates the potential of WISDOM as a useful LN diagnosis method using routine rectal MRI data. The improved radiologist performance observed with model assistance highlights the potential clinical utility of WISDOM in practice.

Keywords: MR Imaging, Abdomen/GI, Rectum, Computer Applications-Detection/Diagnosis. Supplemental material is available for this article. Published under a CC BY 4.0 license. (Radiology: Artificial Intelligence; e230152)

Citations: 0
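The C index used for ternary N staging above generalizes AUC to ordered outcomes: over all pairs of patients with different true stages, it is the fraction whose predicted scores are ordered the same way (ties in prediction count half). A minimal sketch of that definition (illustrative only, not the study's code):

```python
def concordance_index(pred, truth):
    """C index over all informative pairs.

    pred:  predicted scores or stages (higher = more advanced)
    truth: true ordinal stages (e.g., N0 = 0, N1 = 1, N2 = 2)
    """
    concordant = 0.0
    pairs = 0
    for i in range(len(pred)):
        for j in range(i + 1, len(pred)):
            if truth[i] == truth[j]:
                continue  # same true stage: uninformative pair
            pairs += 1
            # concordant if predictions order the pair as the truth does
            if (pred[i] - pred[j]) * (truth[i] - truth[j]) > 0:
                concordant += 1.0
            elif pred[i] == pred[j]:
                concordant += 0.5  # tied prediction counts half
    return concordant / pairs
```

For binary staging this reduces to the AUC, which is why the abstract reports AUC for binary and C index for ternary N staging.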
Identification of Precise 3D CT Radiomics for Habitat Computation by Machine Learning in Cancer.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.230118
Olivia Prior, Carlos Macarro, Víctor Navarro, Camilo Monreal, Marta Ligero, Alonso Garcia-Ruiz, Garazi Serna, Sara Simonetti, Irene Braña, Maria Vieito, Manuel Escobar, Jaume Capdevila, Annette T Byrne, Rodrigo Dienstmann, Rodrigo Toledo, Paolo Nuciforo, Elena Garralda, Francesco Grussu, Kinga Bernatowicz, Raquel Perez-Lopez
Purpose: To identify precise three-dimensional radiomics features in CT images that enable computation of stable and biologically meaningful habitats with machine learning for cancer heterogeneity assessment.

Materials and Methods: This retrospective study included 2436 liver or lung lesions from 605 CT scans (November 2010 to December 2021) in 331 patients with cancer (mean age, 64.5 years ± 10.1 [SD]; 185 male patients). Three-dimensional radiomics features were computed from original and perturbed (simulated retest) images with different combinations of feature computation kernel radius and bin size. The lower 95% confidence limit (LCL) of the intraclass correlation coefficient (ICC) was used to measure repeatability and reproducibility. Precise features were identified by combining the repeatability and reproducibility results (LCL of ICC ≥ 0.50). Habitats were obtained with Gaussian mixture models in original and perturbed data using the precise radiomics features and compared with habitats obtained using all features. The Dice similarity coefficient (DSC) was used to assess habitat stability. Biologic correlates of CT habitats were explored in a case study of 13 patients with CT, multiparametric MRI, and tumor biopsies.

Results: Three-dimensional radiomics features showed poor repeatability (LCL of ICC: median [IQR], 0.442 [0.312-0.516]) and poor reproducibility against kernel radius (LCL of ICC: median [IQR], 0.440 [0.330-0.526]) but excellent reproducibility against bin size (LCL of ICC: median [IQR], 0.929 [0.853-0.988]). Twenty-six radiomics features were precise, differing between lung and liver lesions. Habitats obtained with the precise features (DSC: median [IQR], 0.601 [0.494-0.712] and 0.651 [0.520-0.784] for lung and liver lesions, respectively) were more stable than those obtained with all features (DSC: median [IQR], 0.532 [0.424-0.637] and 0.587 [0.465-0.703], respectively; P < .001). In the case study, CT habitats correlated quantitatively and qualitatively with the heterogeneity observed in multiparametric MRI habitats and histology.

Conclusion: Precise three-dimensional radiomics features were identified on CT images that enabled tumor heterogeneity assessment through stable tumor habitat computation.

Keywords: CT, Diffusion-weighted Imaging, Dynamic Contrast-enhanced MRI, MRI, Radiomics, Unsupervised Learning, Oncology, Liver, Lung. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Sagreiya in this issue. (Radiology: Artificial Intelligence; e230118)

Citations: 0
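Habitat stability in this study is quantified with the Dice similarity coefficient between the habitat label maps computed on original and perturbed images. A minimal per-label sketch of that comparison (illustrative only; real habitat maps are 3D voxel arrays, and matching cluster labels between runs is a separate step):

```python
def dice(labels_a, labels_b, label):
    """Dice similarity coefficient for one habitat label between two
    label maps of the same lesion (flattened to 1D for simplicity)."""
    a = [x == label for x in labels_a]
    b = [x == label for x in labels_b]
    intersection = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    # Convention: if the label is absent from both maps, count as perfect
    return 2.0 * intersection / total if total else 1.0
```

A lesion-level stability score would then average `dice` over all habitat labels, matching the per-lesion DSC medians reported in the Results.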
Artificial Intelligence in Radiology: Bridging Global Health Care Gaps through Innovation and Inclusion.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.240093
Arkadiusz Sitek
(Radiology: Artificial Intelligence, vol. 6, no. 2; e240093)

Citations: 0
Can AI Predict the Need for Surgery in Traumatic Brain Injury?
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.230587
Sven Haller
(Radiology: Artificial Intelligence, vol. 6, no. 2; e230587)

Citations: 0
Denoising Multiphase Functional Cardiac CT Angiography Using Deep Learning and Synthetic Data.
IF 9.8
Radiology-Artificial Intelligence Pub Date : 2024-03-01 DOI: 10.1148/ryai.230153
Veit Sandfort, Martin J Willemink, Marina Codari, Domenico Mastrodicasa, Dominik Fleischmann
Coronary CT angiography is increasingly used for cardiac diagnosis. Dose modulation techniques can reduce radiation dose, but the resulting functional images are noisy and challenging for functional analysis. This retrospective study describes and evaluates a deep learning method for denoising functional cardiac imaging that takes advantage of multiphase information in a three-dimensional convolutional neural network. Coronary CT angiograms (n = 566) were used to derive synthetic data for training. Deep learning-based image denoising was compared with unprocessed images and a standard noise reduction algorithm (block-matching and three-dimensional filtering [BM3D]). Noise and signal-to-noise ratio measurements, as well as expert evaluation of image quality, were performed. To validate the use of the denoised images for cardiac quantification, threshold-based segmentation was performed, and the results were compared with manual measurements on unprocessed images. Deep learning-based denoised images showed significantly lower noise than standard denoised images (SD of left ventricular blood pool, 20.3 HU ± 42.5 vs 33.4 HU ± 39.8 for deep learning vs BM3D; P < .0001). Expert ratings of image quality were significantly higher for deep learning-based denoised images than for standard denoising. Semiautomatic left ventricular size measurements on deep learning-based denoised images showed excellent correlation with expert quantification on unprocessed images (intraclass correlation coefficient, 0.97). Deep learning-based denoising using a three-dimensional approach resulted in excellent denoising performance and facilitated valid automatic processing of cardiac functional imaging.

Keywords: Cardiac CT Angiography, Deep Learning, Image Denoising. Supplemental material is available for this article. © RSNA, 2024. (Radiology: Artificial Intelligence; e230153)

Citations: 0
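The noise figure quoted above (SD of the left ventricular blood pool, in HU) is a standard ROI measurement: noise is the standard deviation of attenuation values inside a homogeneous region, and signal-to-noise ratio is the ROI mean divided by that SD. A minimal sketch of that measurement under these assumed definitions (not the study's code):

```python
import math


def roi_noise_and_snr(hu_values):
    """Mean, noise (sample SD), and SNR for HU samples drawn from a
    homogeneous ROI such as the left ventricular blood pool."""
    n = len(hu_values)
    mean = sum(hu_values) / n
    variance = sum((v - mean) ** 2 for v in hu_values) / (n - 1)
    sd = math.sqrt(variance)
    return mean, sd, mean / sd
```

In practice the HU samples would come from a fixed ROI placed at the same location in the unprocessed, BM3D-filtered, and deep learning-denoised volumes, so that only the denoising method varies.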