Deep Learning Applied to Diffusion-weighted Imaging for Differentiating Malignant from Benign Breast Tumors without Lesion Segmentation.

IF 8.1, Q1 (Computer Science, Artificial Intelligence)
Mami Iima, Ryosuke Mizuno, Masako Kataoka, Kazuki Tsuji, Toshiki Yamazaki, Akihiko Minami, Maya Honda, Keiho Imanishi, Masahiro Takada, Yuji Nakamoto
DOI: 10.1148/ryai.240206
Journal: Radiology: Artificial Intelligence, e240206
Published: 2024-11-20 ("Just Accepted" manuscript)
Citations: 0

Abstract

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.

Purpose: To evaluate and compare the performance of different artificial intelligence (AI) models in differentiating between benign and malignant breast tumors on diffusion-weighted imaging (DWI), including comparison with radiologist assessments.

Materials and Methods: In this retrospective study, patients with breast lesions underwent 3T breast MRI from May 2019 to March 2022. In addition to T1-weighted imaging, T2-weighted imaging, and contrast-enhanced imaging, DWI was acquired with five b-values (0, 200, 800, 1000, and 1500 s/mm²). DWI data, split into training, tuning, and test sets, were used for the development and assessment of the AI models, including a small 2D convolutional neural network (CNN), ResNet18, EfficientNet-B0, and a 3D CNN. Performance of the DWI-based models in differentiating between benign and malignant breast tumors was compared with that of radiologists assessing standard breast MRI, with diagnostic performance assessed using receiver operating characteristic (ROC) analysis. The study also examined the effect of data augmentation (A: random elastic deformation; B: random affine transformation/random noise; C: mixup) on model performance.

Results: A total of 334 breast lesions in 293 patients (mean age [SD], 56.5 [15.1] years; all female) were analyzed. The 2D CNN models outperformed the 3D CNN on the test dataset (area under the ROC curve [AUC] across data augmentation methods: 0.83–0.88 versus 0.75–0.76). There was no evidence of a difference in performance between the small 2D CNN with augmentations A and B (AUC 0.88) and the radiologists (AUC 0.86) on the test dataset (P = .64). Comparing the small 2D CNN with the radiologists, there was no evidence of a difference in specificity (81.4% versus 72.1%; P = .64) or sensitivity (85.9% versus 98.8%; P = .64).

Conclusion: AI models, particularly a small 2D CNN, showed good performance in differentiating between malignant and benign breast tumors using DWI, without needing manual lesion segmentation. ©RSNA, 2024.
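Of the three augmentation strategies compared above, mixup is the only one that blends pairs of training examples rather than perturbing single images. The paper's implementation is not reproduced here; the following is a minimal sketch of mixup applied to DWI patches and their benign/malignant labels, with the Beta-distribution parameter `alpha` chosen hypothetically (the abstract does not report it).

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Blend two training examples and their labels (mixup augmentation).

    The mixing weight lam is drawn from Beta(alpha, alpha); alpha=0.4 is
    a hypothetical choice, not a value reported in the study.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # blended DWI patch
    y = lam * y1 + (1.0 - lam) * y2   # soft label in [0, 1]
    return x, y
```

Because the labels become soft, a model trained this way is optimized with a cross-entropy loss against fractional targets rather than hard 0/1 classes.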

Journal overview: Radiology: Artificial Intelligence is a bimonthly publication that focuses on the emerging applications of machine learning and artificial intelligence in the field of imaging across various disciplines. The journal is available online and accepts multiple manuscript types, including Original Research, Technical Developments, Data Resources, Review articles, Editorials, Letters to the Editor and Replies, Special Reports, and AI in Brief.