Characterizing Sentinel Lymph Node Status in Breast Cancer Patients Using a Deep-Learning Model Compared With Radiologists' Analysis of Grayscale Ultrasound and Lymphosonography

IF 0.7 | CAS Tier 4 (Medicine) | JCR Q4, RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Ultrasound Quarterly | Pub Date: 2024-07-03 | eCollection Date: 2024-09-01 | DOI: 10.1097/RUQ.0000000000000683
Priscilla Machado, Aylin Tahmasebi, Samuel Fallon, Ji-Bin Liu, Basak E Dogan, Laurence Needleman, Melissa Lazar, Alliric I Willis, Kristin Brill, Susanna Nazarian, Adam Berger, Flemming Forsberg
{"title":"利用深度学习模型描述乳腺癌患者前哨淋巴结状态与放射科医生对灰度超声波和淋巴造影的分析比较","authors":"Priscilla Machado, Aylin Tahmasebi, Samuel Fallon, Ji-Bin Liu, Basak E Dogan, Laurence Needleman, Melissa Lazar, Alliric I Willis, Kristin Brill, Susanna Nazarian, Adam Berger, Flemming Forsberg","doi":"10.1097/RUQ.0000000000000683","DOIUrl":null,"url":null,"abstract":"<p><strong>Abstract: </strong>The objective of the study was to use a deep learning model to differentiate between benign and malignant sentinel lymph nodes (SLNs) in patients with breast cancer compared to radiologists' assessments.Seventy-nine women with breast cancer were enrolled and underwent lymphosonography and contrast-enhanced ultrasound (CEUS) examination after subcutaneous injection of ultrasound contrast agent around their tumor to identify SLNs. Google AutoML was used to develop image classification model. Grayscale and CEUS images acquired during the ultrasound examination were uploaded with a data distribution of 80% for training/20% for testing. The performance metric used was area under precision/recall curve (AuPRC). In addition, 3 radiologists assessed SLNs as normal or abnormal based on a clinical established classification. Two-hundred seventeen SLNs were divided in 2 for model development; model 1 included all SLNs and model 2 had an equal number of benign and malignant SLNs. Validation results model 1 AuPRC 0.84 (grayscale)/0.91 (CEUS) and model 2 AuPRC 0.91 (grayscale)/0.87 (CEUS). The comparison between artificial intelligence (AI) and readers' showed statistical significant differences between all models and ultrasound modes; model 1 grayscale AI versus readers, P = 0.047, and model 1 CEUS AI versus readers, P < 0.001. Model 2 r grayscale AI versus readers, P = 0.032, and model 2 CEUS AI versus readers, P = 0.041.The interreader agreement overall result showed κ values of 0.20 for grayscale and 0.17 for CEUS.In conclusion, AutoML showed improved diagnostic performance in balance volume datasets. Radiologist performance was not influenced by the dataset's distribution.</p>","PeriodicalId":49116,"journal":{"name":"Ultrasound Quarterly","volume":null,"pages":null},"PeriodicalIF":0.7000,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Characterizing Sentinel Lymph Node Status in Breast Cancer Patients Using a Deep-Learning Model Compared With Radiologists' Analysis of Grayscale Ultrasound and Lymphosonography.\",\"authors\":\"Priscilla Machado, Aylin Tahmasebi, Samuel Fallon, Ji-Bin Liu, Basak E Dogan, Laurence Needleman, Melissa Lazar, Alliric I Willis, Kristin Brill, Susanna Nazarian, Adam Berger, Flemming Forsberg\",\"doi\":\"10.1097/RUQ.0000000000000683\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Abstract: </strong>The objective of the study was to use a deep learning model to differentiate between benign and malignant sentinel lymph nodes (SLNs) in patients with breast cancer compared to radiologists' assessments.Seventy-nine women with breast cancer were enrolled and underwent lymphosonography and contrast-enhanced ultrasound (CEUS) examination after subcutaneous injection of ultrasound contrast agent around their tumor to identify SLNs. Google AutoML was used to develop image classification model. Grayscale and CEUS images acquired during the ultrasound examination were uploaded with a data distribution of 80% for training/20% for testing. 
The performance metric used was area under precision/recall curve (AuPRC). In addition, 3 radiologists assessed SLNs as normal or abnormal based on a clinical established classification. Two-hundred seventeen SLNs were divided in 2 for model development; model 1 included all SLNs and model 2 had an equal number of benign and malignant SLNs. Validation results model 1 AuPRC 0.84 (grayscale)/0.91 (CEUS) and model 2 AuPRC 0.91 (grayscale)/0.87 (CEUS). The comparison between artificial intelligence (AI) and readers' showed statistical significant differences between all models and ultrasound modes; model 1 grayscale AI versus readers, P = 0.047, and model 1 CEUS AI versus readers, P < 0.001. Model 2 r grayscale AI versus readers, P = 0.032, and model 2 CEUS AI versus readers, P = 0.041.The interreader agreement overall result showed κ values of 0.20 for grayscale and 0.17 for CEUS.In conclusion, AutoML showed improved diagnostic performance in balance volume datasets. Radiologist performance was not influenced by the dataset's distribution.</p>\",\"PeriodicalId\":49116,\"journal\":{\"name\":\"Ultrasound Quarterly\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2024-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ultrasound Quarterly\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/RUQ.0000000000000683\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/9/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q4\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ultrasound Quarterly","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/RUQ.0000000000000683","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/9/1 0:00:00","PubModel":"eCollection","JCR":"Q4","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0


Abstract: The objective of the study was to use a deep learning model to differentiate between benign and malignant sentinel lymph nodes (SLNs) in patients with breast cancer, compared with radiologists' assessments. Seventy-nine women with breast cancer were enrolled and underwent lymphosonography and contrast-enhanced ultrasound (CEUS) examination after subcutaneous injection of ultrasound contrast agent around their tumor to identify SLNs. Google AutoML was used to develop the image classification models. Grayscale and CEUS images acquired during the ultrasound examination were uploaded with a data distribution of 80% for training and 20% for testing. The performance metric used was the area under the precision/recall curve (AuPRC). In addition, 3 radiologists assessed SLNs as normal or abnormal based on an established clinical classification. Two hundred seventeen SLNs were divided into 2 datasets for model development; model 1 included all SLNs, and model 2 had an equal number of benign and malignant SLNs. Validation results were an AuPRC of 0.84 (grayscale) and 0.91 (CEUS) for model 1, and 0.91 (grayscale) and 0.87 (CEUS) for model 2. The comparison between artificial intelligence (AI) and the readers showed statistically significant differences for all models and ultrasound modes: model 1 grayscale AI versus readers, P = 0.047; model 1 CEUS AI versus readers, P < 0.001; model 2 grayscale AI versus readers, P = 0.032; and model 2 CEUS AI versus readers, P = 0.041. The overall interreader agreement showed κ values of 0.20 for grayscale and 0.17 for CEUS. In conclusion, AutoML showed improved diagnostic performance with the balanced dataset, while radiologist performance was not influenced by the dataset's distribution.
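The abstract relies on two summary statistics: the area under the precision/recall curve (AuPRC) used to score the AutoML classifiers, and κ used for interreader agreement. Below is a minimal Python sketch, not the authors' code, of how such values can be computed with scikit-learn on made-up labels and scores; note the paper pools three readers into an overall κ, whereas this simplified example uses a pairwise Cohen's κ for two hypothetical readers.

```python
# A minimal sketch (not the authors' code) of the two summary statistics
# reported in the abstract, computed with scikit-learn on made-up data.
import numpy as np
from sklearn.metrics import auc, cohen_kappa_score, precision_recall_curve

# Hypothetical validation labels (1 = malignant SLN, 0 = benign SLN)
# and classifier scores for eight nodes.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.12, 0.40, 0.85, 0.67, 0.30, 0.91, 0.55, 0.78])

# Area under the precision/recall curve (AuPRC), the metric used to
# evaluate models 1 and 2 on grayscale and CEUS images.
precision, recall, _ = precision_recall_curve(y_true, y_score)
auprc = auc(recall, precision)
print(f"AuPRC: {auprc:.2f}")

# Interreader agreement: pairwise Cohen's kappa on two hypothetical
# readers' normal/abnormal calls (the paper reports an overall kappa
# across three readers).
reader_a = np.array([0, 1, 1, 0, 0, 1, 0, 1])
reader_b = np.array([0, 1, 0, 0, 1, 1, 0, 1])
print(f"Cohen's kappa: {cohen_kappa_score(reader_a, reader_b):.2f}")
```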

Source journal
Ultrasound Quarterly (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 2.50
Self-citation rate: 7.70%
Articles per year: 105
Review time: >12 weeks
Journal description: Ultrasound Quarterly provides coverage of the newest, most sophisticated ultrasound techniques, as well as in-depth analysis of important developments in this dynamic field. The journal publishes reviews of a wide variety of topics, including transvaginal ultrasonography, detection of fetal anomalies, color Doppler flow imaging, pediatric ultrasonography, and breast sonography. It is the official journal of the Society of Radiologists in Ultrasound.