Explainable breast cancer molecular expression prediction using multi-task deep-learning based on 3D whole breast ultrasound.

IF 4.1 | Medicine, Tier 2 | Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Zengan Huang, Xin Zhang, Yan Ju, Ge Zhang, Wanying Chang, Hongping Song, Yi Gao
{"title":"基于三维全乳腺超声的多任务深度学习可解释乳腺癌分子表达预测。","authors":"Zengan Huang, Xin Zhang, Yan Ju, Ge Zhang, Wanying Chang, Hongping Song, Yi Gao","doi":"10.1186/s13244-024-01810-9","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>To noninvasively estimate three breast cancer biomarkers, estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) and enhance performance and interpretability via multi-task deep learning.</p><p><strong>Methods: </strong>The study included 388 breast cancer patients who received the 3D whole breast ultrasound system (3DWBUS) examinations at Xijing Hospital between October 2020 and September 2021. Two predictive models, a single-task and a multi-task, were developed; the former predicts biomarker expression, while the latter combines tumor segmentation with biomarker prediction to enhance interpretability. Performance evaluation included individual and overall prediction metrics, and Delong's test was used for performance comparison. The models' attention regions were visualized using Grad-CAM + + technology.</p><p><strong>Results: </strong>All patients were randomly split into a training set (n = 240, 62%), a validation set (n = 60, 15%), and a test set (n = 88, 23%). In the individual evaluation of ER, PR, and HER2 expression prediction, the single-task and multi-task models achieved respective AUCs of 0.809 and 0.735 for ER, 0.688 and 0.767 for PR, and 0.626 and 0.697 for HER2, as observed in the test set. In the overall evaluation, the multi-task model demonstrated superior performance in the test set, achieving a higher macro AUC of 0.733, in contrast to 0.708 for the single-task model. The Grad-CAM + + method revealed that the multi-task model exhibited a stronger focus on diseased tissue areas, improving the interpretability of how the model worked.</p><p><strong>Conclusion: </strong>Both models demonstrated impressive performance, with the multi-task model excelling in accuracy and offering improved interpretability on noninvasive 3DWBUS images using Grad-CAM + + technology.</p><p><strong>Critical relevance statement: </strong>The multi-task deep learning model exhibits effective prediction for breast cancer biomarkers, offering direct biomarker identification and improved clinical interpretability, potentially boosting the efficiency of targeted drug screening.</p><p><strong>Key points: </strong>Tumoral biomarkers are paramount for determining breast cancer treatment. The multi-task model can improve prediction performance, and improve interpretability in clinical practice. 
The 3D whole breast ultrasound system-based deep learning models excelled in predicting breast cancer biomarkers.</p>","PeriodicalId":13639,"journal":{"name":"Insights into Imaging","volume":"15 1","pages":"227"},"PeriodicalIF":4.1000,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11424596/pdf/","citationCount":"0","resultStr":"{\"title\":\"Explainable breast cancer molecular expression prediction using multi-task deep-learning based on 3D whole breast ultrasound.\",\"authors\":\"Zengan Huang, Xin Zhang, Yan Ju, Ge Zhang, Wanying Chang, Hongping Song, Yi Gao\",\"doi\":\"10.1186/s13244-024-01810-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>To noninvasively estimate three breast cancer biomarkers, estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) and enhance performance and interpretability via multi-task deep learning.</p><p><strong>Methods: </strong>The study included 388 breast cancer patients who received the 3D whole breast ultrasound system (3DWBUS) examinations at Xijing Hospital between October 2020 and September 2021. Two predictive models, a single-task and a multi-task, were developed; the former predicts biomarker expression, while the latter combines tumor segmentation with biomarker prediction to enhance interpretability. Performance evaluation included individual and overall prediction metrics, and Delong's test was used for performance comparison. The models' attention regions were visualized using Grad-CAM + + technology.</p><p><strong>Results: </strong>All patients were randomly split into a training set (n = 240, 62%), a validation set (n = 60, 15%), and a test set (n = 88, 23%). In the individual evaluation of ER, PR, and HER2 expression prediction, the single-task and multi-task models achieved respective AUCs of 0.809 and 0.735 for ER, 0.688 and 0.767 for PR, and 0.626 and 0.697 for HER2, as observed in the test set. In the overall evaluation, the multi-task model demonstrated superior performance in the test set, achieving a higher macro AUC of 0.733, in contrast to 0.708 for the single-task model. The Grad-CAM + + method revealed that the multi-task model exhibited a stronger focus on diseased tissue areas, improving the interpretability of how the model worked.</p><p><strong>Conclusion: </strong>Both models demonstrated impressive performance, with the multi-task model excelling in accuracy and offering improved interpretability on noninvasive 3DWBUS images using Grad-CAM + + technology.</p><p><strong>Critical relevance statement: </strong>The multi-task deep learning model exhibits effective prediction for breast cancer biomarkers, offering direct biomarker identification and improved clinical interpretability, potentially boosting the efficiency of targeted drug screening.</p><p><strong>Key points: </strong>Tumoral biomarkers are paramount for determining breast cancer treatment. The multi-task model can improve prediction performance, and improve interpretability in clinical practice. 
The 3D whole breast ultrasound system-based deep learning models excelled in predicting breast cancer biomarkers.</p>\",\"PeriodicalId\":13639,\"journal\":{\"name\":\"Insights into Imaging\",\"volume\":\"15 1\",\"pages\":\"227\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2024-09-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11424596/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Insights into Imaging\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1186/s13244-024-01810-9\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Insights into Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13244-024-01810-9","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Objectives: To noninvasively estimate three breast cancer biomarkers, estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2), and to enhance prediction performance and interpretability via multi-task deep learning.

Methods: The study included 388 breast cancer patients who underwent 3D whole breast ultrasound system (3DWBUS) examinations at Xijing Hospital between October 2020 and September 2021. Two predictive models were developed: a single-task model that predicts biomarker expression directly, and a multi-task model that combines tumor segmentation with biomarker prediction to enhance interpretability. Performance evaluation included individual and overall prediction metrics, and DeLong's test was used for performance comparison. The models' attention regions were visualized using Grad-CAM++.
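
The abstract does not describe the network itself, so the following is only a minimal sketch of how a multi-task design of this kind is commonly built: a shared 3D convolutional encoder feeding both a tumor-segmentation decoder and an ER/PR/HER2 classification head, trained with a weighted sum of the two losses. All names, layer sizes, and the loss weighting (e.g., MultiTask3DNet, seg_weight) are hypothetical, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of a multi-task 3D network:
# a shared encoder feeding a segmentation decoder and an ER/PR/HER2 head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTask3DNet(nn.Module):  # hypothetical name
    def __init__(self, in_channels: int = 1, num_biomarkers: int = 3):
        super().__init__()
        # Shared 3D encoder (three stride-2 stages: 64^3 -> 8^3 for the toy input below)
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Auxiliary task: segmentation decoder restoring the input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 2, stride=2),          # tumor-mask logits
        )
        # Main task: one logit per biomarker (ER, PR, HER2)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, num_biomarkers),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), self.classifier(feats)


def multitask_loss(seg_logits, seg_mask, cls_logits, labels, seg_weight=0.5):
    # BCE for both tasks as a simple stand-in; the paper's actual losses and
    # the weighting (seg_weight) are not given in the abstract.
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_mask)
    cls_loss = F.binary_cross_entropy_with_logits(cls_logits, labels)
    return cls_loss + seg_weight * seg_loss


# Toy forward/backward pass on random data, just to show the shapes involved.
model = MultiTask3DNet()
x = torch.randn(2, 1, 64, 64, 64)                      # assumed 3DWBUS sub-volume size
seg_mask = torch.randint(0, 2, (2, 1, 64, 64, 64)).float()
labels = torch.randint(0, 2, (2, 3)).float()           # ER, PR, HER2 expression labels
seg_logits, cls_logits = model(x)
multitask_loss(seg_logits, seg_mask, cls_logits, labels).backward()
```

The segmentation branch is what distinguishes the multi-task model from the single-task one; per the abstract, it is included to steer the shared features toward tumor tissue and thereby make the attention maps easier to interpret.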

Results: All patients were randomly split into a training set (n = 240, 62%), a validation set (n = 60, 15%), and a test set (n = 88, 23%). In the individual evaluation of ER, PR, and HER2 expression prediction on the test set, the single-task and multi-task models achieved AUCs of 0.809 and 0.735 for ER, 0.688 and 0.767 for PR, and 0.626 and 0.697 for HER2, respectively. In the overall evaluation, the multi-task model performed better on the test set, achieving a macro AUC of 0.733 versus 0.708 for the single-task model. Grad-CAM++ visualization showed that the multi-task model focused more strongly on diseased tissue areas, improving the interpretability of how the model works.
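
For clarity on the overall metric above: a macro AUC is the unweighted mean of the per-biomarker AUCs. Below is a minimal scikit-learn sketch with randomly generated stand-in arrays (y_true and y_score are hypothetical); DeLong's test used for the comparison in the paper is not part of scikit-learn and is omitted here.

```python
# Minimal sketch of the per-biomarker and macro AUC computation with scikit-learn.
# y_true and y_score are randomly generated stand-ins for the real test-set data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(88, 3))      # 88 test patients x (ER, PR, HER2) labels
y_score = rng.random(size=(88, 3))             # predicted probabilities after a sigmoid

biomarkers = ["ER", "PR", "HER2"]
per_label_auc = {name: roc_auc_score(y_true[:, i], y_score[:, i])
                 for i, name in enumerate(biomarkers)}
macro_auc = roc_auc_score(y_true, y_score, average="macro")

print(per_label_auc)
print("macro AUC:", macro_auc)                 # equals the mean of the three per-label AUCs
```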

Conclusion: Both models demonstrated strong performance, with the multi-task model achieving higher overall accuracy and offering improved interpretability on noninvasive 3DWBUS images through Grad-CAM++ visualization.
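
As an illustration of the interpretability step referenced above, the sketch below computes a Grad-CAM++ heatmap for one biomarker output of a small stand-in 3D classifier, using the standard closed-form weighting in which the higher-order derivatives of the exponentiated class score reduce to powers of the first-order gradient. The helper function and the toy network are illustrative only, not the authors' implementation.

```python
# Minimal sketch of Grad-CAM++ for one biomarker output of a 3D classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


def grad_cam_plus_plus(forward_fn, target_layer, x, class_idx):
    """Return a normalized Grad-CAM++ heatmap for one class of one input.

    forward_fn(x) must return logits of shape (1, num_classes); target_layer is
    the convolutional layer whose activations are being explained.
    """
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = forward_fn(x)
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    A, dS = acts[0], grads[0]                    # both (1, K, D, H, W)
    spatial = tuple(range(2, A.dim()))
    # Standard Grad-CAM++ closed form: with Y = exp(S), the higher-order
    # derivatives of Y become powers of dS/dA and the exp(S) factor cancels.
    alpha = dS.pow(2) / (2 * dS.pow(2) + A.sum(dim=spatial, keepdim=True) * dS.pow(3) + 1e-8)
    weights = (alpha * F.relu(dS)).sum(dim=spatial, keepdim=True)
    cam = F.relu((weights * A).sum(dim=1, keepdim=True))
    return cam / (cam.max() + 1e-8)


# Toy demo: a tiny 3D classifier standing in for the biomarker branch.
net = nn.Sequential(
    nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 3),
)
volume = torch.randn(1, 1, 32, 32, 32)
heatmap = grad_cam_plus_plus(net, net[2], volume, class_idx=2)   # index 2 = HER2 here
print(heatmap.shape)   # (1, 1, 8, 8, 8); upsample to the volume size to overlay it
```

In the study, maps of this kind were overlaid on the 3DWBUS volumes to check whether the models attend to diseased tissue.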

Critical relevance statement: The multi-task deep learning model effectively predicts breast cancer biomarkers, offering direct biomarker identification and improved clinical interpretability, potentially boosting the efficiency of targeted drug screening.

Key points: Tumoral biomarkers are paramount for determining breast cancer treatment. The multi-task model can improve both prediction performance and interpretability in clinical practice. The 3D whole breast ultrasound system-based deep learning models excelled in predicting breast cancer biomarkers.

Source journal: Insights into Imaging (Medicine - Radiology, Nuclear Medicine and Imaging)
CiteScore: 7.30
Self-citation rate: 4.30%
Articles published: 182
Review turnaround: 13 weeks
Journal description: Insights into Imaging (I³) is a peer-reviewed open access journal published under the brand SpringerOpen. All content published in the journal is freely available online to anyone, anywhere. I³ continuously updates scientific knowledge and progress in best-practice standards in radiology through the publication of original articles and state-of-the-art reviews and opinions, along with recommendations and statements from the leading radiological societies in Europe. Founded by the European Society of Radiology (ESR), I³ creates a platform for educational material, guidelines and recommendations, and a forum for topics of controversy. A balanced combination of review articles, original papers, short communications from European radiological congresses and information on society matters makes I³ an indispensable source for current information in this field. I³ is owned by the ESR; however, authors retain copyright to their articles according to the Creative Commons Attribution License (see Copyright and License Agreement). All articles can be read, redistributed and reused for free, as long as the author of the original work is cited properly. The open access fees (article-processing charges) for this journal are kindly sponsored by ESR for all Members. The journal went open access in 2012, which means that all articles published since then are freely available online.