Yan Fu, Huang Jing Chen, Hao Zhang, Dong Jie Liu, Xi Chen, Cheng Yu Qiu, Wen Wu Lu, Hao Miao Bai, Qiu Wei Li, Guo Xue Li, Zi Jun Shen, Chang Jiang Gu, Yuan Peng Zhang, Xue Jun Ni
{"title":"整合多模态超声成像和机器学习预测腔内和非腔内乳腺癌亚型。","authors":"Yan Fu, Huang Jing Chen, Hao Zhang, Dong Jie Liu, Xi Chen, Cheng Yu Qiu, Wen Wu Lu, Hao Miao Bai, Qiu Wei Li, Guo Xue Li, Zi Jun Shen, Chang Jiang Gu, Yuan Peng Zhang, Xue Jun Ni","doi":"10.3389/fonc.2025.1558880","DOIUrl":null,"url":null,"abstract":"<p><strong>Rationale and objectives: </strong>Breast cancer molecular subtypes significantly influence treatment outcomes and prognoses, necessitating precise differentiation to tailor individualized therapies. This study leverages multimodal ultrasound imaging combined with machine learning to preoperatively classify luminal and non-luminal subtypes, aiming to enhance diagnostic accuracy and clinical decision-making.</p><p><strong>Methods: </strong>This retrospective study included 247 patients with breast cancer, with 192 meeting the inclusion criteria. Patients were randomly divided into a training set (134 cases) and a validation set (58 cases) in a 7:3 ratio. Image segmentation was conducted using 3D Slicer software, adhering to IBSI-standardized radiomics feature extraction. We constructed four model configurations-monomodal, dual-modal, trimodal, and four-modal-through optimized feature selection. These included monomodal datasets comprising 2D ultrasound (US) images, dual-modal datasets integrating 2D US with color Doppler flow imaging (CDFI) (US+CDFI), trimodal datasets incorporating strain elastography (SE) alongside 2D US and CDFI (US+CDFI+SE), and four-modal datasets combining all modalities, including ABVS coronal imaging (US+CDFI+SE+ABVS). Machine learning classifiers such as logistic regression (LR), support vector machines (SVM), AdaBoost (adaptive boosting), random forests(RF), linear discriminant analysis(LDA), and ridge regression were utilized.</p><p><strong>Results: </strong>The four-modal model achieved the highest performance (AUC: 0.947, 95% CI: 0.884-0.986), significantly outperforming the monomodal model (AUC 0.758, ΔAUC +0.189). 
Multimodal integration progressively enhanced performance: trimodal models surpassed dual-modal and monomodal approaches (AUC 0.865 vs 0.741 and 0.758), and the four-modal framework showed marked improvements in sensitivity (88.4% vs 71.1% for monomodal), specificity (92.7% vs 70.1%), and F1 scores (0.905).</p><p><strong>Conclusion: </strong>This study establishes a multimodal machine learning model integrating advanced ultrasound imaging techniques to preoperatively distinguish luminal from non-luminal breast cancers. The model demonstrates significant potential to improve diagnostic accuracy and generalization, representing a notable advancement in non-invasive breast cancer diagnostics.</p>","PeriodicalId":12482,"journal":{"name":"Frontiers in Oncology","volume":"15 ","pages":"1558880"},"PeriodicalIF":3.5000,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541423/pdf/","citationCount":"0","resultStr":"{\"title\":\"Integrating multimodal ultrasound imaging and machine learning for predicting luminal and non-luminal breast cancer subtypes.\",\"authors\":\"Yan Fu, Huang Jing Chen, Hao Zhang, Dong Jie Liu, Xi Chen, Cheng Yu Qiu, Wen Wu Lu, Hao Miao Bai, Qiu Wei Li, Guo Xue Li, Zi Jun Shen, Chang Jiang Gu, Yuan Peng Zhang, Xue Jun Ni\",\"doi\":\"10.3389/fonc.2025.1558880\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Rationale and objectives: </strong>Breast cancer molecular subtypes significantly influence treatment outcomes and prognoses, necessitating precise differentiation to tailor individualized therapies. 
This study leverages multimodal ultrasound imaging combined with machine learning to preoperatively classify luminal and non-luminal subtypes, aiming to enhance diagnostic accuracy and clinical decision-making.</p><p><strong>Methods: </strong>This retrospective study included 247 patients with breast cancer, with 192 meeting the inclusion criteria. Patients were randomly divided into a training set (134 cases) and a validation set (58 cases) in a 7:3 ratio. Image segmentation was conducted using 3D Slicer software, adhering to IBSI-standardized radiomics feature extraction. We constructed four model configurations-monomodal, dual-modal, trimodal, and four-modal-through optimized feature selection. These included monomodal datasets comprising 2D ultrasound (US) images, dual-modal datasets integrating 2D US with color Doppler flow imaging (CDFI) (US+CDFI), trimodal datasets incorporating strain elastography (SE) alongside 2D US and CDFI (US+CDFI+SE), and four-modal datasets combining all modalities, including ABVS coronal imaging (US+CDFI+SE+ABVS). Machine learning classifiers such as logistic regression (LR), support vector machines (SVM), AdaBoost (adaptive boosting), random forests(RF), linear discriminant analysis(LDA), and ridge regression were utilized.</p><p><strong>Results: </strong>The four-modal model achieved the highest performance (AUC: 0.947, 95% CI: 0.884-0.986), significantly outperforming the monomodal model (AUC 0.758, ΔAUC +0.189). Multimodal integration progressively enhanced performance: trimodal models surpassed dual-modal and monomodal approaches (AUC 0.865 vs 0.741 and 0.758), and the four-modal framework showed marked improvements in sensitivity (88.4% vs 71.1% for monomodal), specificity (92.7% vs 70.1%), and F1 scores (0.905).</p><p><strong>Conclusion: </strong>This study establishes a multimodal machine learning model integrating advanced ultrasound imaging techniques to preoperatively distinguish luminal from non-luminal breast cancers. 
The model demonstrates significant potential to improve diagnostic accuracy and generalization, representing a notable advancement in non-invasive breast cancer diagnostics.</p>\",\"PeriodicalId\":12482,\"journal\":{\"name\":\"Frontiers in Oncology\",\"volume\":\"15 \",\"pages\":\"1558880\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541423/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Oncology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3389/fonc.2025.1558880\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"ONCOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Oncology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3389/fonc.2025.1558880","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ONCOLOGY","Score":null,"Total":0}
引用次数: 0
摘要
理由和目的:乳腺癌分子亚型显著影响治疗结果和预后,需要精确区分以定制个性化治疗。本研究利用多模态超声成像结合机器学习对腔内和非腔内亚型进行术前分类,旨在提高诊断准确性和临床决策。方法:对247例乳腺癌患者进行回顾性研究,其中192例符合纳入标准。将患者按7:3的比例随机分为训练组(134例)和验证组(58例)。使用3D Slicer软件进行图像分割,遵循ibsi标准化放射组学特征提取。通过优化特征选择,构建了单模态、双模态、三模态和四模态四种模型配置。其中包括包含2D超声(US)图像的单模态数据集,将2D超声与彩色多普勒血流成像(CDFI) (US+CDFI)集成的双模态数据集,将应变弹性成像(SE)与2D超声和CDFI (US+CDFI+SE)结合的三模态数据集,以及结合所有模式的四模态数据集,包括ABVS冠状成像(US+CDFI+SE+ABVS)。机器学习分类器,如逻辑回归(LR)、支持向量机(SVM)、AdaBoost(自适应增强)、随机森林(RF)、线性判别分析(LDA)和脊回归。结果:四模态模型获得了最高的性能(AUC: 0.947, 95% CI: 0.884-0.986),显著优于单模态模型(AUC 0.758, ΔAUC +0.189)。多模态整合逐步提高了性能:三模态模型优于双模态和单模态方法(AUC分别为0.865 vs 0.741和0.758),四模态框架在敏感性(88.4% vs 71.1%)、特异性(92.7% vs 70.1%)和F1评分(0.905)方面均有显著改善。结论:本研究建立了一种多模态机器学习模型,结合先进的超声成像技术,可用于术前区分腔内和非腔内乳腺癌。该模型在提高诊断准确性和泛化方面显示出显著的潜力,代表了非侵入性乳腺癌诊断的显著进步。
Integrating multimodal ultrasound imaging and machine learning for predicting luminal and non-luminal breast cancer subtypes.
Rationale and objectives: Breast cancer molecular subtypes significantly influence treatment outcomes and prognoses, necessitating precise differentiation to tailor individualized therapies. This study leverages multimodal ultrasound imaging combined with machine learning to preoperatively classify luminal and non-luminal subtypes, aiming to enhance diagnostic accuracy and clinical decision-making.
Methods: This retrospective study included 247 patients with breast cancer, of whom 192 met the inclusion criteria. Patients were randomly divided into a training set (134 cases) and a validation set (58 cases) in a 7:3 ratio. Image segmentation was conducted using 3D Slicer software, adhering to IBSI-standardized radiomics feature extraction. We constructed four model configurations (monomodal, dual-modal, trimodal, and four-modal) through optimized feature selection. These included monomodal datasets comprising 2D ultrasound (US) images, dual-modal datasets integrating 2D US with color Doppler flow imaging (CDFI) (US+CDFI), trimodal datasets incorporating strain elastography (SE) alongside 2D US and CDFI (US+CDFI+SE), and four-modal datasets combining all modalities, including ABVS coronal imaging (US+CDFI+SE+ABVS). Machine learning classifiers such as logistic regression (LR), support vector machines (SVM), AdaBoost (adaptive boosting), random forests (RF), linear discriminant analysis (LDA), and ridge regression were utilized.
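The multimodal setup described above can be sketched as feature concatenation followed by a standard classifier. The sketch below is illustrative only: the feature matrices are synthetic random stand-ins with hypothetical dimensions (the study's actual features come from IBSI-standardized extraction on 3D Slicer segmentations), and logistic regression stands in for the full set of classifiers compared.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 192  # patients meeting the inclusion criteria

# Hypothetical per-modality radiomics feature matrices (synthetic placeholders;
# feature counts are assumptions, not the study's).
us = rng.normal(size=(n, 20))    # 2D ultrasound (US) features
cdfi = rng.normal(size=(n, 10))  # color Doppler flow imaging features
se = rng.normal(size=(n, 10))    # strain elastography features
abvs = rng.normal(size=(n, 15))  # ABVS coronal-plane features
y = rng.integers(0, 2, size=n)   # synthetic labels: 1 = luminal, 0 = non-luminal

# Four-modal configuration: concatenate all feature sets (US+CDFI+SE+ABVS).
X = np.hstack([us, cdfi, se, abvs])

# 7:3 train/validation split, matching the study's 134 vs. 58 cases.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_va)[:, 1]  # predicted probability of luminal subtype
```

Dropping columns from `X` reproduces the monomodal, dual-modal, and trimodal configurations before the study's feature-selection step, which this sketch omits.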
Results: The four-modal model achieved the highest performance (AUC: 0.947, 95% CI: 0.884-0.986), significantly outperforming the monomodal model (AUC 0.758, ΔAUC +0.189). Multimodal integration progressively enhanced performance: trimodal models surpassed dual-modal and monomodal approaches (AUC 0.865 vs 0.741 and 0.758), and the four-modal framework showed marked improvements in sensitivity (88.4% vs 71.1% for monomodal), specificity (92.7% vs 70.1%), and F1 scores (0.905).
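The reported metrics (AUC, sensitivity, specificity, F1) can be computed directly from validation-set predictions. A minimal numpy-only sketch, using toy labels and scores rather than the study's data:

```python
import numpy as np

def auc_score(y_true, scores):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Count pairwise wins, with ties counted as half a win.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, and F1 from binary predictions."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, f1

# Toy validation-set labels and model scores (illustrative only).
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1])
y_pred = (scores >= 0.5).astype(int)  # threshold at 0.5
```

On a real validation set, a bootstrap over these functions would yield the confidence interval reported for the AUC.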
Conclusion: This study establishes a multimodal machine learning model integrating advanced ultrasound imaging techniques to preoperatively distinguish luminal from non-luminal breast cancers. The model demonstrates significant potential to improve diagnostic accuracy and generalization, representing a notable advancement in non-invasive breast cancer diagnostics.
About the journal:
The Cancer Imaging and Diagnosis section is dedicated to publishing results from clinical and research studies applied to cancer diagnosis and treatment. The section covers the entire field of cancer imaging: results from routine clinical imaging in radiology and nuclear medicine, results from clinical trials, experimental molecular imaging in humans and small animals, research on new contrast agents for CT, MRI, and ultrasound, and new technical applications and processing algorithms that improve the standardization of quantitative imaging and of image-guided interventions for the diagnosis and treatment of cancer.