A hybrid deep learning model for mammographic breast cancer detection: Multi-autoencoder and attention mechanisms

IF 1.7 · CAS Zone 4 · Multidisciplinary journal · JCR Q2, MULTIDISCIPLINARY SCIENCES
Long Jun Yan, Lei Wu, Meng Xia, Lan He
{"title":"A hybrid deep learning model for mammographic breast cancer detection: Multi-autoencoder and attention mechanisms","authors":"Long Jun Yan ,&nbsp;Lei Wu ,&nbsp;Meng Xia ,&nbsp;Lan He","doi":"10.1016/j.jrras.2025.101578","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>This study aims to develop a robust diagnostic framework for breast cancer detection in mammographic images by integrating multi-autoencoder-based feature extraction with attention mechanisms. The objective is to address key limitations in traditional and state-of-the-art methods, including limited adaptability, manual feature dependency, and lack of interpretability, ensuring enhanced diagnostic accuracy and clinical utility.</div></div><div><h3>Materials and methods</h3><div>This study utilizes a multi-center dataset of 5987 mammograms (malignant: 36 %, benign: 31.9 %, normal: 32.1 %). Images were standardized to 256 × 256 pixels, with intensity normalization and augmentation. A multi-autoencoder framework with six independently pre-trained autoencoders extracted diagnostic features. Recursive Feature Elimination (RFE) with XGBoost was applied for feature selection. Attention mechanisms prioritized diagnostically significant regions. Classification performance was evaluated using accuracy, sensitivity, specificity, F1-score, and AUC-ROC, while segmentation was assessed using IoU, Dice score, and localization accuracy. Five-fold cross-validation ensured robustness, and Adam optimizer with early stopping was used for optimal model training.</div></div><div><h3>Results</h3><div>The proposed framework demonstrates high accuracy in both segmentation and classification for breast cancer detection. Attention-based segmentation achieved 91.5 % localization accuracy, with IoU = 0.87 and a Dice score of 0.89, ensuring precise identification of diagnostic regions. The multi-autoencoder classification model attained 94.2 % sensitivity and 96.4 % AUC in training, with 92.4 % sensitivity and 95.8 % AUC on independent testing, outperforming traditional statistical features. XGBoost surpassed other classifiers, including Random Forest, SVM, and Logistic Regression. These results validate the model's robustness, interpretability, and clinical applicability, establishing an AI-driven diagnostic tool for accurate breast cancer segmentation and classification.</div></div><div><h3>Conclusions</h3><div>The proposed framework advances breast cancer detection by offering high accuracy, adaptability, and interpretability. Future work should explore multimodal imaging integration and lightweight implementations for real-time deployment in clinical environments.</div></div>","PeriodicalId":16920,"journal":{"name":"Journal of Radiation Research and Applied Sciences","volume":"18 3","pages":"Article 101578"},"PeriodicalIF":1.7000,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Radiation Research and Applied Sciences","FirstCategoryId":"103","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1687850725002900","RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Objective

This study aims to develop a robust diagnostic framework for breast cancer detection in mammographic images by integrating multi-autoencoder-based feature extraction with attention mechanisms. The objective is to address key limitations of traditional and state-of-the-art methods, including limited adaptability, dependence on manually engineered features, and lack of interpretability, thereby improving diagnostic accuracy and clinical utility.

Materials and methods

This study uses a multi-center dataset of 5987 mammograms (malignant: 36%, benign: 31.9%, normal: 32.1%). Images were standardized to 256 × 256 pixels with intensity normalization and augmentation. A multi-autoencoder framework comprising six independently pre-trained autoencoders extracted diagnostic features, and Recursive Feature Elimination (RFE) with XGBoost was applied for feature selection. Attention mechanisms prioritized diagnostically significant regions. Classification performance was evaluated using accuracy, sensitivity, specificity, F1-score, and AUC-ROC, while segmentation was assessed using IoU, Dice score, and localization accuracy. Five-fold cross-validation ensured robustness, and models were trained with the Adam optimizer and early stopping.
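The paper does not include code, so the following is a minimal sketch, assuming a PyTorch / scikit-learn / XGBoost stack, of how the multi-autoencoder feature extraction and RFE-with-XGBoost selection described above could be wired together. The autoencoder architecture, latent dimension, number of selected features, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of multi-autoencoder feature extraction + RFE with XGBoost.
# All layer sizes and hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier


class ConvAutoencoder(nn.Module):
    """Toy stand-in for one of the six independently pre-trained autoencoders."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 256 -> 128
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 128 -> 64
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                    # 32 x 4 x 4
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, latent_dim),
        )
        self.decoder = nn.Sequential(                   # used only during pre-training
            nn.Linear(latent_dim, 64 * 64),
            nn.Unflatten(1, (1, 64, 64)),
            nn.Upsample(size=(256, 256), mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def extract_features(images: torch.Tensor, autoencoders) -> np.ndarray:
    """Concatenate the latent codes produced by all (pre-trained) encoders."""
    with torch.no_grad():
        codes = [ae.encoder(images) for ae in autoencoders]
    return torch.cat(codes, dim=1).numpy()


if __name__ == "__main__":
    # Stand-in data: 32 grayscale mammograms at 256 x 256 with 3-class labels
    # (0 = normal, 1 = benign, 2 = malignant).
    images = torch.rand(32, 1, 256, 256)
    labels = np.random.randint(0, 3, size=32)

    autoencoders = [ConvAutoencoder().eval() for _ in range(6)]  # pre-trained in practice
    features = extract_features(images, autoencoders)            # shape: (32, 6 * 64)

    # Recursive Feature Elimination driven by XGBoost feature importances.
    selector = RFE(
        estimator=XGBClassifier(n_estimators=50),
        n_features_to_select=64,
        step=32,
    )
    selected = selector.fit_transform(features, labels)
    print("selected feature matrix:", selected.shape)
```

In practice each autoencoder would first be pre-trained with a reconstruction loss and its encoder frozen before feature extraction; only the RFE-selected features would then feed the downstream XGBoost classifier.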

Results

The proposed framework demonstrates high accuracy in both segmentation and classification for breast cancer detection. Attention-based segmentation achieved 91.5% localization accuracy, with an IoU of 0.87 and a Dice score of 0.89, ensuring precise identification of diagnostic regions. The multi-autoencoder classification model attained 94.2% sensitivity and 96.4% AUC in training, and 92.4% sensitivity and 95.8% AUC on independent testing, outperforming models built on traditional statistical features. XGBoost surpassed other classifiers, including Random Forest, SVM, and Logistic Regression. These results support the model's robustness, interpretability, and clinical applicability, establishing an AI-driven diagnostic tool for accurate breast cancer segmentation and classification.
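For reference, the segmentation metrics reported above follow their standard definitions. The sketch below computes IoU and Dice for binary masks; the masks and resulting values are synthetic and illustrative, not the study's data.

```python
# Standard IoU and Dice coefficient for binary segmentation masks.
import numpy as np


def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Intersection-over-Union and Dice coefficient for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = intersection / (union + eps)
    dice = 2 * intersection / (pred.sum() + target.sum() + eps)
    return float(iou), float(dice)


if __name__ == "__main__":
    # Two overlapping square "lesions" on a 256 x 256 grid.
    pred = np.zeros((256, 256), dtype=np.uint8)
    target = np.zeros((256, 256), dtype=np.uint8)
    pred[60:160, 60:160] = 1
    target[70:170, 70:170] = 1
    print(iou_and_dice(pred, target))   # ~ (0.68, 0.81) for this synthetic example
```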

Conclusions

The proposed framework advances breast cancer detection by offering high accuracy, adaptability, and interpretability. Future work should explore multimodal imaging integration and lightweight implementations for real-time deployment in clinical environments.
Source journal
Self-citation rate: 5.90%
Articles published: 130
Review time: 16 weeks
About the journal: Journal of Radiation Research and Applied Sciences provides a high-quality medium for the publication of substantial, original scientific and technological papers on the development and applications of nuclear and radiation science and isotopes in biology, medicine, drugs, biochemistry, microbiology, agriculture, entomology, food technology, chemistry, physics, solid-state science, engineering, and environmental and applied sciences.