{"title":"A hybrid deep learning model for mammographic breast cancer detection: Multi-autoencoder and attention mechanisms","authors":"Long Jun Yan , Lei Wu , Meng Xia , Lan He","doi":"10.1016/j.jrras.2025.101578","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>This study aims to develop a robust diagnostic framework for breast cancer detection in mammographic images by integrating multi-autoencoder-based feature extraction with attention mechanisms. The objective is to address key limitations in traditional and state-of-the-art methods, including limited adaptability, manual feature dependency, and lack of interpretability, ensuring enhanced diagnostic accuracy and clinical utility.</div></div><div><h3>Materials and methods</h3><div>This study utilizes a multi-center dataset of 5987 mammograms (malignant: 36 %, benign: 31.9 %, normal: 32.1 %). Images were standardized to 256 × 256 pixels, with intensity normalization and augmentation. A multi-autoencoder framework with six independently pre-trained autoencoders extracted diagnostic features. Recursive Feature Elimination (RFE) with XGBoost was applied for feature selection. Attention mechanisms prioritized diagnostically significant regions. Classification performance was evaluated using accuracy, sensitivity, specificity, F1-score, and AUC-ROC, while segmentation was assessed using IoU, Dice score, and localization accuracy. Five-fold cross-validation ensured robustness, and Adam optimizer with early stopping was used for optimal model training.</div></div><div><h3>Results</h3><div>The proposed framework demonstrates high accuracy in both segmentation and classification for breast cancer detection. Attention-based segmentation achieved 91.5 % localization accuracy, with IoU = 0.87 and a Dice score of 0.89, ensuring precise identification of diagnostic regions. The multi-autoencoder classification model attained 94.2 % sensitivity and 96.4 % AUC in training, with 92.4 % sensitivity and 95.8 % AUC on independent testing, outperforming traditional statistical features. XGBoost surpassed other classifiers, including Random Forest, SVM, and Logistic Regression. These results validate the model's robustness, interpretability, and clinical applicability, establishing an AI-driven diagnostic tool for accurate breast cancer segmentation and classification.</div></div><div><h3>Conclusions</h3><div>The proposed framework advances breast cancer detection by offering high accuracy, adaptability, and interpretability. Future work should explore multimodal imaging integration and lightweight implementations for real-time deployment in clinical environments.</div></div>","PeriodicalId":16920,"journal":{"name":"Journal of Radiation Research and Applied Sciences","volume":"18 3","pages":"Article 101578"},"PeriodicalIF":1.7000,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Radiation Research and Applied Sciences","FirstCategoryId":"103","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1687850725002900","RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Abstract
Objective
This study aims to develop a robust diagnostic framework for breast cancer detection in mammographic images by integrating multi-autoencoder-based feature extraction with attention mechanisms. The framework targets key limitations of traditional and state-of-the-art methods, including limited adaptability, dependence on manual feature engineering, and lack of interpretability, with the goal of improving diagnostic accuracy and clinical utility.
Materials and methods
This study uses a multi-center dataset of 5987 mammograms (malignant: 36 %, benign: 31.9 %, normal: 32.1 %). Images were resized to 256 × 256 pixels and underwent intensity normalization and data augmentation. A multi-autoencoder framework comprising six independently pre-trained autoencoders extracted diagnostic features, and Recursive Feature Elimination (RFE) with XGBoost was applied for feature selection. Attention mechanisms prioritized diagnostically significant regions. Classification performance was evaluated using accuracy, sensitivity, specificity, F1-score, and AUC-ROC, while segmentation was assessed using IoU, Dice score, and localization accuracy. Five-fold cross-validation was used to assess robustness, and models were trained with the Adam optimizer and early stopping.
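As a concrete illustration of this pipeline, the sketch below wires six convolutional autoencoders into a shared feature extractor and applies RFE with an XGBoost estimator for feature selection. It assumes PyTorch, scikit-learn, and xgboost; the layer sizes, latent dimension, and RFE settings are illustrative assumptions rather than the authors' exact configuration, and the per-autoencoder pre-training step (reconstruction loss) is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

class ConvAutoencoder(nn.Module):
    # One of the six independently pre-trained autoencoders (illustrative sizes).
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 3, 2, 1, output_padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 16, 3, 2, 1, output_padding=1), nn.ReLU(),  # 32 -> 64
            nn.ConvTranspose2d(16, 8, 3, 2, 1, output_padding=1), nn.ReLU(),   # 64 -> 128
            nn.ConvTranspose2d(8, 1, 3, 2, 1, output_padding=1), nn.Sigmoid(), # 128 -> 256
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def extract_features(autoencoders, images):
    # Concatenate the latent codes from all encoders into one feature vector per image.
    return torch.cat([ae.encoder(images) for ae in autoencoders], dim=1).numpy()

np.random.seed(0)
# Six autoencoders; in the study each would first be pre-trained on reconstruction.
autoencoders = [ConvAutoencoder().eval() for _ in range(6)]

# Stand-in batch for preprocessed 256 x 256 mammograms with three-class labels.
images = torch.rand(32, 1, 256, 256)
labels = np.random.randint(0, 3, size=32)  # 0 = normal, 1 = benign, 2 = malignant

X = extract_features(autoencoders, images)  # shape: (32, 6 * 128)

# Recursive Feature Elimination driven by XGBoost feature importances.
selector = RFE(
    estimator=XGBClassifier(n_estimators=100, eval_metric="mlogloss"),
    n_features_to_select=64,
    step=0.1,  # remove 10 % of the features at each elimination round
)
X_selected = selector.fit_transform(X, labels)
print(X_selected.shape)  # (32, 64)
```

The selected feature matrix would then feed the downstream XGBoost classifier described in the Results; RFE works here because XGBClassifier exposes feature_importances_, which RFE uses to rank and prune features.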
Results
The proposed framework demonstrates high accuracy in both segmentation and classification for breast cancer detection. Attention-based segmentation achieved 91.5 % localization accuracy, with an IoU of 0.87 and a Dice score of 0.89, enabling precise identification of diagnostic regions. The multi-autoencoder classification model attained 94.2 % sensitivity and 96.4 % AUC during training, and 92.4 % sensitivity with 95.8 % AUC on independent testing, outperforming classifiers built on traditional statistical features. XGBoost surpassed the other classifiers evaluated, including Random Forest, SVM, and Logistic Regression. These results support the model's robustness, interpretability, and clinical applicability, establishing an AI-driven diagnostic tool for accurate breast cancer segmentation and classification.
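For reference, the IoU and Dice metrics cited above follow their standard definitions for binary masks. The snippet below is a generic reference implementation, not code from the study.

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute IoU and Dice for binary masks (1 = lesion, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)                            # |A ∩ B| / |A ∪ B|
    dice = 2 * inter / (pred.sum() + target.sum() + eps)   # 2|A ∩ B| / (|A| + |B|)
    return iou, dice

# Toy example: two partially overlapping square masks.
pred = np.zeros((256, 256), dtype=np.uint8)
target = np.zeros((256, 256), dtype=np.uint8)
pred[50:150, 50:150] = 1
target[70:170, 70:170] = 1
print(iou_and_dice(pred, target))  # IoU ~ 0.47, Dice ~ 0.64
```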
Conclusions
The proposed framework advances breast cancer detection by offering high accuracy, adaptability, and interpretability. Future work should explore multimodal imaging integration and lightweight implementations for real-time deployment in clinical environments.
About the journal
Journal of Radiation Research and Applied Sciences provides a high-quality medium for the publication of substantial, original scientific and technological papers on the development and applications of nuclear and radiation science and isotopes in biology, medicine, drugs, biochemistry, microbiology, agriculture, entomology, food technology, chemistry, physics, solid-state science, engineering, environmental science, and applied sciences.