Niha Talukdar, Anchita Kakati, Upasana Barman, Jyoti Prakash Medhi, Kandarpa Kumar Sarma, Geetanjali Barman, Binoy Kumar Choudhury
Innovative Practice in Breast Health, Volume 7, Article 100038 (2025). DOI: 10.1016/j.ibreh.2025.100038
Breast cancer detection redefined: Integrating Xception and EfficientNet-B5 for superior mammography imaging
Background and Objective:
Breast cancer is a leading cause of mortality among women, and early detection is crucial for effective treatment. This study aims to enhance mammographic breast cancer detection by integrating an autoencoder's encoder unit with advanced Convolutional Neural Networks (CNNs), incorporating soft attention gates and feature merging to improve accuracy. A custom multi-head attention mechanism is used for precise tumor segmentation, with a focus on robust validation across diverse datasets.
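As an illustration only (not the authors' implementation, whose layer sizes and weights are not given in the abstract), the described pattern of gating two backbone feature vectors, merging them, and compressing the result through an autoencoder-style encoder bottleneck can be sketched in NumPy. All dimensions and weight initializations below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled feature vectors from the two CNN backbones
# (Xception and EfficientNet-B5); 2048 dims chosen for illustration.
xception_feat = rng.standard_normal((1, 2048))
effnet_feat = rng.standard_normal((1, 2048))

def soft_attention_gate(feat, w, b):
    """Element-wise sigmoid gate: re-weights each feature into [0, 1]."""
    gate = 1.0 / (1.0 + np.exp(-(feat @ w + b)))
    return feat * gate

def encoder(feat, w, b):
    """Autoencoder-style encoder: a ReLU bottleneck that compresses
    the merged features into a lower-dimensional latent code."""
    return np.maximum(0.0, feat @ w + b)

d = xception_feat.shape[1]
w_gate, b_gate = rng.standard_normal((d, d)) * 0.01, np.zeros(d)

gated_x = soft_attention_gate(xception_feat, w_gate, b_gate)
gated_e = soft_attention_gate(effnet_feat, w_gate, b_gate)

# Feature merging: concatenate the gated backbone features.
fused = np.concatenate([gated_x, gated_e], axis=1)          # (1, 4096)
w_enc, b_enc = rng.standard_normal((2 * d, 256)) * 0.01, np.zeros(256)
latent = encoder(fused, w_enc, b_enc)                        # (1, 256)
print(latent.shape)
```

A classification head (e.g. a small dense layer with softmax over benign/malignant) would then operate on `latent`; in a trained system the gate and encoder weights would be learned rather than random.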
Methods:
The research involved 3816 mammogram samples (2376 benign, 1440 malignant) and employed deep learning techniques combining model fusion and autoencoders for classification. A custom multi-head attention mechanism was applied for tumor segmentation. The models were validated on publicly available datasets, MIAS and CBIS-DDSM.
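The abstract names a "custom multi-head attention mechanism" for segmentation without giving its details; a minimal generic multi-head self-attention over patch embeddings, written in plain NumPy with random stand-in weights, is sketched below. Token count, embedding size, and head count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def multi_head_attention(x, num_heads):
    """Minimal scaled dot-product multi-head self-attention.

    x: (n_tokens, d_model) array of patch embeddings.
    Random projection matrices stand in for learned Q/K/V weights.
    """
    n, d = x.shape
    assert d % num_heads == 0
    dh = d // num_heads
    wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    out = np.empty_like(x)
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = q[:, s] @ k[:, s].T / np.sqrt(dh)   # (n, n) per head
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)      # row-wise softmax
        out[:, s] = attn @ v[:, s]
    return out

tokens = rng.standard_normal((16, 64))  # e.g. 16 mammogram patch embeddings
y = multi_head_attention(tokens, num_heads=4)
print(y.shape)
```

In a segmentation network such attended features would typically be reshaped back to a spatial grid and decoded into a per-pixel tumor mask.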
Results:
On the MIAS dataset, Xception and EfficientNet-B5 CNN models outperformed others, achieving a classification accuracy of 96.88% after autoencoder integration. For segmentation, the model demonstrated strong alignment with tumor regions, achieving a Dice Coefficient of 0.4353, Intersection over Union (IoU) of 0.2998, and F1-Score of 0.4318.
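The three segmentation metrics reported above have standard definitions on binary masks, sketched here on a toy example (the masks are made up for illustration). Note that on a single binary mask the F1-score is mathematically identical to the Dice coefficient (both equal 2TP / (2TP + FP + FN)); small gaps between reported Dice and F1 values usually arise from how per-image scores are averaged.

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over Union (Jaccard index): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy 2x3 predicted and ground-truth tumor masks.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)

# intersection = 2, |pred| = |gt| = 3, union = 4
print(round(dice(pred, gt), 3), round(iou(pred, gt), 3))  # → 0.667 0.5
```

For non-empty masks Dice ≥ IoU always holds (Dice = 2·IoU / (1 + IoU)), consistent with the reported 0.4353 Dice versus 0.2998 IoU.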
Conclusion:
This study developed a robust deep learning approach combining Xception and EfficientNet-B5 for breast cancer diagnosis and segmentation. The fused model demonstrated high classification accuracy and reliable segmentation performance, indicating strong potential for clinical applications in early breast cancer detection and treatment planning.