Attention-Based Ensemble Network for Effective Breast Cancer Classification over Benchmarks
Su Myat Thwin, S. Malebary, A. Abulfaraj, Hyun-Seok Park
Technologies, published 2024-01-23. DOI: 10.3390/technologies12020016
Abstract
Globally, breast cancer (BC) is a major cause of death among women. Researchers have therefore applied various machine learning- and deep learning-based methods for its early and accurate detection using X-ray, MRI, and mammography image modalities. However, machine learning models rely on handcrafted feature extraction, which requires domain experts to select optimal features, limits accuracy, and leads to high false-positive rates. Deep learning models overcome these limitations, but they require large amounts of training data and computational resources, and their performance still needs further improvement. To address this, we employ a novel framework called the Ensemble-based Channel and Spatial Attention Network (ECS-A-Net) to automatically classify affected regions within BC images. The proposed framework consists of two phases: in the first phase, we apply different augmentation techniques to enlarge the input data, while the second phase uses an ensemble technique that leverages a modified SE-ResNet50 and InceptionV3 in parallel as backbones for feature extraction, followed by Channel Attention (CA) and Spatial Attention (SA) modules applied in series for more dominant feature selection. To further validate ECS-A-Net, we conducted extensive experiments against several competitive state-of-the-art (SOTA) techniques on two benchmarks, DDSM and MIAS, where the proposed model achieved 96.50% accuracy on DDSM and 95.33% accuracy on MIAS. Additionally, the experimental results demonstrate that our network outperforms the other methods on various evaluation indicators, including accuracy, sensitivity, and specificity.
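
The sketch below illustrates, in PyTorch, the general pattern the abstract describes: two backbones run in parallel, their feature maps are fused, and channel attention followed by spatial attention is applied in series before classification. It is a minimal illustration under assumptions, not the paper's implementation: the backbone stubs, the fusion by channel concatenation, the reduction ratio, and the classifier head are all hypothetical choices made only so the example is self-contained and runnable; the paper's modified SE-ResNet50 and InceptionV3 backbones and exact wiring are not reproduced here.

```python
# Minimal sketch of a parallel two-backbone ensemble with channel attention (CA)
# followed by spatial attention (SA). All module sizes and the fusion strategy
# are illustrative assumptions, not the authors' published architecture.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style channel attention: squeeze with global pooling, excite with a small MLP."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # re-weight each channel


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: pool across channels, convolve, gate each location."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate  # re-weight each spatial position


class EnsembleAttentionClassifier(nn.Module):
    """Two backbones in parallel -> fused features -> CA then SA in series -> classifier."""
    def __init__(self, backbone_a: nn.Module, backbone_b: nn.Module,
                 channels_a: int, channels_b: int, num_classes: int = 2):
        super().__init__()
        self.backbone_a = backbone_a              # stand-in for a modified SE-ResNet50
        self.backbone_b = backbone_b              # stand-in for an InceptionV3 extractor
        fused_channels = channels_a + channels_b  # fusion by concatenation (assumption)
        self.ca = ChannelAttention(fused_channels)
        self.sa = SpatialAttention()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(fused_channels, num_classes))

    def forward(self, x):
        fa = self.backbone_a(x)                   # (B, Ca, H, W)
        fb = self.backbone_b(x)                   # (B, Cb, H, W), assumed same H and W
        fused = torch.cat([fa, fb], dim=1)        # parallel ensemble of the two streams
        fused = self.sa(self.ca(fused))           # channel attention, then spatial attention
        return self.head(fused)


if __name__ == "__main__":
    # Plain conv stacks stand in for the real backbones so the sketch runs anywhere.
    stub_a = nn.Sequential(nn.Conv2d(3, 64, 3, stride=4, padding=1), nn.ReLU())
    stub_b = nn.Sequential(nn.Conv2d(3, 32, 3, stride=4, padding=1), nn.ReLU())
    model = EnsembleAttentionClassifier(stub_a, stub_b, channels_a=64, channels_b=32)
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])
```

Applying channel attention before spatial attention, as in the forward pass above, mirrors the "series" ordering the abstract describes; swapping in pretrained SE-ResNet50 and InceptionV3 feature extractors for the stubs would require matching their output spatial sizes before concatenation.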