{"title":"Development of a feature vector for accurate breast cancer detection in mammographic images","authors":"Aisulu Ismailova , Gulzira Abdikerimova , Nurgul Uzakkyzy , Raikhan Muratkhan , Murat Aitimov , Aliya Tergeusizova , Aliya Beissegul","doi":"10.1016/j.ijcce.2025.08.001","DOIUrl":null,"url":null,"abstract":"<div><div>Breast cancer remains one of the leading causes of mortality among women, making early and accurate detection crucial for effective treatment. Despite the extensive use of deep learning models in mammographic image classification, existing approaches often lack interpretability. They are prone to diagnostic errors due to image heterogeneity, noise, and the limited availability of annotated datasets. This study addresses these challenges by proposing a novel hybrid model that integrates handcrafted texture and geometric features—such as entropy, eccentricity, mean intensity, and GLCM descriptors—directly into a modified Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture. The primary objective is to improve both diagnostic accuracy and transparency in mammogram classification. Experiments were conducted on the publicly available VinDr-Mammo dataset, which includes 2136 annotated DICOM images with BI-RADS labels. The hybrid model demonstrated superior performance, achieving a 30% reduction in Total Loss, higher sensitivity (0.96), specificity (0.97), and ROC-AUC (0.96), compared to the baseline model without additional features. The integration of clinically interpretable descriptors enhances not only detection accuracy but also the explainability of the results, offering valuable insights for radiologists. These findings contribute to the development of AI-assisted diagnostic tools that are both robust and transparent, particularly in low-resource clinical environments.</div></div>","PeriodicalId":100694,"journal":{"name":"International Journal of Cognitive Computing in Engineering","volume":"7 ","pages":"Pages 12-25"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Cognitive Computing in Engineering","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666307425000348","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Breast cancer remains one of the leading causes of mortality among women, making early and accurate detection crucial for effective treatment. Despite the extensive use of deep learning models in mammographic image classification, existing approaches often lack interpretability and are prone to diagnostic errors owing to image heterogeneity, noise, and the limited availability of annotated datasets. This study addresses these challenges by proposing a novel hybrid model that integrates handcrafted texture and geometric features, such as entropy, eccentricity, mean intensity, and gray-level co-occurrence matrix (GLCM) descriptors, directly into a modified Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture. The primary objective is to improve both diagnostic accuracy and transparency in mammogram classification. Experiments were conducted on the publicly available VinDr-Mammo dataset, comprising 2136 annotated DICOM images with BI-RADS labels. Compared to a baseline model without the additional features, the hybrid model achieved a 30% reduction in total loss along with higher sensitivity (0.96), specificity (0.97), and ROC-AUC (0.96). Integrating clinically interpretable descriptors improves not only detection accuracy but also the explainability of the results, offering valuable insights for radiologists. These findings contribute to the development of AI-assisted diagnostic tools that are both robust and transparent, particularly for low-resource clinical environments.
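The abstract names the handcrafted descriptors but the paper's extraction code is not reproduced here. As a rough, non-authoritative sketch of how such texture and geometric features could be computed per candidate region with scikit-image, consider the hypothetical function below; the GLCM parameters (distances, angles, quantization levels) and the function name are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops, shannon_entropy

def handcrafted_features(roi, mask):
    """Illustrative texture + geometry descriptors for one lesion candidate.

    roi  : 2-D uint8 grayscale patch cropped from the mammogram
    mask : binary mask of the candidate lesion within the patch
    """
    # Texture: GLCM descriptors; distances/angles are assumed, not from the paper.
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    correlation = graycoprops(glcm, "correlation").mean()

    # First-order statistics named in the abstract.
    entropy = shannon_entropy(roi)
    mean_intensity = roi[mask > 0].mean()

    # Geometry: eccentricity of the largest connected component of the mask.
    props = regionprops(label(mask))
    eccentricity = (max(props, key=lambda p: p.area).eccentricity
                    if props else 0.0)

    return np.array([entropy, eccentricity, mean_intensity,
                     contrast, homogeneity, correlation], dtype=np.float32)
```

A vector like this could then be concatenated with the learned region features inside the detection head; how the authors fuse the two streams in their modified Faster R-CNN is detailed in the full paper, not in this sketch.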
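For readers less familiar with the reported metrics, a minimal sketch of how sensitivity, specificity, and ROC-AUC are typically computed for binary lesion classification with scikit-learn follows; the threshold of 0.5 and the function name are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and ROC-AUC from scores and binary labels."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)  # true-positive rate (recall)
    specificity = tn / (tn + fp)  # true-negative rate
    auc = roc_auc_score(y_true, y_score)  # threshold-free ranking quality
    return sensitivity, specificity, auc
```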