Development of a feature vector for accurate breast cancer detection in mammographic images

Aisulu Ismailova, Gulzira Abdikerimova, Nurgul Uzakkyzy, Raikhan Muratkhan, Murat Aitimov, Aliya Tergeusizova, Aliya Beissegul
{"title":"Development of a feature vector for accurate breast cancer detection in mammographic images","authors":"Aisulu Ismailova ,&nbsp;Gulzira Abdikerimova ,&nbsp;Nurgul Uzakkyzy ,&nbsp;Raikhan Muratkhan ,&nbsp;Murat Aitimov ,&nbsp;Aliya Tergeusizova ,&nbsp;Aliya Beissegul","doi":"10.1016/j.ijcce.2025.08.001","DOIUrl":null,"url":null,"abstract":"<div><div>Breast cancer remains one of the leading causes of mortality among women, making early and accurate detection crucial for effective treatment. Despite the extensive use of deep learning models in mammographic image classification, existing approaches often lack interpretability. They are prone to diagnostic errors due to image heterogeneity, noise, and the limited availability of annotated datasets. This study addresses these challenges by proposing a novel hybrid model that integrates handcrafted texture and geometric features—such as entropy, eccentricity, mean intensity, and GLCM descriptors—directly into a modified Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture. The primary objective is to improve both diagnostic accuracy and transparency in mammogram classification. Experiments were conducted on the publicly available VinDr-Mammo dataset, which includes 2136 annotated DICOM images with BI-RADS labels. The hybrid model demonstrated superior performance, achieving a 30% reduction in Total Loss, higher sensitivity (0.96), specificity (0.97), and ROC-AUC (0.96), compared to the baseline model without additional features. The integration of clinically interpretable descriptors enhances not only detection accuracy but also the explainability of the results, offering valuable insights for radiologists. These findings contribute to the development of AI-assisted diagnostic tools that are both robust and transparent, particularly in low-resource clinical environments.</div></div>","PeriodicalId":100694,"journal":{"name":"International Journal of Cognitive Computing in Engineering","volume":"7 ","pages":"Pages 12-25"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Cognitive Computing in Engineering","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666307425000348","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Breast cancer remains one of the leading causes of mortality among women, making early and accurate detection crucial for effective treatment. Although deep learning models are widely used for mammographic image classification, existing approaches often lack interpretability and are prone to diagnostic errors caused by image heterogeneity, noise, and the limited availability of annotated datasets. This study addresses these challenges by proposing a novel hybrid model that integrates handcrafted texture and geometric features, such as entropy, eccentricity, mean intensity, and GLCM descriptors, directly into a modified Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture. The primary objective is to improve both diagnostic accuracy and transparency in mammogram classification. Experiments were conducted on the publicly available VinDr-Mammo dataset, which includes 2136 annotated DICOM images with BI-RADS labels. The hybrid model outperformed a baseline without the additional features, achieving a 30% reduction in total loss together with higher sensitivity (0.96), specificity (0.97), and ROC-AUC (0.96). Integrating clinically interpretable descriptors improves not only detection accuracy but also the explainability of the results, offering valuable insights for radiologists. These findings contribute to the development of AI-assisted diagnostic tools that are both robust and transparent, particularly for low-resource clinical environments.
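
For concreteness, the sketch below shows one way the handcrafted descriptors named in the abstract (entropy, eccentricity, mean intensity, and GLCM statistics) could be computed for a single lesion-candidate patch. This is an illustrative assumption using scikit-image and NumPy, not the authors' implementation; the function name, the choice of GLCM properties, and the single-region assumption are all hypothetical.

```python
# Minimal sketch (not the paper's code) of a handcrafted feature vector
# combining gray-level entropy, region eccentricity, mean intensity, and
# GLCM texture statistics for one grayscale patch plus its binary mask.
# Assumes scikit-image and NumPy are installed and the mask contains at
# least one connected foreground region.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops, shannon_entropy


def region_feature_vector(patch: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return a 1-D descriptor vector for a grayscale patch and its mask."""
    # Rescale the patch to 8-bit gray levels for the co-occurrence matrix.
    patch_u8 = (255 * (patch - patch.min()) / (np.ptp(patch) + 1e-8)).astype(np.uint8)

    # Texture: GLCM at distance 1 along two directions, averaged.
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    energy = graycoprops(glcm, "energy").mean()
    correlation = graycoprops(glcm, "correlation").mean()

    # Geometry and intensity of the largest labeled lesion candidate.
    props = regionprops(label(mask.astype(int)), intensity_image=patch)[0]
    eccentricity = props.eccentricity
    mean_intensity = props.mean_intensity

    # Global gray-level entropy of the patch.
    entropy = shannon_entropy(patch_u8)

    return np.array([entropy, eccentricity, mean_intensity,
                     contrast, homogeneity, energy, correlation])
```

A vector of this kind could then be concatenated with the learned region features inside the modified Faster R-CNN head; the exact fusion mechanism used by the authors is described in the full paper and is not reproduced here.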