Hybrid transformer-based model for mammogram classification by integrating prior and current images

Medical Physics, 52(5): 2999-3014. Impact factor 3.2; JCR Q1, Radiology, Nuclear Medicine & Medical Imaging; CAS Tier 2 (Medicine).
Publication date: 2025-01-30 | DOI: 10.1002/mp.17650
Afsana Ahsan Jeny, Sahand Hamzehei, Annie Jin, Stephen Andrew Baker, Tucker Van Rathe, Jun Bai, Clifford Yang, Sheida Nabavi
{"title":"Hybrid transformer-based model for mammogram classification by integrating prior and current images","authors":"Afsana Ahsan Jeny,&nbsp;Sahand Hamzehei,&nbsp;Annie Jin,&nbsp;Stephen Andrew Baker,&nbsp;Tucker Van Rathe,&nbsp;Jun Bai,&nbsp;Clifford Yang,&nbsp;Sheida Nabavi","doi":"10.1002/mp.17650","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Breast cancer screening via mammography plays a crucial role in early detection, significantly impacting women's health outcomes worldwide. However, the manual analysis of mammographic images is time-consuming and requires specialized expertise, presenting substantial challenges in medical practice.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>To address these challenges, we introduce a CNN-Transformer based model tailored for breast cancer classification through mammographic analysis. This model leverages both prior and current images to monitor temporal changes, aiming to enhance the efficiency and accuracy (ACC) of computer-aided diagnosis systems by mimicking the detailed examination process of radiologists.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>In this study, our proposed model incorporates a novel integration of a position-wise feedforward network and multi-head self-attention, enabling it to detect abnormal or cancerous changes in mammograms over time. Additionally, the model employs positional encoding and channel attention methods to accurately highlight critical spatial features, thus precisely differentiating between normal and cancerous tissues. Our methodology utilizes focal loss (FL) to precisely address challenging instances that are difficult to classify, reducing false negatives and false positives to improve diagnostic ACC.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>We compared our model with eight baseline models; specifically, we utilized only current images for the single model ResNet50 while employing both prior and current images for the remaining models in terms of accuracy (ACC), sensitivity (SEN), precision (PRE), specificity (SPE), F1 score, and area under the curve (AUC). The results demonstrate that the proposed model outperforms the baseline models, achieving an ACC of 90.80%, SEN of 90.80%, PRE of 90.80%, SPE of 90.88%, an F1 score of 90.95%, and an AUC of 92.58%. The codes and related information are available at https://github.com/NabaviLab/PCTM.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>Our proposed CNN-Transformer model integrates both prior and current images, removes long-range dependencies, and enhances its capability for nuanced classification. The application of FL reduces false positive rate (FPR) and false negative rates (FNR), improving both SEN and SPE. Furthermore, the model achieves the lowest false discovery rate and FNR across various abnormalities, including masses, calcification, and architectural distortions (ADs). 
These low error rates highlight the model's reliability and underscore its potential to improve early breast cancer detection in clinical practice.</p>\n </section>\n </div>","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"52 5","pages":"2999-3014"},"PeriodicalIF":3.2000,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mp.17650","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0

Abstract

Background

Breast cancer screening via mammography plays a crucial role in early detection, significantly impacting women's health outcomes worldwide. However, the manual analysis of mammographic images is time-consuming and requires specialized expertise, presenting substantial challenges in medical practice.

Purpose

To address these challenges, we introduce a CNN-Transformer-based model tailored for breast cancer classification through mammographic analysis. This model leverages both prior and current images to monitor temporal changes, aiming to enhance the efficiency and accuracy (ACC) of computer-aided diagnosis systems by mimicking the detailed examination process of radiologists.

Methods

In this study, our proposed model incorporates a novel integration of a position-wise feedforward network and multi-head self-attention, enabling it to detect abnormal or cancerous changes in mammograms over time. The model also employs positional encoding and channel attention to highlight critical spatial features and to differentiate normal from cancerous tissue. Focal loss (FL) is used to emphasize hard-to-classify instances, reducing false negatives and false positives and thereby improving diagnostic ACC.
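
As an illustration only (this is not the authors' released code, which is linked under Results), the sketch below shows generic versions of two components named above: a Transformer encoder block pairing multi-head self-attention with a position-wise feedforward network, and a standard binary focal loss. All layer sizes, hyperparameters, and the PyTorch framing are assumptions made for the example; in the paper's setting, the token sequence would come from CNN features of the prior and current mammograms.

```python
# Minimal, illustrative sketch (not the authors' implementation) of a Transformer
# encoder block (multi-head self-attention + position-wise feedforward network)
# and a standard binary focal loss. Dimensions and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderBlock(nn.Module):
    """Multi-head self-attention followed by a position-wise feedforward network."""
    def __init__(self, dim: int = 256, heads: int = 8, ff_dim: int = 1024, dropout: float = 0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, ff_dim), nn.GELU(), nn.Linear(ff_dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); tokens could be CNN feature patches from
        # the prior and current images concatenated along the token axis.
        a, _ = self.attn(x, x, x)            # self-attention over all tokens
        x = self.norm1(x + a)                # residual connection + layer norm
        return self.norm2(x + self.ffn(x))   # position-wise FFN applied per token

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    `targets` is a float tensor of 0/1 labels with the same shape as `logits`."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                  # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```

The focal loss down-weights well-classified examples via the (1 - p_t)^gamma factor, which is the mechanism the Methods section relies on to focus training on hard cases.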

Results

We compared our model with eight baseline models in terms of accuracy (ACC), sensitivity (SEN), precision (PRE), specificity (SPE), F1 score, and area under the curve (AUC); the single-image ResNet50 baseline used only current images, while the remaining baselines used both prior and current images. The results demonstrate that the proposed model outperforms the baseline models, achieving an ACC of 90.80%, SEN of 90.80%, PRE of 90.80%, SPE of 90.88%, an F1 score of 90.95%, and an AUC of 92.58%. The code and related information are available at https://github.com/NabaviLab/PCTM.
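
For reference, the sketch below shows how the six reported metrics can be computed from binary labels, thresholded predictions, and model scores using scikit-learn; the arrays are toy placeholders, not data or results from the study.

```python
# Hedged example of computing ACC, SEN, PRE, SPE, F1, and AUC for a binary task.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

y_true  = np.array([0, 1, 1, 0, 1, 0, 1, 0])                    # ground-truth labels (toy example)
y_score = np.array([0.2, 0.9, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3])    # predicted probabilities
y_pred  = (y_score >= 0.5).astype(int)                          # thresholded predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy (ACC)
sen = tp / (tp + fn)                    # sensitivity / recall (SEN)
pre = tp / (tp + fp)                    # precision (PRE)
spe = tn / (tn + fp)                    # specificity (SPE)
f1  = f1_score(y_true, y_pred)          # F1 score
auc = roc_auc_score(y_true, y_score)    # area under the ROC curve (AUC)
print(f"ACC={acc:.3f} SEN={sen:.3f} PRE={pre:.3f} SPE={spe:.3f} F1={f1:.3f} AUC={auc:.3f}")
```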

Conclusions

Our proposed CNN-Transformer model integrates both prior and current images, removes long-range dependencies, and enhances its capability for nuanced classification. The application of FL reduces the false positive rate (FPR) and false negative rate (FNR), improving both SEN and SPE. Furthermore, the model achieves the lowest false discovery rate (FDR) and FNR across various abnormalities, including masses, calcifications, and architectural distortions (ADs). These low error rates highlight the model's reliability and underscore its potential to improve early breast cancer detection in clinical practice.
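
For clarity, the error rates referred to above are taken here in their standard confusion-matrix definitions (stated for reference; the paper is assumed to use the conventional forms):

$$
\mathrm{FPR} = \frac{FP}{FP + TN}, \qquad
\mathrm{FNR} = \frac{FN}{FN + TP}, \qquad
\mathrm{FDR} = \frac{FP}{FP + TP}.
$$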

Source journal
Medical Physics (Medicine; Nuclear Medicine)
CiteScore: 6.80 | Self-citation rate: 15.80% | Annual publications: 660 | Review time: 1.7 months
Journal description: Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in (1) basic science developments with high potential for clinical translation, (2) clinical applications of cutting-edge engineering and physics innovations, and (3) broadly applicable and innovative clinical physics developments. Medical Physics is a journal of global scope and reach. By publishing in Medical Physics your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.