Advancing heart disease diagnosis with vision-based transformer architectures applied to ECG imagery

Zeynep Hilal Kilimci, Mustafa Yalcin, Ayhan Kucukmanisa, Amit Kumar Mishra

Image and Vision Computing, Volume 162, Article 105666. DOI: 10.1016/j.imavis.2025.105666. Published 2025-07-19.
Citations: 0
Abstract
Cardiovascular disease, a critical medical condition affecting the heart and blood vessels that encompasses coronary artery disease, heart failure, and myocardial infarction, requires timely detection for effective clinical intervention. Our goal is to improve the detection of heart disease through proactive interventions and personalized treatments. Early identification of at-risk individuals using advanced technologies can mitigate disease progression and reduce adverse outcomes. Building on recent technological advances, we propose a novel approach to heart disease detection based on vision transformer models, namely Google ViT, Microsoft BEiT, DeiT, and Swin-Tiny. This work marks the first application of transformer models to image-based electrocardiogram (ECG) data for the detection of heart disease. The experimental results demonstrate the efficacy of vision transformers in this domain, with BEiT achieving the highest classification accuracy of 95.9% in a 5-fold cross-validation setting, further improving to 96.6% under an 80-20 holdout split. Swin-Tiny also exhibited strong performance with an accuracy of 95.2%, while Google ViT and DeiT achieved 94.3% and 94.9%, respectively, outperforming many traditional models in ECG-based diagnostics. These findings highlight the potential of vision transformer models in enhancing diagnostic accuracy and risk stratification. The results further underscore the importance of model selection in optimizing performance, with BEiT emerging as the most promising candidate. This study contributes to the growing body of research on transformer-based medical diagnostics and paves the way for future investigations into their clinical applicability and generalizability.
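To make the described setup concrete, the sketch below shows one plausible way to fine-tune a pretrained BEiT checkpoint on ECG images using the Hugging Face Transformers and torchvision libraries. This is not the authors' code: the checkpoint name, dataset folder layout (`ecg_images/train`), the four-class label assumption, and all hyperparameters are illustrative assumptions made here.

```python
# Minimal, hypothetical sketch of fine-tuning BEiT on ECG images.
# Assumptions: ECG recordings exported as images, stored one folder per class.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from transformers import BeitForImageClassification, BeitImageProcessor

processor = BeitImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained(
    "microsoft/beit-base-patch16-224",
    num_labels=4,                   # assumed number of ECG classes
    ignore_mismatched_sizes=True,   # swap the ImageNet head for a new 4-class head
)

# Resize and normalize ECG images to the statistics expected by the checkpoint.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=processor.image_mean, std=processor.image_std),
])
train_set = datasets.ImageFolder("ecg_images/train", transform=preprocess)  # assumed path
loader = DataLoader(train_set, batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device).train()

for epoch in range(3):  # small epoch count purely for illustration
    for images, labels in loader:
        outputs = model(pixel_values=images.to(device), labels=labels.to(device))
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The same pattern applies to the other architectures named in the abstract (ViT, DeiT, Swin) by swapping the model class and checkpoint; the 5-fold cross-validation and 80-20 holdout evaluations reported above would wrap this training loop in the corresponding data splits.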
Journal description
The primary aim of Image and Vision Computing is to provide an effective medium for the interchange of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to foster a deeper understanding of the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.