Contrastive pretraining improves deep learning classification of endocardial electrograms in a preclinical model

Bram Hunt BS, Eugene Kwan PhD, Jake Bergquist PhD, James Brundage MD, Benjamin Orkild BS, Jiawei Dong PhD, Eric Paccione MS, Kyoichiro Yazaki MD, Rob S. MacLeod PhD, Derek J. Dosdall PhD, Tolga Tasdizen PhD, Ravi Ranjan MD, PhD

Heart Rhythm O2, Volume 6, Issue 4, April 2025, Pages 473–480. DOI: 10.1016/j.hroo.2025.01.008
Abstract
Background
Rotors and focal ectopies, or “drivers,” are hypothesized mechanisms of persistent atrial fibrillation (AF). Machine learning algorithms have been used to identify these drivers, but the limited size of current driver data sets constrains their performance.
Objective
We proposed that pretraining using unsupervised learning on a substantial data set of unlabeled electrograms could enhance classifier accuracy when applied to a smaller driver data set.
Methods
We used a SimCLR-based framework to pretrain a residual neural network on 113,000 unlabeled 64-electrode measurements from a canine model of AF. The network was then fine-tuned to identify drivers from intracardiac electrograms. Various augmentations, including cropping, Gaussian blurring, and rotation, were applied during pretraining to improve the robustness of the learned representations.
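The abstract does not give implementation details, but the core of a SimCLR-based framework is the NT-Xent contrastive loss: two augmented views of the same measurement form a positive pair, and all other samples in the batch serve as negatives. A minimal NumPy sketch (function name, batch shapes, and temperature are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """SimCLR NT-Xent loss for a batch of paired augmented views.

    z1, z2: (N, d) embeddings of two augmentations of the same N inputs.
    Positive pairs are (z1[i], z2[i]); the other 2N-2 samples are negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit norm -> cosine sim
    sim = z @ z.T / tau                               # (2N, 2N) similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = z1.shape[0]
    # Index of each row's positive partner: z1[i] <-> z2[i]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()    # cross-entropy over pairs
```

Minimizing this loss pulls embeddings of augmented views of the same electrogram together while pushing apart all others, which is what makes augmentations such as cropping, blurring, and rotation define the learned invariances.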
Results
Pretraining significantly improved driver detection accuracy compared with a non-pretrained network (80.8% vs 62.5%). The pretrained network also demonstrated greater resilience to reductions in training data set size, maintaining higher accuracy even with a 30% reduction in data. Gradient-weighted Class Activation Mapping analysis revealed that the network’s attention aligned well with manually annotated driver regions, suggesting that the network learned meaningful features for driver detection.
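Gradient-weighted Class Activation Mapping (Grad-CAM) localizes the regions of the input that drive a prediction by weighting the last convolutional layer's feature maps with the gradient of the class score. A minimal sketch of that computation, assuming the activations and gradients have already been extracted from a backward pass (shapes and names are illustrative):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Gradient-weighted Class Activation Map (Grad-CAM).

    activations: (K, H, W) feature maps from the last conv layer.
    gradients:   (K, H, W) gradients of the target class score with
                 respect to those feature maps.
    Returns an (H, W) map in [0, 1] highlighting influential regions.
    """
    # Channel weights: global-average-pool the gradients per feature map
    weights = gradients.mean(axis=(1, 2))              # (K,)
    cam = np.einsum("k,khw->hw", weights, activations) # weighted sum of maps
    cam = np.maximum(cam, 0.0)                         # ReLU: positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam
```

Overlaying such a map on the electrode grid is one way to check, as the authors report, whether the network's attention coincides with manually annotated driver regions.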
Conclusion
This study demonstrates that contrastive pretraining can enhance the accuracy of driver detection algorithms in AF. The findings support the broader application of transfer learning to other electrogram-based tasks, potentially improving outcomes in clinical electrophysiology.