EyeMap: A fusion-based method for eye movement-based visual attention maps as predictive markers of parkinsonism

Akshay S, Amudha J, Amitabh Bhattacharya, Nitish Kamble, Pramod Kumar Pal

MethodsX, Volume 15, Article 103607. Published 2025-09-03. DOI: 10.1016/j.mex.2025.103607
Abstract
EyeMap is a method for visualizing and classifying eye movement patterns using scanpaths, fixation heatmaps, and gridded Areas of Interest (AOIs). EyeMap combines predictions from modality-specific machine learning and deep learning models using a late-fusion technique to produce interpretable gaze representations. By capturing spatial, temporal, and regional aspects of gaze data, the method enhances diagnostic interpretability and enables the detection of Parkinsonian symptoms. The three visualizations provide complementary perspectives on gaze behavior: spatial focus, temporal scan order, and attention allocation across regions of interest. To support the development and validation of the method, a dataset of such visualizations was created from structured visual tasks completed by both Parkinson's disease (PD) patients and healthy controls. EyeMap shows that vision-driven models may detect PD-specific gaze anomalies without the need for manual feature engineering. All implementation steps, from data acquisition to model fusion, are fully described to enable reproducibility and potential adaptation to other gaze-based analysis contexts.
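The three gaze representations described above can be sketched from raw fixation coordinates. The snippet below is a minimal illustration, not the paper's implementation: the function names, screen resolution, and grid sizes are assumptions, and a real pipeline would render these arrays as images before feeding them to the modality-specific models.

```python
import numpy as np

def fixation_heatmap(fixations, screen=(1920, 1080), bins=(48, 27)):
    """Bin fixation points (x, y) into a 2D density map — spatial focus."""
    x, y = fixations[:, 0], fixations[:, 1]
    heat, _, _ = np.histogram2d(
        x, y, bins=bins, range=[[0, screen[0]], [0, screen[1]]]
    )
    return heat.T  # transpose so rows index y and columns index x

def aoi_grid(fixations, screen=(1920, 1080), grid=(3, 3)):
    """Dwell counts per cell of a coarse grid of Areas of Interest."""
    return fixation_heatmap(fixations, screen, bins=grid)

def scanpath(fixations):
    """Ordered (x, y) fixation sequence — temporal scan order."""
    return [tuple(p) for p in fixations]
```

A heatmap discards order but shows where attention concentrated; the scanpath keeps order; the AOI grid summarizes attention allocation per region, which is what makes the three views complementary.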
1. A structured method was developed to visualize eye-tracking data in three distinct formats.
2. Classification outputs from separate gaze visualizations were combined using softmax-level fusion.
3. A new eye-tracking dataset was generated to support method development and reproducibility.
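The softmax-level late fusion mentioned in the highlights can be sketched as follows. This is an illustrative averaging scheme under assumed uniform weights; the paper's exact fusion weights and model outputs are not specified here, and `late_fusion` is a hypothetical helper name.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = np.asarray(z, dtype=float)
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(logits_per_modality, weights=None):
    """Fuse modality-specific classifiers at the softmax level:
    convert each model's logits to probabilities, take a weighted
    average, and predict the class with the highest fused score."""
    probs = np.stack([softmax(l) for l in logits_per_modality])
    if weights is None:  # default: equal weight per modality
        weights = np.full(len(probs), 1.0 / len(probs))
    fused = np.tensordot(weights, probs, axes=1)
    return fused, int(np.argmax(fused))
```

Fusing at the probability level (rather than concatenating features) lets each visualization keep its own specialized model while still producing a single decision, e.g. `late_fusion([scanpath_logits, heatmap_logits, aoi_logits])`.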