Microsaccade Segmentation using Directional Variance Analysis and Artificial Neural Networks

S. Suthaharan, Lee Daniel M.W., Min Zhang, E. Rossi
{"title":"Microsaccade Segmentation using Directional Variance Analysis and Artificial Neural Networks","authors":"S. Suthaharan, Lee Daniel M.W., Min Zhang, E. Rossi","doi":"10.1109/IRI58017.2023.00008","DOIUrl":null,"url":null,"abstract":"Fixational eye movements (FEMs) are an essential component of vision and there is considerable research interest in using them as biomarkers of brain injury and neurodegeneration. Study of FEMs often involves segmenting them into their individual components, primarily microsaccades and drifts. In practice, velocity (or acceleration) thresholds are commonly adapted-while they are generally imperfect-requiring tuning of thresholds and manual correction and verification by human graders. Manual segmentation and correction is a tedious and time-consuming process for human graders. Fortunately, it can be observed from Tracking scanning laser ophthalmoscopy (TSLO) video recordings that the directional variances of FEMs can be extracted to mathematically characterize microsaccades for segmentation and distinguish from drift. Therefore, we perform a directional variance analysis, extract relevant features, and automate the model using artificial neural networks (ANN). We propose and compare two directional variance approaches along with an ANN model for the segmentation of microsaccades. The first approach utilizes a single-point based feature variance, whereas the second approach utilizes a sliding-window based feature variance with the information from several time points. We calculate several statistical metrics to characterize the features of the microsaccades such as the number of microsaccades, microsaccade peak velocity and acceleration, and microsaccade duration. We have also calculated the accuracy, precision, sensitivity, and specificity scores for each approach to compare their performance. The single-point models labeled the FEM data with an accuracy of 70% whereas the sliding-window approach had an accuracy of 85%. When comparing the percent errors of the approaches to the ground truth, the sliding-window approach performs significantly better than the single-point approach as it captures more relevant directional variance features of FEMs.","PeriodicalId":290818,"journal":{"name":"2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IRI58017.2023.00008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Fixational eye movements (FEMs) are an essential component of vision, and there is considerable research interest in using them as biomarkers of brain injury and neurodegeneration. Study of FEMs often involves segmenting them into their individual components, primarily microsaccades and drifts. In practice, velocity (or acceleration) thresholds are commonly adopted; these are generally imperfect, requiring tuning of thresholds as well as manual correction and verification by human graders. Manual segmentation and correction is a tedious and time-consuming process for human graders. Fortunately, it can be observed from tracking scanning laser ophthalmoscopy (TSLO) video recordings that the directional variances of FEMs can be extracted to mathematically characterize microsaccades for segmentation and to distinguish them from drift. Therefore, we perform a directional variance analysis, extract relevant features, and automate the model using artificial neural networks (ANN). We propose and compare two directional variance approaches along with an ANN model for the segmentation of microsaccades. The first approach utilizes a single-point based feature variance, whereas the second approach utilizes a sliding-window based feature variance that incorporates information from several time points. We calculate several statistical metrics to characterize the microsaccades, such as the number of microsaccades, microsaccade peak velocity and acceleration, and microsaccade duration. We also calculate the accuracy, precision, sensitivity, and specificity scores for each approach to compare their performance. The single-point model labeled the FEM data with an accuracy of 70%, whereas the sliding-window approach had an accuracy of 85%. When comparing the percent errors of the approaches against the ground truth, the sliding-window approach performs significantly better than the single-point approach, as it captures more of the relevant directional variance features of FEMs.
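
The abstract does not give implementation details, so the following is only a minimal sketch of the sliding-window idea under stated assumptions: directional variance is approximated here as the circular variance of the instantaneous motion direction within a sliding window (low for consistently directed, microsaccade-like motion; high for meandering, drift-like motion), the synthetic trace and per-sample labels are placeholders, and the scikit-learn MLPClassifier stands in for the paper's ANN. None of these choices are taken from the paper itself.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score


def directional_variance(x, y, window=11):
    """Circular variance of instantaneous motion direction in a sliding window.

    Low values indicate consistently directed motion (microsaccade-like);
    high values indicate meandering motion (drift-like).
    """
    theta = np.arctan2(np.diff(y), np.diff(x))      # motion direction per sample step
    kernel = np.ones(window) / window
    mean_cos = np.convolve(np.cos(theta), kernel, mode="same")
    mean_sin = np.convolve(np.sin(theta), kernel, mode="same")
    resultant = np.hypot(mean_cos, mean_sin)        # mean resultant length R in [0, 1]
    return 1.0 - resultant                          # circular variance = 1 - R


# Hypothetical FEM trace: slow random-walk drift with one injected fast,
# consistently directed segment standing in for a microsaccade.
rng = np.random.default_rng(0)
n = 2000
dx = rng.normal(0.0, 0.01, n)
dy = rng.normal(0.0, 0.01, n)
dx[1000:1020] += 0.2                                # injected rightward microsaccade
x, y = np.cumsum(dx), np.cumsum(dy)

labels = np.zeros(n - 1, dtype=int)                 # per-step ground truth: 0 = drift
labels[999:1019] = 1                                # 1 = microsaccade (injected segment)

features = directional_variance(x, y).reshape(-1, 1)

# Small feed-forward ANN; architecture and training settings are placeholders.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(features, labels)
pred = clf.predict(features)

# Sensitivity is recall on the microsaccade class; specificity is recall on drift.
print("accuracy:   ", accuracy_score(labels, pred))
print("precision:  ", precision_score(labels, pred, zero_division=0))
print("sensitivity:", recall_score(labels, pred, pos_label=1))
print("specificity:", recall_score(labels, pred, pos_label=0))
```

A single-point variant of this sketch would compute the feature from one time point (a single step difference) rather than pooling over a window, which is consistent with the accuracy gap the abstract reports between the two approaches.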