Contributed Talks I: Detecting and characterising microsaccades from AOSLO images of the photoreceptor mosaic using computer vision.

IF 2.3 | Psychology (CAS Tier 4) | Q2 Ophthalmology
Maria Villamil, Allie C Schneider, Jiahe Cui, Laura K Young, Hannah E Smithson
{"title":"贡献演讲一:利用计算机视觉从光感受器马赛克的AOSLO图像中检测和表征微眼跳。","authors":"Maria Villamil, Allie C Schneider, Jiahe Cui, Laura K Young, Hannah E Smithson","doi":"10.1167/jov.25.5.5","DOIUrl":null,"url":null,"abstract":"<p><p>Fixational eye movements (FEMs), especially microsaccades (MS), are promising biomarkers of neurodegenerative disease. In vivo images of the photoreceptor mosaic acquired using an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) are systematically distorted by eye motion. Most methods to extract FEMs from AOSLO data rely on comparison to a motion-free reference, giving eye-position as a function of time. MS are subsequently identified using adaptive velocity thresholds (Engbert & Kliegl, 2003). We use computer vision and machine learning (ML) for detection and characterisation of MS directly from raw AOSLO images. For training and validation, we use Emulated Retinal Image CApture (ERICA), an open-source tool to generate synthetic AOSLO datasets of retinal images and ground-truth velocity profiles (Young & Smithson, 2021). To classify regions of AOSLO images that contain a MS, images were divided into a grid of 32-by-32-pixel sub-images. Predictions from rows of sub-images aligned with the fast-scan of the AOSLO were combined, giving 1ms resolution. Model performance was high (F1 scores >0.92) across plausible MS displacement magnitudes and angles, with most errors close to the velocity threshold for classification. Direct velocity predictions were also derived from regression ML models. We show that ML models can be systematically adapted for generalisation to real in vivo images, allowing characterisation of MS at much finer spatial scales than video-based eye-trackers.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"5"},"PeriodicalIF":2.3000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Contributed Talks I: Detecting and characterising microsaccades from AOSLO images of the photoreceptor mosaic using computer vision.\",\"authors\":\"Maria Villamil, Allie C Schneider, Jiahe Cui, Laura K Young, Hannah E Smithson\",\"doi\":\"10.1167/jov.25.5.5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Fixational eye movements (FEMs), especially microsaccades (MS), are promising biomarkers of neurodegenerative disease. In vivo images of the photoreceptor mosaic acquired using an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) are systematically distorted by eye motion. Most methods to extract FEMs from AOSLO data rely on comparison to a motion-free reference, giving eye-position as a function of time. MS are subsequently identified using adaptive velocity thresholds (Engbert & Kliegl, 2003). We use computer vision and machine learning (ML) for detection and characterisation of MS directly from raw AOSLO images. For training and validation, we use Emulated Retinal Image CApture (ERICA), an open-source tool to generate synthetic AOSLO datasets of retinal images and ground-truth velocity profiles (Young & Smithson, 2021). To classify regions of AOSLO images that contain a MS, images were divided into a grid of 32-by-32-pixel sub-images. Predictions from rows of sub-images aligned with the fast-scan of the AOSLO were combined, giving 1ms resolution. Model performance was high (F1 scores >0.92) across plausible MS displacement magnitudes and angles, with most errors close to the velocity threshold for classification. 
Direct velocity predictions were also derived from regression ML models. We show that ML models can be systematically adapted for generalisation to real in vivo images, allowing characterisation of MS at much finer spatial scales than video-based eye-trackers.</p>\",\"PeriodicalId\":49955,\"journal\":{\"name\":\"Journal of Vision\",\"volume\":\"25 5\",\"pages\":\"5\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Vision\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1167/jov.25.5.5\",\"RegionNum\":4,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Vision","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1167/jov.25.5.5","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Fixational eye movements (FEMs), especially microsaccades (MS), are promising biomarkers of neurodegenerative disease. In vivo images of the photoreceptor mosaic acquired using an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) are systematically distorted by eye motion. Most methods to extract FEMs from AOSLO data rely on comparison to a motion-free reference, giving eye-position as a function of time. MS are subsequently identified using adaptive velocity thresholds (Engbert & Kliegl, 2003). We use computer vision and machine learning (ML) for detection and characterisation of MS directly from raw AOSLO images. For training and validation, we use Emulated Retinal Image CApture (ERICA), an open-source tool to generate synthetic AOSLO datasets of retinal images and ground-truth velocity profiles (Young & Smithson, 2021). To classify regions of AOSLO images that contain a MS, images were divided into a grid of 32-by-32-pixel sub-images. Predictions from rows of sub-images aligned with the fast-scan of the AOSLO were combined, giving 1ms resolution. Model performance was high (F1 scores >0.92) across plausible MS displacement magnitudes and angles, with most errors close to the velocity threshold for classification. Direct velocity predictions were also derived from regression ML models. We show that ML models can be systematically adapted for generalisation to real in vivo images, allowing characterisation of MS at much finer spatial scales than video-based eye-trackers.
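
The abstract only outlines the tiling scheme, so the following is a minimal sketch, not the authors' implementation: it assumes a hypothetical trained per-tile classifier (classify_tile) and shows how a single AOSLO frame could be cut into 32-by-32-pixel sub-images and pooled per fast-scan row, so that each row of tiles yields one microsaccade decision at roughly 1 ms resolution. For context, the adaptive velocity threshold of Engbert & Kliegl (2003) that the ML approach is compared against flags a microsaccade when eye velocity exceeds a multiple lambda of a median-based estimate of the velocity standard deviation, sigma = sqrt(median(v^2) - median(v)^2), computed separately for the horizontal and vertical components.

# Minimal sketch of the sub-image tiling described in the abstract.
# classify_tile is a stand-in for the trained ML classifier
# (tile -> probability of microsaccade distortion); it is an assumption,
# not something specified in the abstract.
import numpy as np

TILE = 32  # sub-image size in pixels, as stated in the abstract


def tile_frame(frame: np.ndarray) -> np.ndarray:
    """Split an (H, W) AOSLO frame into a (rows, cols, TILE, TILE) grid.

    Border pixels that do not fill a complete tile are discarded here;
    how the real pipeline handles partial tiles is not specified.
    """
    h, w = frame.shape
    rows, cols = h // TILE, w // TILE
    frame = frame[: rows * TILE, : cols * TILE]
    return frame.reshape(rows, TILE, cols, TILE).swapaxes(1, 2)


def detect_microsaccade_rows(frame: np.ndarray, classify_tile) -> np.ndarray:
    """Return one boolean per tile-row, i.e. per ~1 ms of the fast scan."""
    tiles = tile_frame(frame)
    probs = np.array([[classify_tile(t) for t in row] for row in tiles])
    # Pool tile predictions along the fast-scan direction; the mean-over-0.5
    # rule is an assumption standing in for the paper's combination step.
    return probs.mean(axis=1) > 0.5

The pooling rule and the classifier interface are placeholders: the abstract states only that predictions from rows of sub-images aligned with the fast scan were combined to reach 1 ms resolution, not how the combination was done.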

Source journal
Journal of Vision (Medicine - Ophthalmology)
CiteScore: 2.90
Self-citation rate: 5.60%
Articles published per year: 218
Review time: 3-6 weeks
About the journal: Exploring all aspects of biological visual function, including spatial vision, perception, low vision, color vision and more, spanning the fields of neuroscience, psychology and psychophysics.