RIT-Eyes: Rendering of near-eye images for eye-tracking applications

Nitinraj Nair, Rakshit Kothari, A. Chaudhary, Zhizhuo Yang, Gabriel J. Diaz, J. Pelz, Reynold J. Bailey
DOI: 10.1145/3385955.3407935
Published in: ACM Symposium on Applied Perception 2020 (2020-06-05)
Citations: 11

Abstract

Deep neural networks for video-based eye tracking have demonstrated resilience to noisy environments, stray reflections, and low resolution. However, to train these networks, a large number of manually annotated images are required. To alleviate the cumbersome process of manual labeling, computer graphics rendering is employed to automatically generate a large corpus of annotated eye images under various conditions. In this work, we introduce a synthetic eye image generation platform that improves upon previous work by adding features such as an active deformable iris, an aspherical cornea, retinal retro-reflection, gaze-coordinated eye-lid deformations, and blinks. To demonstrate the utility of our platform, we render images reflecting the represented gaze distributions inherent in two publicly available datasets, NVGaze and OpenEDS. We also report on the performance of two semantic segmentation architectures (SegNet and RITnet) trained on rendered images and tested on the original datasets.
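The train-on-rendered, test-on-real evaluation described above is typically scored with per-class intersection-over-union (IoU) between predicted and ground-truth segmentation masks. A minimal sketch of that metric, assuming NumPy and an illustrative four-class eye layout (background, sclera, iris, pupil); the function name and class ordering are hypothetical, not taken from the paper:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across segmentation classes.

    pred, gt: integer label maps of identical shape, values in [0, num_classes).
    Classes absent from both masks are skipped so they do not inflate the mean.
    """
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        g = (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:  # class appears in neither mask: leave it out
            continue
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

A perfect prediction scores 1.0; predicting a single class everywhere is penalized on every class that appears in the ground truth.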