Improving real-time CNN-based pupil detection through domain-specific data augmentation

Shaharam Eivazi, Thiago Santini, Alireza Keshavarzi, Thomas C. Kübler, Andrea Mazzei
DOI: 10.1145/3314111.3319914
Published in: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications (ETRA 2019), 2019-06-25
Citations: 29

Abstract

Deep learning is a promising technique for real-world pupil detection. However, the small amount of available accurately-annotated data poses a challenge when training such networks. Here, we utilize non-challenging eye videos, where algorithmic approaches perform virtually without errors, to automatically generate a foundational data set containing subpixel pupil annotations. Then, we propose multiple domain-specific data augmentation methods to create unique training sets containing controlled distributions of pupil-detection challenges. The feasibility, convenience, and advantage of this approach are demonstrated by training a CNN with these datasets. The resulting network outperformed current methods on multiple publicly-available, realistic, and challenging datasets, despite being trained solely with the augmented eye images. This network also exhibited better generalization w.r.t. the latest state-of-the-art CNN: whereas on datasets similar to the training data the nets displayed similar performance, on datasets unseen to both networks ours outperformed the state-of-the-art by ≈27% in terms of detection rate.
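The core idea, applying controlled pupil-detection challenges to automatically annotated, non-challenging eye images so that the subpixel labels remain valid, can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the specific augmentations (a synthetic specular reflection and a dark occluding band, stand-ins for reflections and eyelid/eyelash occlusions) and all function names are assumptions for demonstration.

```python
import numpy as np

def augment_eye_image(img, rng):
    """Add two illustrative pupil-detection challenges to a grayscale eye image.

    Because the augmentations only alter pixel intensities, the subpixel
    pupil annotation generated from the non-challenging source video
    remains valid for the augmented image.
    """
    out = img.astype(np.float32).copy()
    h, w = out.shape

    # Synthetic specular reflection: a bright Gaussian blob at a random spot.
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = 0.05 * min(h, w)
    blob = 255.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    out = np.maximum(out, blob)

    # Synthetic occlusion (e.g., eyelid shadow): darken a horizontal band.
    y0 = rng.integers(0, h // 2)
    band = rng.integers(3, max(4, h // 10))
    out[y0:y0 + band, :] *= 0.3

    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = np.full((64, 96), 128, dtype=np.uint8)  # uniform stand-in eye image
aug = augment_eye_image(img, rng)
```

Sampling the reflection position, occlusion position, and band height from controlled distributions is what lets a training set contain a chosen mix of challenge types and severities.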