Improving real-time CNN-based pupil detection through domain-specific data augmentation
Shahram Eivazi, Thiago Santini, Alireza Keshavarzi, Thomas C. Kübler, Andrea Mazzei
Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, June 25, 2019. DOI: 10.1145/3314111.3319914
Abstract
Deep learning is a promising technique for real-world pupil detection. However, the small amount of accurately annotated data available poses a challenge when training such networks. Here, we utilize non-challenging eye videos, where algorithmic approaches perform virtually without errors, to automatically generate a foundational data set containing subpixel pupil annotations. Then, we propose multiple domain-specific data augmentation methods to create unique training sets containing controlled distributions of pupil-detection challenges. The feasibility, convenience, and advantage of this approach are demonstrated by training a CNN with these datasets. The resulting network outperformed current methods on multiple publicly available, realistic, and challenging datasets, despite being trained solely with the augmented eye images. This network also exhibited better generalization w.r.t. the latest state-of-the-art CNN: on datasets similar to the training data, the networks displayed similar performance, whereas on datasets unseen by both networks, ours outperformed the state of the art by ≈27% in terms of detection rate.
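The abstract does not spell out the individual augmentation operators, but the core idea of degrading clean, auto-annotated eye frames into controlled pupil-detection challenges can be sketched as below. This is a minimal illustrative sketch in Python with NumPy/OpenCV, assuming grayscale frames and (x, y) pupil-center labels; the function names, probabilities, and the specific reflection/occlusion/defocus operators are our assumptions, not the augmentations actually used in the paper.

```python
import numpy as np
import cv2  # assumed available; any library with blur/draw primitives would do


def add_reflection(img, rng, max_radius=20):
    """Overlay a bright, blurred elliptical blob mimicking a corneal or glasses reflection."""
    h, w = img.shape[:2]
    center = (int(rng.integers(0, w)), int(rng.integers(0, h)))
    axes = (int(rng.integers(5, max_radius)), int(rng.integers(5, max_radius)))
    angle = float(rng.uniform(0, 180))
    overlay = img.copy()
    cv2.ellipse(overlay, center, axes, angle, 0, 360, 255, -1)
    overlay = cv2.GaussianBlur(overlay, (15, 15), 0)
    return np.maximum(img, overlay)


def add_occlusion(img, rng):
    """Darken a horizontal band at the top of the frame, roughly simulating a drooping eyelid."""
    h, _ = img.shape[:2]
    band_height = int(rng.integers(h // 8, h // 3))
    out = img.copy()
    out[:band_height] = (out[:band_height] * 0.3).astype(img.dtype)
    return out


def add_defocus(img, rng):
    """Blur the whole frame to emulate the eye camera slipping out of focus."""
    k = int(rng.choice([5, 9, 13]))
    return cv2.GaussianBlur(img, (k, k), 0)


def augment(img, pupil_xy, rng):
    """Apply a random subset of challenge augmentations.

    The subpixel pupil annotation is returned unchanged because these
    augmentations only degrade pupil visibility; they do not move the pupil.
    """
    out = img
    if rng.random() < 0.5:
        out = add_reflection(out, rng)
    if rng.random() < 0.3:
        out = add_occlusion(out, rng)
    if rng.random() < 0.3:
        out = add_defocus(out, rng)
    return out, pupil_xy


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a "non-challenging" grayscale eye frame with a dark pupil.
    frame = np.full((240, 320), 160, dtype=np.uint8)
    cv2.circle(frame, (160, 120), 25, 20, -1)
    aug_frame, label = augment(frame, (160.0, 120.0), rng)
    print(aug_frame.shape, label)
```

By drawing the augmentation operators and their parameters from chosen distributions, one can control how often each challenge (reflections, occlusions, blur, etc.) appears in the generated training set, which is the "controlled distributions of pupil-detection challenges" the abstract refers to.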