An Eye Opener on the Use of Machine Learning in Eye Movement Based Authentication
Siyuan Peng, N. A. Madi
2022 Symposium on Eye Tracking Research and Applications, published June 8, 2022
DOI: 10.1145/3517031.3531631 (https://doi.org/10.1145/3517031.3531631)
Citations: 0
Abstract
The viability of, and need for, eye movement-based authentication have been well established in light of the recent adoption of Virtual Reality headsets and Augmented Reality glasses. Previous research has demonstrated the practicality of eye movement-based authentication, but room remains to improve identification accuracy. In this study, we focus on incorporating linguistic features into eye movement-based authentication, and we compare our approach against authentication based purely on common first-order metrics across nine machine learning models. Using GazeBase, a large eye movement dataset with 322 participants, together with the CELEX lexical database, we show that the AdaBoost classifier is the best-performing model, with an average F1 score of 74.6%. More importantly, we show that the use of linguistic features increased the accuracy of most classification models. Our results provide insight into the choice of machine learning models and motivate further work on incorporating text analysis into eye movement-based authentication.