KissGlass
R. Li, Juyoung Lee, Woontack Woo, Thad Starner
Proceedings of the Augmented Humans International Conference, published 2020-03-16
DOI: 10.1145/3384657.3384801 (https://doi.org/10.1145/3384657.3384801)
Citations: 9
Abstract
Cheek kissing is a common greeting in many countries around the world. Many parameters are involved in performing the kiss, such as which side the kiss begins on and how many times it is performed. These parameters can be used to infer one's social and physical context. In this paper, we present KissGlass, a system that leverages off-the-shelf smart glasses to recognize different kinds of cheek-kissing gestures. Using a dataset collected from 5 participants performing 10 gestures, our system obtains 83.0% accuracy in 10-fold cross-validation and 74.33% accuracy in a leave-one-user-out, user-independent evaluation.
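The abstract contrasts two evaluation protocols: 10-fold cross-validation (which mixes every participant's data across folds) and leave-one-user-out (which holds out each participant entirely, testing generalization to unseen users). The paper's classifier and features are not given here, so the sketch below is purely illustrative: it uses a scikit-learn RandomForest on synthetic placeholder feature vectors, with the participant ID as the grouping variable for the user-independent split.

```python
# Illustrative sketch (not the paper's actual pipeline) of the two
# evaluation protocols mentioned in the abstract. The classifier,
# feature dimensionality, and sample counts are all assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, KFold, LeaveOneGroupOut

rng = np.random.default_rng(0)
n_users = 5        # 5 participants, as in the paper
n_gestures = 10    # 10 cheek-kissing gesture classes
reps = 20          # repetitions per gesture per user (assumed)

# Placeholder feature vectors standing in for smart-glasses sensor features
X = rng.normal(size=(n_users * n_gestures * reps, 24))
y = np.tile(np.repeat(np.arange(n_gestures), reps), n_users)
groups = np.repeat(np.arange(n_users), n_gestures * reps)  # participant IDs

clf = RandomForestClassifier(n_estimators=50, random_state=0)

# 10-fold cross-validation: folds may mix data from the same user
cv10 = cross_val_score(
    clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))

# Leave-one-user-out: each fold holds out one participant entirely,
# so the model is always tested on a user it never saw in training
louo = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

print(f"10-fold mean accuracy: {cv10.mean():.3f}")
print(f"leave-one-user-out mean accuracy: {louo.mean():.3f}")
```

On random features both scores sit near chance (about 0.1 for 10 classes); the point of the sketch is the split structure, which typically makes the user-independent score lower than the 10-fold score on real data, matching the 74.33% vs. 83.0% gap reported above.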