{"title":"Recognizing Gestures with Ambient Light","authors":"Raghav H. Venkatnarayan, Muhammad Shahzad","doi":"10.1145/3264877.3264883","DOIUrl":null,"url":null,"abstract":"There is growing interest in the research community to develop techniques for humans to communicate with the computing that is embedding into our environments. Researchers are exploring various modalities, such as radio-frequency signals, to develop gesture recognition systems. We explore another modality, namely ambient light, and develop LiGest, an ambient light based gesture recognition system. The idea behind LiGest is that when a user performs different gestures, the user's shadows move in distinct patterns. LiGest captures these patterns using a grid of floor-based light sensors and then builds training models to recognize unknown shadow samples. We design a prototype for LiGest and evaluate it across multiple users, positions, orientations and lighting conditions. Our results show that LiGest achieves an average accuracy of 96.36%.","PeriodicalId":62224,"journal":{"name":"世界中学生文摘","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"世界中学生文摘","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1145/3264877.3264883","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
There is growing interest in the research community in developing techniques for humans to communicate with the computing that is embedded into our environments. Researchers are exploring various modalities, such as radio-frequency signals, to develop gesture recognition systems. We explore another modality, namely ambient light, and develop LiGest, an ambient-light-based gesture recognition system. The idea behind LiGest is that when a user performs different gestures, the user's shadows move in distinct patterns. LiGest captures these patterns using a grid of floor-based light sensors and then builds training models to recognize unknown shadow samples. We design a prototype for LiGest and evaluate it across multiple users, positions, orientations, and lighting conditions. Our results show that LiGest achieves an average accuracy of 96.36%.
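To make the shadow-pattern idea concrete, the sketch below shows how recordings from a floor-mounted grid of light sensors could be turned into feature vectors and classified. The abstract does not specify LiGest's actual grid size, window length, features, or classifier, so everything here (a 4x4 grid, a fixed 50-sample window, per-sensor normalization, a k-nearest-neighbour model, and the synthetic placeholder data) is an assumption for illustration only, not the authors' method.

```python
# Illustrative sketch only: the pipeline (flattened sensor-grid time series
# fed to a k-NN classifier) is an assumption, not LiGest's actual design.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

GRID_ROWS, GRID_COLS = 4, 4   # hypothetical floor-sensor grid size
WINDOW = 50                   # hypothetical number of samples per gesture

def to_feature_vector(frames: np.ndarray) -> np.ndarray:
    """Flatten a (WINDOW, GRID_ROWS, GRID_COLS) light-intensity recording
    into one feature vector after per-sensor normalization, so that the
    absolute brightness of the room matters less than the shadow motion."""
    frames = (frames - frames.mean(axis=0)) / (frames.std(axis=0) + 1e-8)
    return frames.reshape(-1)

# Training phase: each recording is one labeled gesture performed over the grid.
rng = np.random.default_rng(0)
train_recordings = rng.random((20, WINDOW, GRID_ROWS, GRID_COLS))  # placeholder data
train_labels = rng.integers(0, 4, size=20)                         # 4 gesture classes

X_train = np.stack([to_feature_vector(r) for r in train_recordings])
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, train_labels)

# Recognition phase: classify an unseen shadow-pattern recording.
new_recording = rng.random((WINDOW, GRID_ROWS, GRID_COLS))
predicted_gesture = clf.predict([to_feature_vector(new_recording)])[0]
print("Predicted gesture class:", predicted_gesture)
```

In practice the placeholder arrays would be replaced by real light-intensity readings sampled from the sensor grid, and the classifier choice and normalization would depend on the evaluation conditions (users, positions, orientations, lighting) described in the paper.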