TongueInput: Input Method by Tongue Gestures Using Optical Sensors Embedded in Mouthpiece

Takuma Hashimoto, Suzanne Low, K. Fujita, Risa Usumi, Hiroshi Yanagihara, Chihiro Takahashi, M. Sugimoto, Yuta Sugiura

2018 57th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), September 2018

DOI: 10.23919/SICE.2018.8492690
Citations: 10
Abstract
We propose a system that recognizes tongue gestures using a mouthpiece embedded with an array of photo-reflective sensors, which measure changes in distance between the tongue surface and the back of the upper teeth as the tongue moves. The system renders the sensor values as grayscale images, computes HOG feature descriptors from them, and classifies the gestures with an SVM. We conducted two experiments to evaluate the system's accuracy in estimating four tongue positions and four tongue gestures, obtaining recognition rates of 85.67% for positions and 77.5% for gestures. We also identified issues whose resolution could further improve these rates.
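The pipeline described above (sensor array → grayscale image → HOG descriptor → classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image sizes and synthetic data are invented, the HOG variant is a single global orientation histogram rather than the cell/block scheme typically used, and a nearest-centroid classifier stands in for the paper's SVM.

```python
import numpy as np

def hog_descriptor(img, n_bins=8):
    """Simplified HOG-like descriptor for a small grayscale sensor image:
    an L2-normalized histogram of unsigned gradient orientations,
    weighted by gradient magnitude. (Illustrative stand-in for the
    paper's HOG features; no cell/block normalization.)"""
    gy, gx = np.gradient(img.astype(float))      # image gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())   # magnitude-weighted voting
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def nearest_centroid_predict(train_feats, train_labels, feat):
    """Tiny stand-in classifier (the paper uses an SVM): assign the
    label of the closest per-class mean feature vector."""
    labels = sorted(set(train_labels))
    centroids = {
        lab: np.mean([f for f, y in zip(train_feats, train_labels) if y == lab], axis=0)
        for lab in labels
    }
    return min(labels, key=lambda lab: np.linalg.norm(feat - centroids[lab]))
```

A vertical intensity edge (e.g. the tongue pressed toward one side) concentrates gradient energy in the horizontal-orientation bin, while a horizontal edge concentrates it a quarter-turn away, so even this crude descriptor separates simple tongue-position patterns.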