{"title":"Evaluating Instructions for Gesture Recognition with an Accelerometer","authors":"Kazuya Murao, T. Terada","doi":"10.11185/IMT.10.269","DOIUrl":null,"url":null,"abstract":"In the area of activity recognition with mobile sensors, a lot of works on context-aware systems using accelerometers have been proposed. Especially, mobile phones or remotes for video games using gesture recognition technologies enable easy and intuitive operations such as scrolling browser and drawing objects. Gesture input has an advantage of rich expressive power over the conventional interfaces, but it is difficult to share the gesture motion with other people through writing or verbally. Assuming that a commercial product using gestures is released, the developers make an instruction manual and tutorial expressing the gestures in text, figures, or videos. Then an end-user reads the instructions, imagines the gesture, then perform it. In this paper, we evaluate how user gestures change according to the types of the instruction. We obtained acceleration data for 10 kinds of gestures instructed through three types of texts, figures, and videos, totalling 44 patterns from 13 test subjects, for a total of 2,630 data samples. From the evaluation, gestures are correctly performed in the order of text→figure→video. Detailed instruction in texts is equivalent to that in figures. However, some words reflecting gestures disordered the users’ gestures since they could call multiple images to user’s mind.","PeriodicalId":16243,"journal":{"name":"Journal of Information Processing","volume":"10 1","pages":"269-280"},"PeriodicalIF":0.0000,"publicationDate":"2015-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11185/IMT.10.269","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Computer Science","Score":null,"Total":0}
Abstract
In the area of activity recognition with mobile sensors, many context-aware systems using accelerometers have been proposed. In particular, mobile phones and video-game remotes that use gesture recognition technologies enable easy and intuitive operations such as scrolling a browser and drawing objects. Gesture input offers richer expressive power than conventional interfaces, but gesture motions are difficult to share with other people in writing or verbally. When a commercial product using gestures is released, its developers provide an instruction manual and tutorial that express the gestures in text, figures, or videos. An end user then reads the instructions, imagines the gesture, and performs it. In this paper, we evaluate how user gestures change according to the type of instruction. We collected acceleration data for 10 kinds of gestures instructed through three types of instruction (texts, figures, and videos), totalling 44 instruction patterns, from 13 test subjects, for a total of 2,630 data samples. The evaluation showed that the correctness of performed gestures increased in the order text→figure→video. Detailed instruction in text was as effective as instruction in figures. However, some words describing gestures disrupted the users' motions, since a single word could call multiple images to the user's mind.
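The abstract does not state how the correctness of a performed gesture was quantified. A common way to compare two accelerometer traces of the same gesture, tolerant of differences in speed, is dynamic time warping (DTW); the sketch below is a minimal illustration under that assumption, not the paper's actual method. The function name dtw_distance and the synthetic traces are hypothetical.

```python
# Hypothetical sketch: comparing two 3-axis accelerometer traces with
# dynamic time warping (DTW). The paper's abstract does not specify its
# evaluation metric; DTW is merely a common choice for this kind of data.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two (T, 3) acceleration sequences."""
    n, m = len(a), len(b)
    # cost[i, j] = minimal accumulated cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean step cost
            cost[i, j] = d + min(cost[i - 1, j],      # skip a sample of a
                                 cost[i, j - 1],      # skip a sample of b
                                 cost[i - 1, j - 1])  # match the samples
    return float(cost[n, m])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic gesture traces of different lengths: a reference
    # recording and a slower, noisier performance of the same motion.
    reference = rng.standard_normal((100, 3)).cumsum(axis=0)
    performed = reference[::2] + 0.1 * rng.standard_normal((50, 3))
    print(f"DTW distance: {dtw_distance(reference, performed):.2f}")
```

With a metric like this, a gesture performed from a given instruction could be scored by its DTW distance to a reference recording: the smaller the distance, the more faithfully the instruction conveyed the intended motion.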