{"title":"Creating gaze annotations in head mounted displays","authors":"D. Mardanbegi, Pernilla Qvarfordt","doi":"10.1145/2802083.2808404","DOIUrl":null,"url":null,"abstract":"To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks out the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can be shared. Our study showed that users found that gaze annotations add precision and expressiveness compared to annotations of the image as a whole.","PeriodicalId":372395,"journal":{"name":"Proceedings of the 2015 ACM International Symposium on Wearable Computers","volume":"175 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2015 ACM International Symposium on Wearable Computers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2802083.2808404","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can be shared. Our study showed that users found that gaze annotations add precision and expressiveness compared to annotations of the image as a whole.
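The workflow described in the abstract (captured image, recorded gaze point, transcribed speech) suggests a simple annotation record. The sketch below is a minimal illustration in Python; all names and fields are assumptions, not GazeNote's actual data model.

```python
from dataclasses import dataclass

@dataclass
class GazeAnnotation:
    """Hypothetical record combining the three parts of a gaze annotation."""
    image_path: str   # photo captured with the HMD's camera
    gaze_x: float     # normalized gaze coordinates within the image
    gaze_y: float
    transcript: str   # verbal description, transcribed via speech recognition

    def marker(self):
        """Position at which the gaze marker would be drawn on the image."""
        return (self.gaze_x, self.gaze_y)

# Creating an annotation and packaging it for sharing as a plain dict:
note = GazeAnnotation("scene.jpg", 0.42, 0.57, "Check this valve for leaks")
shared = {"image": note.image_path, "at": note.marker(), "text": note.transcript}
```

Keeping the gaze point separate from the image, as above, is what lets the annotation point at a specific object rather than tagging the image as a whole.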