{"title":"谷歌眼镜上的免提动作交互","authors":"Zhihan Lv, Liangbing Feng, Haibo Li, Shengzhong Feng","doi":"10.1145/2669062.2669066","DOIUrl":null,"url":null,"abstract":"There is an increasing interest in creating wearable device interaction technologies. Novel emerging user interface technologies (e.g. eye-ball tracking, speech recognition, gesture recognition, ECG, EEG and fusion of them) have the potential to significantly affect market share in PC, smartphones, tablets and latest wearable devices such as google glass. As a result, displacing these technologies in devices such as smart phones and wearable devices is challenging. Google glass has many impressive characteristics (i.e. voice actions, head wake up, wink detection), which are human-glass interface (HGI) technologies. Google glass won't meet the 'the occlusion problem' and 'the fat finger problem' any more, which are the problems of direct-touch finger input on touch screen. However, google glass only provides a touchpad that includes haptics with simple 'tapping and sliding your finger' gestures which is a one-dimensional interaction in fact, instead of the traditional two-dimensional interaction based on the complete touch screen of smartphone. The one-dimensional 'swipe the touchpad' interaction with a row of 'Cards' which replace traditional two-dimensional icon menu limits the intuitive and flexibility of HGI. Therefore, there is a growing interest in implementing 3D gesture recognition vision systems in which optical sensors capture real-time video of the user and ubiquitous algorithms are then used to determine what the user's gestures are, without the user having to hold any device. We will demonstrate a hand-free motion interaction application based on computer vision technology on google glass. Presented application allows user to perform touch-less interaction by hand or foot gesture in front of the camera of google glass. 
Based on the same core ubiquitous gestures recognition algorithm as used in this demonstration, a hybrid wearable smartphone system based on mixed hardware and software has been presented in our previous work [Lv 2013][Lu et al. 2013][Lv et al. 2013], which can support either hand or foot interaction with today' smartphone.","PeriodicalId":114416,"journal":{"name":"SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"93","resultStr":"{\"title\":\"Hand-free motion interaction on Google Glass\",\"authors\":\"Zhihan Lv, Liangbing Feng, Haibo Li, Shengzhong Feng\",\"doi\":\"10.1145/2669062.2669066\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is an increasing interest in creating wearable device interaction technologies. Novel emerging user interface technologies (e.g. eye-ball tracking, speech recognition, gesture recognition, ECG, EEG and fusion of them) have the potential to significantly affect market share in PC, smartphones, tablets and latest wearable devices such as google glass. As a result, displacing these technologies in devices such as smart phones and wearable devices is challenging. Google glass has many impressive characteristics (i.e. voice actions, head wake up, wink detection), which are human-glass interface (HGI) technologies. Google glass won't meet the 'the occlusion problem' and 'the fat finger problem' any more, which are the problems of direct-touch finger input on touch screen. However, google glass only provides a touchpad that includes haptics with simple 'tapping and sliding your finger' gestures which is a one-dimensional interaction in fact, instead of the traditional two-dimensional interaction based on the complete touch screen of smartphone. 
The one-dimensional 'swipe the touchpad' interaction with a row of 'Cards' which replace traditional two-dimensional icon menu limits the intuitive and flexibility of HGI. Therefore, there is a growing interest in implementing 3D gesture recognition vision systems in which optical sensors capture real-time video of the user and ubiquitous algorithms are then used to determine what the user's gestures are, without the user having to hold any device. We will demonstrate a hand-free motion interaction application based on computer vision technology on google glass. Presented application allows user to perform touch-less interaction by hand or foot gesture in front of the camera of google glass. Based on the same core ubiquitous gestures recognition algorithm as used in this demonstration, a hybrid wearable smartphone system based on mixed hardware and software has been presented in our previous work [Lv 2013][Lu et al. 2013][Lv et al. 2013], which can support either hand or foot interaction with today' smartphone.\",\"PeriodicalId\":114416,\"journal\":{\"name\":\"SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications\",\"volume\":\"66 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-11-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"93\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2669062.2669066\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGGRAPH Asia 2014 Mobile Graphics and Interactive 
Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2669062.2669066","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 93
Abstract
There is increasing interest in creating interaction technologies for wearable devices. Emerging user-interface technologies (e.g. eye tracking, speech recognition, gesture recognition, ECG, EEG, and fusions of them) have the potential to significantly affect market share among PCs, smartphones, tablets, and the latest wearable devices such as Google Glass; deploying these technologies in smartphones and wearables, however, is challenging. Google Glass offers several impressive human-glass interface (HGI) technologies (e.g. voice actions, head wake-up, wink detection), and it avoids the 'occlusion problem' and the 'fat finger problem' that afflict direct-touch finger input on touch screens. However, Google Glass provides only a touchpad with simple 'tap and slide' finger gestures, which is in effect a one-dimensional interaction rather than the traditional two-dimensional interaction of a smartphone's full touch screen. This one-dimensional 'swipe the touchpad' interaction with a row of 'Cards', which replaces the traditional two-dimensional icon menu, limits the intuitiveness and flexibility of the HGI. There is therefore growing interest in vision-based 3D gesture recognition systems, in which optical sensors capture real-time video of the user and ubiquitous algorithms then determine the user's gestures, without the user having to hold any device. We will demonstrate a hands-free motion interaction application based on computer vision technology on Google Glass. The presented application allows the user to perform touch-less interaction with hand or foot gestures in front of the Google Glass camera. Based on the same core ubiquitous gesture recognition algorithm used in this demonstration, a hybrid wearable smartphone system combining hardware and software was presented in our previous work [Lv 2013][Lu et al. 2013][Lv et al. 2013]; it supports either hand or foot interaction with today's smartphones.
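To make the vision-based pipeline concrete, the sketch below shows the simplest form of camera-only gesture detection: frame differencing over a grayscale video stream, raising a gesture event when enough of the image changes. This is a hypothetical minimal illustration of the general approach described above (optical sensor captures frames, an algorithm decides whether a gesture occurred); it is not the authors' actual recognition algorithm, and the function name and thresholds are assumptions for the example.

```python
import numpy as np

def detect_motion(prev_frame, frame, diff_thresh=25, area_frac=0.02):
    """Flag motion between two consecutive grayscale frames.

    A gesture event is raised when the fraction of pixels whose
    intensity changed by more than `diff_thresh` exceeds `area_frac`.
    Thresholds are illustrative, not tuned values from the paper.
    """
    # Widen dtype before subtracting so uint8 values do not wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > diff_thresh       # per-pixel change mask
    return bool(changed.mean() > area_frac)

# Synthetic example: a static background, then a bright region (a "hand")
# enters the camera's field of view.
background = np.zeros((120, 160), dtype=np.uint8)
with_hand = background.copy()
with_hand[40:80, 60:110] = 200         # simulated hand region

print(detect_motion(background, background))  # no change -> False
print(detect_motion(background, with_hand))   # region changed -> True
```

A real system on Glass would run such a check per camera frame and feed the changed region to a classifier that distinguishes hand from foot gestures, but the trigger logic follows this shape.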