{"title":"Demo: Multi-device Gestural Interfaces","authors":"Vu H. Tran, Youngki Lee, Archan Misra","doi":"10.1145/2938559.2938574","DOIUrl":null,"url":null,"abstract":"Varieties of wearable devices such as smart watches, Virtual/Augmented Reality devices (AR/VR) are much more affordable with interesting capabilities. In our vision, a person may use more than one devices at a time, and they form an eco-system of wearable devices. Therefore, we aim to build a system where an application expands its input and output among different devices, and adapts its input/output stream for different contexts. For example, a user wears a smart watch, a pair of smart glasses, and a smart phone in his pocket. Normally, the application on the mobile phone uses its touch screen as the input/output modality; but if the user put the mobile phone in his pocket, and wear the smart glasses, the application uses the gestures from smart watches as input, and the display of the smart glasses as output. Another advantage of such a multi-device system we want to support is multi-limb gesture. There is quite equal preference between one-handed and two-handed gestures [2]. Especially, two-handed gestures may have a potential use in VR/AR, and they provide a more natural input modality. However, there are three main challenges that need to be solved to achieve our goal. The first challenge is latency. For interactive applications, latency is crucial. For example, in virtual drumming application, what a user hears affect the timing of the next drum-hit. The second challenge is energy. It is well known that energy consumption is the bottle-neck of wearable devices. In an environment of multiple devices, energy consumption has to be optimized for all devices. We believe another challenge for such an multi-device environment is the ability of adaptation. It is even annoying to require the user to configure devices whenever the context changes, so the adaptability will be much more beneficial. For example, when the user start walking and wearing the smart glasses, the system automatically disables gesture control and shows the notification on the glasses. In multi-device system, the architecture is crucial for every device to work efficiently. Combining all data and process them in a central device forces the central device to stay in the system forever. Moreover, transmission of a large amount of data via bluetooth consumes quite much energy [1]. We therefore deploy a lightweight recognizer on each wearable device to recognize primitive gestures. Other devices can acquire these primitive gestures and fuse them into more complex gestures. For example, fusion of motion gestures from two devices, or fusion of motion gestures","PeriodicalId":298684,"journal":{"name":"MobiSys '16 Companion","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"MobiSys '16 Companion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2938559.2938574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Wearable devices such as smart watches and virtual/augmented reality (VR/AR) headsets are becoming much more affordable while offering interesting capabilities. In our vision, a person may use more than one device at a time, and together these devices form a wearable ecosystem. We therefore aim to build a system in which an application spreads its input and output across different devices and adapts its input/output streams to the current context. For example, suppose a user wears a smart watch and a pair of smart glasses and carries a smart phone in his pocket. Normally, the application on the phone uses its touch screen for both input and output; but once the user puts the phone in his pocket and wears the smart glasses, the application takes gestures from the smart watch as input and uses the display of the smart glasses as output. Another capability such a multi-device system lets us support is multi-limb gestures. Users show roughly equal preference for one-handed and two-handed gestures [2]; two-handed gestures in particular have potential uses in VR/AR, where they provide a more natural input modality.

Three main challenges must be solved to achieve this goal. The first is latency, which is crucial for interactive applications: in a virtual drumming application, for example, what the user hears affects the timing of the next drum hit. The second is energy. Energy consumption is well known to be the bottleneck of wearable devices, and in a multi-device environment it must be optimized across all devices. The third challenge is adaptation. Requiring the user to reconfigure devices whenever the context changes is annoying, so automatic adaptation is far more beneficial: when the user starts walking while wearing the smart glasses, for instance, the system automatically disables gesture control and shows notifications on the glasses (see the first sketch below).

In a multi-device system, the architecture is crucial for every device to work efficiently. Collecting all raw data and processing it on a central device forces that device to remain in the system permanently; moreover, transmitting large amounts of data over Bluetooth consumes considerable energy [1]. We therefore deploy a lightweight recognizer on each wearable device to recognize primitive gestures. Other devices can acquire these primitive gestures and fuse them into more complex gestures (see the second sketch below): for example, the fusion of motion gestures from two devices, or the fusion of motion gestures
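To make the context-driven modality switching above concrete, here is a minimal sketch in Python. The Context fields, the modality names, and pick_modalities are hypothetical illustrations of such an adaptation policy, not the authors' actual API.

# Minimal sketch of a context-to-modality policy; all names here are
# hypothetical, not part of the demo system described in the abstract.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Context:
    phone_in_pocket: bool
    glasses_worn: bool
    user_walking: bool

def pick_modalities(ctx: Context) -> Tuple[str, str]:
    """Return an (input_modality, output_modality) pair for the given context."""
    if ctx.glasses_worn and ctx.user_walking:
        # Walking: gesture control is disabled; notifications go to the glasses.
        return ("none", "glasses_display")
    if ctx.phone_in_pocket and ctx.glasses_worn:
        # Phone stowed: watch gestures drive input, glasses render output.
        return ("watch_gestures", "glasses_display")
    # Default: the phone's touch screen serves as both input and output.
    return ("phone_touch", "phone_screen")

print(pick_modalities(Context(phone_in_pocket=True, glasses_worn=True, user_walking=False)))
# -> ('watch_gestures', 'glasses_display')

Keeping the policy a pure function of the sensed context makes it cheap to re-evaluate whenever any device reports a context change, without the user configuring anything.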
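The distributed pipeline in the final paragraph, lightweight per-device recognizers emitting primitive gestures that are then fused into complex multi-limb gestures, can be sketched as follows. The Primitive record, the gesture table, and the 300 ms co-occurrence window are illustrative assumptions, not the paper's actual design.

# Sketch of fusing per-device primitive gestures into a two-handed gesture.
# Only small Primitive records cross the Bluetooth link, not raw sensor data,
# which is the energy argument made in the abstract.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Primitive:
    device: str   # e.g. "left_watch" or "right_watch"
    gesture: str  # e.g. "raise" or "swipe_out"
    t_ms: int     # device timestamp in milliseconds

# Hypothetical table mapping pairs of primitives to complex gestures.
COMPLEX = {
    frozenset({("left_watch", "raise"), ("right_watch", "raise")}): "both_hands_up",
    frozenset({("left_watch", "swipe_out"), ("right_watch", "swipe_out")}): "spread",
}

def fuse(a: Primitive, b: Primitive, window_ms: int = 300) -> Optional[str]:
    """Fuse two primitives into a complex gesture if they co-occur in time."""
    if abs(a.t_ms - b.t_ms) > window_ms:
        return None  # too far apart to count as one deliberate two-handed gesture
    key = frozenset({(a.device, a.gesture), (b.device, b.gesture)})
    return COMPLEX.get(key)

left = Primitive("left_watch", "raise", t_ms=1000)
right = Primitive("right_watch", "raise", t_ms=1120)
print(fuse(left, right))  # -> 'both_hands_up'

In practice the devices' clocks would need at least coarse synchronization for the co-occurrence window to be meaningful across watches.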