Magic Ring: a self-contained gesture input device on finger
Lei Jing, Zixue Cheng, Yinghui Zhou, Junbo Wang, Tongjun Huang
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541875
Abstract: Control and communication in a computing environment with diverse equipment can be clumsy, obtrusive, and frustrating, even when simply finding the right input device or getting familiar with its interface. In this paper, we present Magic Ring (MR), a ring-shaped input device that uses an inertial sensor to detect subtle finger gestures and routine daily activities. As a self-contained, always-available, and hands-free input device, we believe that MR will enable diverse applications in the intelligent computing environment. In this demonstration, we show a prototype design of MR and three proof-of-concept application systems: a remote controller that operates electrical appliances such as a TV, radio, or lamp using simple finger gestures; a natural communication tool for chatting using simplified sign language; and a daily activity tracker that records activities such as room cleaning, eating, cooking, and writing with only one MR on the index finger.
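The abstract does not disclose MR's recognition pipeline, but window-based features over inertial data are a common baseline for this kind of gesture input. The sketch below is a minimal, hypothetical illustration (the feature set, gesture names, and template matching are assumptions, not the authors' method): it computes per-axis mean and standard deviation over a window of accelerometer samples, then matches against per-gesture template centroids.

```python
import math

def window_features(samples):
    """Per-axis mean and standard deviation over a window of
    (ax, ay, az) accelerometer samples -- a minimal feature set
    often used as a baseline for inertial gesture recognition."""
    n = len(samples)
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in samples]
        mean = sum(vals) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / n)
        feats += [mean, std]
    return feats

def classify(feats, templates):
    """Nearest-centroid match of a feature vector against
    per-gesture feature templates (dict: gesture name -> features)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda g: dist(feats, templates[g]))
```

Templates would be built from labeled example windows, e.g. a "tap" window with a brief spike on the z axis versus a "swipe" window with sustained motion on the x axis.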
Finger in air: touch-less interaction on smartphone
Zhihan Lv, A. Halawani, Muhammad Sikandar Lal Khan, S. Réhman, Haibo Li
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541833
Abstract: In this paper we present a vision-based intuitive interaction method for smart mobile devices. It is based on markerless finger gesture detection, which aims to provide a "natural user interface", and requires no additional hardware for real-time finger gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we designed two smartphone applications: a circle menu application, which presents graphics and the smartphone's status information, and a bouncing ball game controlled by finger gestures. Users interact with these applications through finger gestures in the smartphone's camera view, which trigger interaction events and generate activity sequences for interactive buffers. Our preliminary user study demonstrates the effectiveness and social acceptability of the proposed interaction approach.
AR typing interface for mobile devices
Masakazu Higuchi, T. Komuro
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541847
Abstract: We propose a new user interface system for mobile devices. Using augmented reality (AR) technology, the system overlays virtual objects on real images captured by a camera attached to the back of a mobile device, and the user operates the device by manipulating the virtual objects with his or her hand in the space behind it. This system allows the user to operate the device in a wide three-dimensional space and to select small objects easily. The AR technology also gives the user a sense of reality when operating the device. We developed a typing application using our system and verified its effectiveness through user studies. The results showed that more than half of the subjects felt that the operation area of the proposed system is larger than that of a smartphone, and that both AR and an unfixed key plane are effective for improving typing speed.
MobiZone: personalized interaction with multiple items on interactive surfaces
M. Rader, Clemens Holzmann, E. Rukzio, Julian Seifert
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541836
Abstract: Current interactive surfaces do not support user identification, so personalized applications that consider user-specific access control are not possible. Diverse approaches for identifying and distinguishing users have been investigated in previous research. Token-based approaches -- e.g., those that utilize the user's mobile phone -- are especially promising, as they also allow the user's personal digital context (e.g., stored messages, contacts, or media data) to be taken into account. However, existing interaction techniques are limited in their ability to let users manipulate (e.g., select or copy) multiple items at the same time, as they become cumbersome once the number of items exceeds a certain amount. We present MobiZone, a technique that enables users to interact with large numbers of items on an interactive surface while providing personalized access by using the mobile phone as a token. MobiZone provides a spatial zone that can be positioned, resized, and associated with any action according to the user's needs; items enclosed by the zone can be manipulated simultaneously. We present three interaction techniques (FlashLight&Control, Remote&Control, and Place&Control) that enable users to control the zone. Additionally, we report the results of a comparative user study of the three techniques. The results indicate that users are fastest with Remote&Control, and they also rated Remote&Control slightly higher than the other techniques.
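The zone-based selection described above can be illustrated with a minimal geometric sketch. The `Zone` rectangle and item model below are assumptions for illustration, not MobiZone's actual data structures: items whose positions fall inside the (repositionable, resizable) zone are selected together, so one action applies to all of them at once.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """A repositionable, resizable rectangular selection zone
    (hypothetical model of the spatial zone described in the paper)."""
    x: float
    y: float
    width: float
    height: float

    def contains(self, px, py):
        # Point-in-rectangle test, inclusive of the zone's edges.
        return (self.x <= px <= self.x + self.width and
                self.y <= py <= self.y + self.height)

def select_enclosed(zone, items):
    """Return the names of all items whose position lies inside the
    zone, so a single action can be applied to them simultaneously.
    items: dict mapping item name -> (x, y) position."""
    return [name for name, (px, py) in items.items() if zone.contains(px, py)]
```

Resizing the zone (e.g. via one of the three control techniques) simply widens the rectangle, which changes the enclosed set on the next query.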
Inferring mood in ubiquitous conversational video
Dairazalia Sanchez-Cortes, Joan-Isaac Biel, Shiro Kumano, Junji Yamato, K. Otsuka, D. Gática-Pérez
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541864
Abstract: Conversational social video is becoming a worldwide trend. Video communication allows a more natural interaction when sharing personal news, ideas, and opinions, as it transmits both verbal content and nonverbal behavior. However, the automatic analysis of natural mood is challenging, since mood is displayed in parallel via voice, face, and body. This paper presents an automatic approach to inferring 11 natural mood categories in conversational social video using single and multimodal nonverbal cues extracted from video blogs (vlogs) on YouTube. The mood labels used in our work were collected via crowdsourcing. Our approach is promising for several of the studied mood categories. Our study demonstrates that although multimodal features perform better than single-channel features, not all available channels are always needed to accurately discriminate mood in videos.
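One common way to combine voice, face, and body cues is early fusion, i.e. concatenating per-modality feature vectors before classification. The helper below is a minimal illustration of that idea only (the channel names and the `fuse` function are assumptions, not the paper's pipeline); its optional channel subset mirrors the finding that not every channel is needed for every mood category.

```python
def fuse(modalities, channels=None):
    """Early fusion: concatenate feature vectors from the chosen
    modality channels into one vector for a downstream classifier.
    modalities: dict mapping channel name -> feature list.
    channels: optional ordered subset of channel names; by default
    all channels are used in sorted-name order for determinism."""
    names = channels if channels is not None else sorted(modalities)
    fused = []
    for name in names:
        fused.extend(modalities[name])
    return fused
```

A per-category model could then be trained on `fuse(feats)` for the full multimodal case, or on `fuse(feats, channels=[...])` to test whether a smaller channel subset discriminates that mood just as well.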
Assisting maintainers in the semiconductor factory: iterative co-design of a mobile interface and a situated display
Roland Buchner, Patricia M. Kluckner, A. Weiss, M. Tscheligi
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541874
Abstract: Maintaining machines in semiconductor factories is a challenging task that, so far, is not sufficiently supported by mobile interactive technology. This paper describes the early development of a maintainer support system. Our goal was to develop a user-experience prototype, consisting of a mobile and a situated interface, to support maintenance activities and the coordination between maintainers and shift leads. The interfaces are meant to reduce the amount of information and to improve awareness of defective equipment. The efforts described in this paper include the development of a conceptual user-experience prototype following an iterative user-centered design approach. Based on the requirements analysis, an initial mock-up of both interfaces was developed and later discussed with maintainers in a workshop. With an interactive Wizard of Oz (WOz) prototype, we examined the cooperative aspect as well as user-experience factors (e.g., distraction, trust, usability) in a simulated factory environment.
NoseTapping: what else can you do with your nose?
Ondrej Polácek, T. Grill, M. Tscheligi
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541867
Abstract: Touch-screen interfaces on smart devices have become ubiquitous in our everyday lives. In specific contextual situations, the capacitive touch interfaces used on current mobile devices are not accessible -- for example, when wearing gloves during a cold winter. Although the market has responded with capacitive styluses and touchscreen-compatible gloves, these solutions are not widely accepted or appropriate in such situations. Using the nose instead of the fingers is an easy way to overcome this problem. In this paper, we present in-depth results of a user study on nose-based interaction, complemented by an online survey that elaborates the potential and acceptance of this interaction style. Based on the insights gained in the study, we identify the main challenges of nose-based interaction and contribute to the state of the art of design principles for this interaction style by adding two new design principles and refining one existing principle. In addition, we investigated the emotional effect of nose-based interaction based on the user experiences that emerged during the study.
Human activity recognition using social media data
Zack Z. Zhu, Ulf Blanke, Alberto Calatroni, G. Tröster
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541852
Abstract: Human activity recognition is a core component of context-aware, ubiquitous computing systems. Traditionally, this task is accomplished by analyzing signals from wearable motion sensors. While such signals can effectively distinguish various low-level activities (e.g., walking or standing), two issues exist: first, high-level activities (e.g., watching movies or attending lectures) are difficult to distinguish from motion data alone; second, instrumenting complex body sensor networks at population scale is impractical. In this work, we take the alternative approach of leveraging rich, dynamic, crowd-generated self-report data as the basis for in-situ activity recognition. By treating the user as the "sensor", we make use of implicit signals emitted from natural use of mobile smartphones. Applying an L1-regularized linear SVM to features derived from textual content, semantic location, and time, we are able to infer 10 meaningful classes of daily-life activities with a mean accuracy of up to 83.9%. Our work illustrates a promising first step towards comprehensive, high-level activity recognition using free, crowd-generated social media data.
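The classifier named in the abstract, a linear SVM with an L1 penalty, can be sketched as a small stochastic subgradient trainer on hinge loss plus an L1 term. This is a minimal sketch of that technique only; the toy activity features below (word counts plus an evening flag) are illustrative assumptions, not the paper's actual feature set, which combined text, semantic location, and time.

```python
import random

def train_l1_svm(X, y, lam=0.01, lr=0.05, epochs=200, seed=0):
    """Linear SVM trained with hinge loss plus an L1 penalty via
    stochastic subgradient descent (the L1 term encourages sparse
    weights). X: list of feature vectors; y: labels in {-1, +1}."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    order = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(order)
        for i in order:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            for j in range(d):
                # L1 subgradient: lam * sign(w_j) ...
                g = lam * ((w[j] > 0) - (w[j] < 0))
                # ... plus the hinge-loss subgradient when the margin is violated
                if margin < 1:
                    g -= y[i] * X[i][j]
                w[j] -= lr * g
            if margin < 1:
                b += lr * y[i]
    return w, b

def predict(w, b, x):
    """Sign of the learned linear decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

For the paper's multi-class setting (10 activity classes), one such binary classifier per class in a one-vs-rest scheme would be the usual construction.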
Is autostereoscopy useful for handheld AR?
Frederic Kerber, Pascal Lessel, Michael Mauderer, Florian Daiber, Antti Oulasvirta, A. Krüger
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541851
Abstract: Some recent mobile devices have autostereoscopic displays that let users perceive stereoscopic 3D without lenses or filters. This might be used to improve depth discrimination of objects overlaid on a camera viewfinder in augmented reality (AR). However, it is not known whether autostereoscopy is useful under the viewing conditions typical of mobile AR. This paper investigates the use of autostereoscopic displays in a psychophysical experiment with twelve participants using a state-of-the-art commercial device. The main finding is that stereoscopy has a negligible effect, if any, on a small screen, even under favorable viewing conditions. Instead, the traditional depth cues, in particular object size, drive depth discrimination.
Evaluation of hybrid front- and back-of-device interaction on mobile devices
Markus Löchtefeld, Christoph Hirtz, Sven Gehring
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, 2013-12-02. DOI: 10.1145/2541831.2541865
Abstract: With the recent trend of increasing display sizes on mobile devices, one-handed interaction has become increasingly difficult when the user wants to maintain a safe grip on the device at the same time. In this paper we evaluate how a combination of hybrid front- and back-of-device touch input can be used to overcome these difficulties when using a mobile device with one hand. Our evaluation shows that, even though such a technique is slower than conventional front-of-device input, it allows for accurate and safe input.