{"title":"AudioGest: enabling fine-grained hand gesture detection by decoding echo signal","authors":"Wenjie Ruan, Quan Z. Sheng, Lei Yang, Tao Gu, Peipei Xu, Longfei Shangguan","doi":"10.1145/2971648.2971736","DOIUrl":"https://doi.org/10.1145/2971648.2971736","url":null,"abstract":"Hand gestures are becoming an increasingly popular means of interacting with consumer electronic devices such as mobile phones, tablets, and laptops. In this paper, we present AudioGest, a device-free gesture recognition system that can accurately sense in-air hand movement around the user's devices. Compared to the state of the art, AudioGest achieves fine-grained hand detection using only a single built-in speaker-microphone pair, without any extra hardware or infrastructure support and with no training. Our system can accurately recognize various hand gestures and estimate the hand's in-air time, average moving speed, and waving range. We achieve this by transforming the device into an active sonar system that transmits an inaudible audio signal and decodes the hand's echoes at its microphone. We address several challenges, including cleaning the noisy reflected sound signal, interpreting the echo spectrogram into hand gestures, decoding Doppler frequency shifts into hand waving speed and range, and remaining robust to environmental motion and signal drift. We implement a proof-of-concept prototype on three different electronic devices and extensively evaluate the system in four real-world scenarios using 3,900 hand gestures collected by five users over more than two weeks. Our results show that AudioGest can detect six hand gestures with an accuracy of up to 96%, and by distinguishing gesture attributes, it can provide up to 162 control commands for various applications.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"47 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134608983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
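The AudioGest abstract decodes Doppler frequency shifts into hand speed. For an echo reflected off a moving hand, the two-way Doppler relation f_d = 2·v·f0/c can be inverted to recover speed. A minimal sketch of that inversion, assuming a 20 kHz near-inaudible carrier (an illustrative value, not one taken from the paper):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature


def hand_speed(doppler_shift_hz, carrier_hz=20_000.0):
    """Invert the two-way Doppler relation f_d = 2*v*f0/c for a sonar echo.

    carrier_hz is an assumed near-inaudible tone for illustration only.
    Returns the radial speed of the reflector in m/s.
    """
    return doppler_shift_hz * SPEED_OF_SOUND / (2.0 * carrier_hz)


# A 100 Hz shift on a 20 kHz tone corresponds to roughly 0.86 m/s.
speed = hand_speed(100.0)
```

The factor of 2 arises because the wave is Doppler-shifted twice: once on the way to the moving hand and once on reflection back to the microphone.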
{"title":"Collective use of a fabric-based interactive surface to support early development in toddler classrooms","authors":"Franceli Linney Cibrian, Nadir Weibel, M. Tentori","doi":"10.1145/2971648.2971695","DOIUrl":"https://doi.org/10.1145/2971648.2971695","url":null,"abstract":"Early instruction plays a crucial role in allowing toddlers to develop social, cognitive, and sensory-motor skills. Free play is important in any early development program, but designing activities for free play is challenging. In this paper, we investigate BendableSound, a fabric-based interactive surface that enables young children to play piano sounds by touching the fabric, and its potential value in early education classrooms. We conducted a 9-week exploratory study in which 22 toddlers and 5 teachers used BendableSound during free play activities inside their classroom. Our qualitative results indicate that BendableSound was successfully adopted and integrated into toddler classrooms and could positively impact cognitive, social, and physical development. These results offer implications for the design of deformable surfaces and for their integration into activities that support the early development of toddlers.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116271219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LumiO: a plaque-aware toothbrush","authors":"T. Yoshitani, Masa Ogata, K. Yatani","doi":"10.1145/2971648.2971704","DOIUrl":"https://doi.org/10.1145/2971648.2971704","url":null,"abstract":"Toothbrushing plays an important role in daily dental plaque removal for preventive dentistry. Prior work has investigated improving toothbrushing with sensing technologies, but existing toothbrushing support focuses mostly on estimating brushing coverage. Users thus have only indirect information about how well their toothbrushing removes dental plaque. We present LumiO, a toothbrush that offers users continuous feedback on the amount of plaque on their teeth. LumiO uses a well-known method for plaque detection called Quantitative Light-induced Fluorescence (QLF). QLF exploits the red fluorescence that bacteria in plaque exhibit when illuminated by blue-violet light: the blue-violet light excites this fluorescence, and a camera with an optical filter can capture the plaque in pink. We incorporate this technology into an electric toothbrush to improve plaque removal in daily dental care. This paper first discusses related work on sensing for oral activities and interaction, as well as technology-supported dental care. We then describe the principles of QLF, the hardware design of LumiO, and our vision-based plaque detection method. Our evaluations show that the vision-based plaque detection method with three machine learning techniques can achieve F-measures of 0.68 -- 0.92 under user-dependent training. Qualitative evidence also suggests that study participants gained improved awareness of plaque and built confidence in their toothbrushing.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125604789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
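The LumiO abstract describes capturing QLF-excited plaque "in pink" through an optical filter and then detecting it with vision-based machine learning. One simple way such red fluorescence can be isolated in an RGB frame is a per-pixel red-to-green ratio threshold; this is a generic illustration of the idea, not the paper's actual detector, and the threshold value is an assumption:

```python
import numpy as np


def plaque_mask(rgb, ratio_threshold=1.5):
    """Flag pixels whose red/green ratio suggests QLF red fluorescence.

    rgb: HxWx3 uint8 image. ratio_threshold is illustrative only; a real
    detector (as in the paper) would use trained classifiers instead.
    """
    rgb = rgb.astype(float)
    r, g = rgb[..., 0], rgb[..., 1]
    return r / (g + 1e-6) > ratio_threshold  # boolean HxW mask


# Toy 2x2 frame: two strongly red pixels, two neutral ones.
img = np.array([[[200, 50, 60], [100, 100, 100]],
                [[90, 80, 85], [180, 40, 50]]], dtype=np.uint8)
mask = plaque_mask(img)
```

A ratio test like this is robust to overall brightness changes, which is why color-ratio features are a common starting point before handing pixels to a learned classifier.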
{"title":"iKneeBraces: knee adduction moment evaluation measured by motion sensors in gait detection","authors":"Hsin-Ruey Tsai, Shih-Yao Wei, Jui-Chun Hsiao, Ting-Wei Chiu, Yi-Ping Lo, Chi-Feng Keng, Y. Hung, Jin-Jong Chen","doi":"10.1145/2971648.2971675","DOIUrl":"https://doi.org/10.1145/2971648.2971675","url":null,"abstract":"We propose light-weight wearable devices, iKneeBraces, to help prevent knee osteoarthritis (OA) through knee adduction moment (KAM) evaluation. Each iKneeBrace consists of two inertial measurement units (IMUs) that measure shin and thigh angles. KAM is estimated from the ground reaction force (GRF), the knee position, and the center-of-pressure position. Instead of the heavy, bulky 3DoF force plates conventionally used, we build a regression model that takes the shin and thigh angles from iKneeBrace as 2D input to infer the GRF direction and, in turn, estimate KAM. We perform an experiment to evaluate the method. The results show that iKneeBrace can infer KAM close to the ground truth at the first peak, the most important part for preventing knee OA. Furthermore, the proposed method could infer KAM in all parts if better IMUs were used in iKneeBrace in the future. The proposed method not only makes KAM evaluation portable but also requires only light-weight devices.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125628102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
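The iKneeBraces abstract estimates KAM from the GRF, the knee position, and the center-of-pressure position. In a simplified frontal-plane model, KAM is the 2D cross product of the lever arm (knee center to center of pressure) with the GRF vector. The sketch below shows that geometry only; the paper's contribution is inferring the GRF direction from IMU-measured shin and thigh angles, which is not reproduced here:

```python
def kam_frontal_plane(grf_vec, cop_xy, knee_xy):
    """Simplified 2D frontal-plane knee adduction moment (N*m).

    grf_vec: (Fx, Fy) ground reaction force in the frontal plane.
    cop_xy:  (x, y) center of pressure; knee_xy: (x, y) knee joint center.
    The moment is the 2D cross product r x F of the lever arm
    r = cop - knee with the GRF; its sign distinguishes adduction
    from abduction. Coordinates and inputs are illustrative.
    """
    rx, ry = cop_xy[0] - knee_xy[0], cop_xy[1] - knee_xy[1]
    fx, fy = grf_vec
    return rx * fy - ry * fx


# A 700 N vertical GRF acting 5 cm lateral to a knee 0.5 m above
# the ground yields a 35 N*m frontal-plane moment.
kam = kam_frontal_plane((0.0, 700.0), (0.05, 0.0), (0.0, 0.5))
```

Because the vertical GRF component dominates, small errors in the inferred GRF direction mainly perturb the lever arm term, which is why the paper focuses on accuracy at the first KAM peak.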
{"title":"\"It's like living with a friendly stranger\": perceptions of personality traits in a smart home","authors":"Sarah Mennicken, Oliver Zihler, Frida Juldaschewa, Veronika Molnar, David Aggeler, E. Huang","doi":"10.1145/2971648.2971757","DOIUrl":"https://doi.org/10.1145/2971648.2971757","url":null,"abstract":"Interacting with smart homes and Internet of Things devices is still far from a seamless experience, as there are often many different interfaces involved. Due to improvements in speech recognition and synthesis, voice-based agents are becoming a more common way to give users a unified interface to individual systems. These agents often exhibit human-like personality traits, such as responding in a humorous way or showing caring behavior in reminders. We explore this approach in the context of smart homes and home automation. Should a smart home have a proactive or passive personality? Should it try to socialize with inhabitants? What personality traits do people consider desirable or undesirable? To learn more about this design space, we created two variants of a usage scenario of a domestic routine in a smart home to demonstrate different personality trait combinations. Forty-one participants experienced the scenario and provided feedback about the designs. In this paper, we report findings about participants' preferences, how they responded to the proactive and social behavior our prototype demonstrated, and implications for the design of agent-based interfaces in the home.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122447608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning from the ubiquitous language: an empirical analysis of emoji usage of smartphone users","authors":"Xuan Lu, W. Ai, Xuanzhe Liu, Qian Li, Ning Wang, Gang Huang, Q. Mei","doi":"10.1145/2971648.2971724","DOIUrl":"https://doi.org/10.1145/2971648.2971724","url":null,"abstract":"Emojis have been widely used to simplify emotional expression and enrich user experience. As an interesting practice of ubiquitous computing, emojis are adopted by Internet users in many different countries, on many devices (smartphones in particular), and in many applications. This \"ubiquitous\" usage of emojis enables us to study and compare user behaviors and preferences across countries and cultures. We present an analysis of how smartphone users use emojis, based on a very large data set collected from a popular emoji keyboard. The data set contains a complete month of emoji usage by 3.88 million active users from 212 countries and regions. We demonstrate that the categories and frequencies of emojis used provide rich signals for identifying and understanding cultural differences among smartphone users. Users from different countries show significantly different emoji preferences, which is consistent with Hofstede's well-known cultural dimensions model.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132056079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enabling on-body transmissions with commodity devices","authors":"Mehrdad Hessar, Vikram Iyer, Shyamnath Gollakota","doi":"10.1145/2971648.2971682","DOIUrl":"https://doi.org/10.1145/2971648.2971682","url":null,"abstract":"We show for the first time that commodity devices can be used to generate wireless data transmissions that are confined to the human body. Specifically, we show that commodity input devices such as fingerprint sensors and touchpads can be used to transmit information only to wireless receivers that are in contact with the body. We characterize the propagation of the resulting transmissions across the whole body and run experiments with ten subjects to demonstrate that our approach generalizes across different body types and postures. We also evaluate our communication system in the presence of interference from other wearable devices, such as smartwatches, and from nearby metallic surfaces. Finally, by modulating the operations of these input devices, we demonstrate bit rates of up to 50 bits per second over the human body.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132162478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GreenTouch: transparent energy management for cellular data radios","authors":"Shravan Aras, C. Gniady","doi":"10.1145/2971648.2971660","DOIUrl":"https://doi.org/10.1145/2971648.2971660","url":null,"abstract":"Smartphones come equipped with multiple radios for cellular data communication, such as 4G LTE, 3G, and 2G, which offer different bandwidths and power profiles. 4G LTE offers the highest bandwidth and is desired by users because it provides quick responses while browsing the Internet, streaming media, or using the numerous network-aware applications available. However, the majority of the time this high bandwidth is unnecessary, and the demand can easily be met by 3G radios at a reduced power level. While 2G radios demand even lower power, they do not offer adequate bandwidth for interactive applications; the 2G radio may, however, be used to provide connectivity while the phone is in standby mode. To address these differing bandwidth demands, we propose GreenTouch, a system that dynamically adapts to the bandwidth demand and system state by switching between 4G LTE, 3G, and 2G, with the goal of minimizing delays and maximizing energy efficiency. GreenTouch associates user behavior with network activity by capturing and correlating user interactions with the touch display. Using top applications from the Google Play store, we show that GreenTouch can reduce the energy consumption of the radios by 10% on average compared to running the same applications on standard Android. This translates to an overall energy savings of 7.5% for the entire smartphone.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126351201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
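The GreenTouch abstract reports a 10% radio energy reduction becoming a 7.5% device-wide saving. The arithmetic connecting the two is simply the radio's share of total device energy times the radio-level reduction; the ~75% share below is implied by the two reported figures under that model, not stated in the paper:

```python
def overall_savings(radio_share, radio_reduction):
    """Fraction of total device energy saved when only the radio's
    consumption is reduced. Both arguments are fractions in [0, 1].
    """
    return radio_share * radio_reduction


# 10% radio reduction with radios at ~75% of total energy -> 7.5% overall,
# matching the abstract's two figures under this simple model.
saving = overall_savings(0.75, 0.10)
```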
{"title":"Selecting home appliances with smart glass based on contextual information","authors":"Quan Kong, T. Maekawa, Taiki Miyanishi, Takayuki Suyama","doi":"10.1145/2971648.2971651","DOIUrl":"https://doi.org/10.1145/2971648.2971651","url":null,"abstract":"We propose a method for selecting home appliances using a smart glass, which facilitates the control of network-connected appliances in a smart house. Our proposed method is image-based and enables smart glass users to select a particular appliance by simply looking at it. The main feature of our method is that it achieves high-precision appliance selection by using user contextual information, such as position and activity inferred from various sensor data, in addition to camera images captured by the glass, because such contextual information is strongly correlated with which home appliance a user wants to control in daily life. We design a state-of-the-art appliance selection method by fusing image features extracted with deep learning techniques and context information estimated with non-parametric Bayesian techniques, within a multiple kernel learning framework. Our experimental results, based on sensor data obtained in an actual house equipped with many network-connected appliances, show the effectiveness of our method.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124802227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wearable sensor based multimodal human activity recognition exploiting the diversity of classifier ensemble","authors":"Haodong Guo, Ling Chen, Liangying Peng, Gencai Chen","doi":"10.1145/2971648.2971708","DOIUrl":"https://doi.org/10.1145/2971648.2971708","url":null,"abstract":"Effectively utilizing multimodal information (e.g., heart rate and acceleration) is a promising way to achieve wearable sensor based human activity recognition (HAR). In this paper, we propose an activity recognition approach, MARCEL (Multimodal Activity Recognition with Classifier Ensemble), which exploits the diversity of base classifiers to construct a good ensemble for multimodal HAR; the diversity measure is obtained from both labeled and unlabeled data. MARCEL uses neural networks (NNs) as base classifiers to construct the HAR model, and the diversity of the classifier ensemble is embedded in the model's error function. In each iteration, the error of the model is decomposed and back-propagated to the base classifiers. To ensure the overall accuracy of the model, the weights of the base classifiers are learnt in the classifier fusion process with sparse group lasso. Extensive experiments show that MARCEL yields competitive HAR performance and is particularly effective at exploiting multimodal signals.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114465123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}