{"title":"Test time feature ordering with FOCUS: interactive predictions with minimal user burden","authors":"Kirstin Early, S. Fienberg, Jennifer Mankoff","doi":"10.1145/2971648.2971748","DOIUrl":"https://doi.org/10.1145/2971648.2971748","url":null,"abstract":"Predictive algorithms are a critical part of the ubiquitous computing vision, enabling appropriate action on behalf of users. A common class of algorithms, which has seen uptake in ubiquitous computing, is supervised machine learning algorithms. Such algorithms are trained to make predictions based on a set of features (selected at training time). However, features needed at prediction time (such as mobile information that impacts battery life, or information collected from users via experience sampling) may be costly to collect. In addition, both cost and value of a feature may change dynamically based on real-world context (such as battery life or user location) and prediction context (what features are already known, and what their values are). We contribute a framework for dynamically trading off feature cost against prediction quality at prediction time. We demonstrate this work in the context of three prediction tasks: providing prospective tenants estimates for energy costs in potential homes, estimating momentary stress levels from both sensed and user-provided mobile data, and classifying images to facilitate opportunistic device interactions. Our results show that while our approach to cost-sensitive feature selection is up to 45% less costly than competing approaches, error rates are equivalent or better.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114842912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WiFinger: talk to your smart devices with finger-grained gesture","authors":"Hong Li, Wei Yang, Jianxin Wang, Yang Xu, Liusheng Huang","doi":"10.1145/2971648.2971738","DOIUrl":"https://doi.org/10.1145/2971648.2971738","url":null,"abstract":"In recent literatures, WiFi signals have been widely used to \"sense\" people's locations and activities. Researchers have exploited the characteristics of wireless signals to \"hear\" people's talk and \"see\" keystrokes by human users. Inspired by the excellent work of relevant scholars, we turn to explore the field of human-computer interaction using finger-grained gestures under WiFi environment. In this paper, we present Wi-Finger - the first solution using ubiquitous wireless signals to achieve number text input in WiFi devices. We implement a prototype of WiFinger on a commercial Wi-Fi infrastructure. Our scheme is based on the key intuition that while performing a certain gesture, the fingers of a user move in a unique formation and direction and thus generate a unique pattern in the time series of Channel State Information (CSI) values. WiFinger is deigned to recognize a set of finger-grained gestures, which are further used to realize continuous text input in off-the-shelf WiFi devices. As the results show, WiFinger achieves up to 90.4% average classification accuracy for recognizing 9 digits finger-grained gestures from American Sign Language (ASL), and its average accuracy for single individual number text input in desktop reaches 82.67% within 90 digits.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115517144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CrossCheck: toward passive sensing and detection of mental health changes in people with schizophrenia","authors":"Rui Wang, M. Aung, Saeed Abdullah, R. Brian, A. Campbell, Tanzeem Choudhury, M. Hauser, J. Kane, Michael Merrill, E. Scherer, V. W. Tseng, Dror Ben-Zeev","doi":"10.1145/2971648.2971740","DOIUrl":"https://doi.org/10.1145/2971648.2971740","url":null,"abstract":"Early detection of mental health changes in individuals with serious mental illness is critical for effective intervention. CrossCheck is the first step towards the passive monitoring of mental health indicators in patients with schizophrenia and paves the way towards relapse prediction and early intervention. In this paper, we present initial results from an ongoing randomized control trial, where passive smartphone sensor data is collected from 21 outpatients with schizophrenia recently discharged from hospital over a period ranging from 2-8.5 months. Our results indicate that there are statistically significant associations between automatically tracked behavioral features related to sleep, mobility, conversations, smart-phone usage and self-reported indicators of mental health in schizophrenia. Using these features we build inference models capable of accurately predicting aggregated scores of mental health indicators in schizophrenia with a mean error of 7.6% of the score range. Finally, we discuss results on the level of personalization that is needed to account for the known variations within people. We show that by leveraging knowledge from a population with schizophrenia, it is possible to train accurate personalized models that require fewer individual-specific data to quickly adapt to new users.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115628936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is only one gps position sufficient to locate you to the road network accurately?","authors":"Hao-qing Wu, Weiwei Sun, Baihua Zheng","doi":"10.1145/2971648.2971702","DOIUrl":"https://doi.org/10.1145/2971648.2971702","url":null,"abstract":"Locating only one GPS position to a road segment accurately is crucial to many location-based services such as mobile taxi-hailing service, geo-tagging, POI check-in, etc. This problem is challenging because of errors including the GPS errors and the digital map errors (misalignment and the same representation of bidirectional roads) and a lack of context information. To the best of our knowledge, no existing work studies this problem directly and the work to reduce GPS signal errors by considering hardware aspect is the most relevant. Consequently, this work is the first attempt to solve the problem of locating one GPS position to a road segment. We study the problem in a data-driven view to make this process ubiquitous by proposing a tractable, efficient and robust generative model. In addition, we extend our solution to the real application scenario, i.e., taxi-hailing service, and propose an approach to further improve the result accuracy by considering destination information. We use the real taxi GPS data to evaluate our approach. The results show that our approach outperforms all the existing approaches significantly while maintaining robustness, and it can achieve an accuracy as high as 90% in some situations.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126579220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CueSee: exploring visual cues for people with low vision to facilitate a visual search task","authors":"Yuhang Zhao, S. Szpiro, J. Knighten, Shiri Azenkot","doi":"10.1145/2971648.2971730","DOIUrl":"https://doi.org/10.1145/2971648.2971730","url":null,"abstract":"Visual search is a major challenge for low vision people. Conventional vision enhancements like magnification help low vision people see more details, but cannot indicate the location of a target in a visual search task. In this paper, we explore visual cues---a new approach to facilitate visual search tasks for low vision people. We focus on product search and present CueSee, an augmented reality application on a head-mounted display (HMD) that facilitates product search by recognizing the product automatically and using visual cues to direct the user's attention to the product. We designed five visual cues that users can combine to suit their visual condition. We evaluated the visual cues with 12 low vision participants and found that participants preferred using our cues to conventional enhancements for product search. We also found that CueSee outperformed participants' best-corrected vision in both time and accuracy.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126741431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accuracy of interpreting pointing gestures in egocentric view","authors":"Deepak Akkil, Poika Isokoski","doi":"10.1145/2971648.2971687","DOIUrl":"https://doi.org/10.1145/2971648.2971687","url":null,"abstract":"Communicating spatial information by pointing is ubiquitous in human interactions. With the growing use of head-mounted cameras for collaborative purposes, it is important to assess how accurately viewers of the resulting egocentric videos can interpret pointing acts. We conducted an experiment to compare the accuracy of interpreting four different pointing techniques: hand pointing, head pointing, gaze pointing and hand+gaze pointing. Our results suggest that superimposing the gaze information on the egocentric video can enable viewers to determine pointing targets more accurately and more confidently. Hand pointing performed best when the pointing target was straight ahead and head pointing was the least preferred in terms of ease of interpretation. Our results can inform the design of collaborative applications that make use of the egocentric view.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"22 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125842474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finding a store, searching for a product: a study of daily challenges of low vision people","authors":"S. Szpiro, Yuhang Zhao, Shiri Azenkot","doi":"10.1145/2971648.2971723","DOIUrl":"https://doi.org/10.1145/2971648.2971723","url":null,"abstract":"Visual impairments encompass a range of visual abilities. People with low vision have functional vision and thus their experiences are likely to be different from people with no vision. We sought to answer two research questions: (1) what challenges do low vision people face when performing daily activities and (2) what aids (high- and low-tech) do low vision people use to alleviate these challenges? Our goal was to reveal gaps in current technologies that can be addressed by the UbiComp community. Using contextual inquiry, we observed 11 low vision people perform a wayfinding and shopping task in an unfamiliar environment. The task involved wayfinding and searching and purchasing a product. We found that, although there are low vision aids on the market, participants mostly used their smartphones, despite interface accessibility challenges. While smartphones helped them outdoors, participants were overwhelmed and frustrated when shopping in a store. We discuss the inadequacies of existing aids and highlight the need for systems that enhance visual information, rather than convert it to audio or tactile.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129061687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Situational impairments to mobile interaction in cold environments","authors":"Zhanna Sarsenbayeva, Jorge Gonçalves, F. García-Peñalvo, Simon Klakegg, S. Rissanen, H. Rintamäki, J. Hannu, V. Kostakos","doi":"10.1145/2971648.2971734","DOIUrl":"https://doi.org/10.1145/2971648.2971734","url":null,"abstract":"We evaluate the situational impairments caused by cold ambient temperature on fine-motor movement and vigilance during mobile interaction. For this purpose, we tested two mobile phone applications that measure fine motor skills and vigilance in controlled temperature settings. Our results show that cold adversely affected participants' fine-motor skills performance, but not vigilance. Based on our results we highlight the importance of correcting measurements when investigating performance of cognitive tasks to take into account the physical element of the tasks. Finally, we identify a number of design recommendations from literature that can mitigate the adverse effect of cold ambiance on interaction with mobile devices.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128884577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A contextual collaborative approach for app usage forecasting","authors":"Yingzi Wang, Nicholas Jing Yuan, Yu Sun, Fuzheng Zhang, Xing Xie, Qi Liu, Enhong Chen","doi":"10.1145/2971648.2971729","DOIUrl":"https://doi.org/10.1145/2971648.2971729","url":null,"abstract":"Fine-grained long-term forecasting enables many emerging recommendation applications such as forecasting the usage amounts of various apps to guide future investments, and forecasting users' seasonal demands for a certain commodity to find potential repeat buyers. For these applications, there often exists certain homogeneity in terms of similar users and items (e.g., apps), which also correlates with various contexts like users' spatial movements and physical environments. Most existing works only focus on predicting the upcoming situation such as the next used app or next online purchase, without considering the long-term temporal co-evolution of items and contexts and the homogeneity among all dimensions. In this paper, we propose a contextual collaborative forecasting (CCF) model to address the above issues. The model integrates contextual collaborative filtering with time series analysis, and simultaneously captures various components of temporal patterns, including trend, seasonality, and stationarity. The approach models the temporal homogeneity of similar users, items, and contexts. We evaluate the model on a large real-world app usage dataset, which validates that CCF outperforms state-of-the-art methods in terms of both accuracy and efficiency for long-term app usage forecasting.","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114782083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ARTcode: preserve art and code in any image","authors":"Zhe Yang, Yuting Bao, Chuhao Luo, Xingya Zhao, Siyu Zhu, Chunyi Peng, Yunxin Liu, Xinbing Wang","doi":"10.1145/2971648.2971733","DOIUrl":"https://doi.org/10.1145/2971648.2971733","url":null,"abstract":"The ubiquitous QR codes and some similar barcodes are becoming a convenient and popular approach to impromptu communication between mobile devices and their surrounding cyber-physical world. However, such codes suffer from two common drawbacks: poor viewing experience and inability to be identified through itself. In this work, we propose ART-code-- Adaptive Robust doT matrix barcode, which aims to preserve ART and CODE features in one visual pattern. It works on any surface (paper or electronic displays) and is able to convert any image or any form of human-readable contents (e.g., a picture, a logo, a slogan) into an ARTcode. It looks like an image which retains human-readable and aesthetically pleasant contents, and in the meanwhile, it acts as a QR code which conveys data bits over the visual channel. The core enablers in ARTcode are (1) the design of the colored dot matrix for data embedding with little distortion from the original image and (2) a comprehensive error correction scheme which enhances decoding robustness against noises and interferences from the original image in ARTcode. We implement ARTcode with the receiver on Android phones and the sender from a PC or a phone (it can be printed in paper). We conduct extensive user survey and experiments for evaluation. It validates the effectiveness and wide applicability of ARTcode: It works well with all of 197 images randomly downloaded, covering representative categories of the gray-scale images, logos, colored ones with low/medium/strong contrasts. The image quality is quite acceptable in a subjective user-perception survey with 50 participants and data communication accuracy achieves as high as 99% in almost all the cases (> 96% raw accuracy in ARTcode without error detection and other schemes).","PeriodicalId":303792,"journal":{"name":"Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132751962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}