{"title":"Indoor collocation: exploring the ultralocal context","authors":"Astrid Bicamumpaka Shema, Yun Huang","doi":"10.1145/2957265.2962653","DOIUrl":"https://doi.org/10.1145/2957265.2962653","url":null,"abstract":"Prior research has investigated how to improve awareness of collocation at neighborhood scales or citywide through the use of smartphone apps. The availability of indoor maps and more accurate indoor navigation technologies motivate us to investigate the concept of collocation in the context of indoor settings. In this paper, we introduce the notion of ultralocality, which involves people and various kinds of resources collocated in an indoor environment. We present our interview study and initial results that help us understand how people perceive collocation in an ultralocal enviornment. We also introduce a mobile application that helps people explore an ultralocal environment in the college campus buildings.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122878158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"End-user personalization of context-dependent applications in AAL scenarios","authors":"Giuseppe Ghiani, Marco Manca, F. Paternò, C. Santoro","doi":"10.1145/2957265.2965005","DOIUrl":"https://doi.org/10.1145/2957265.2965005","url":null,"abstract":"The design and development of flexible applications able to match the many possible user needs and provide high quality user experience is still a major issue. In ambient-assisted living scenarios there is the need of giving adequate support to elderly so that they can independently live at home. Thus, providing personalized assistance is particularly critical because ageing people often have different ranges of individual needs, requirements and disabilities. In this position paper we introduce a solution based on an End-User Development environment that allows patients and caregivers to tailor the context-dependent behaviour of their Web applications in order to facilitate patients' life. This is done through the specification of trigger-action rules to support application customization.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125218965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The importance of visual attention for adaptive interfaces","authors":"F. Göbel, I. Giannopoulos, M. Raubal","doi":"10.1145/2957265.2962659","DOIUrl":"https://doi.org/10.1145/2957265.2962659","url":null,"abstract":"Efficient user interfaces help their users to accomplish their tasks by adapting to their current needs. The processes involved before and during interface adaptation are complex and crucial for the success and acceptance of a user interface. In this work we identify these processes and propose a framework that demonstrates the benefits that can be gained by utilizing the user's visual attention in the context of adaptive cartographic maps.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121168561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classifying weight training workouts with deep convolutional neural networks: a precedent study","authors":"Jaehyun Park","doi":"10.1145/2957265.2961861","DOIUrl":"https://doi.org/10.1145/2957265.2961861","url":null,"abstract":"In recent years, deep learning algorithms have been widely used in both academic research and practical applications. This study uses a deep convolutional neural network to analyze and predict physical movements. We evaluated the effectiveness of our proposed network by recruiting a professional fitness trainer and let the trainer wear a smart watch equipped with an accelerometer capable of assessing physical movement. The results confirmed the ability of the network to correctly predict the bench press, dips, squat, deadlift, and military press with an accuracy rate of 92.8%. This preliminary study has several limitations such as a low sample size and the lack of a specified network layer. In subsequent studies we plan to address these limitations by extending our investigation to include the analysis of diverse movements.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126840725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"G3: bootstrapping stroke gestures design with synthetic samples and built-in recognizers","authors":"Daniel Martín-Albo, Luis A. Leiva","doi":"10.1145/2957265.2961833","DOIUrl":"https://doi.org/10.1145/2957265.2961833","url":null,"abstract":"Stroke gestures are becoming increasingly important with the ongoing success of touchscreen-capable devices. However, training a high-quality gesture recognizer requires providing a large number of examples to enable good performance on unseen, future data. Furthermore, recruiting participants, data collection and labeling, etc. necessary for achieving this goal are usually time-consuming and expensive. In response to this need, we introduce G3, a mobile-first web application for bootstrapping unistroke, multistroke, or multitouch gestures. The user only has to provide a gesture example once, and G3 will create a kinematic model of that gesture. Then, by introducing local and global perturbations to the model parameters, G3 will generate any number of synthetic human-like samples. In addition, the user can get a gesture recognizer together with the synthesized data. As such, the outcome of G3 can be directly incorporated into production-ready applications.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126845629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"mSTROKE: a crowd-powered mobility towards stroke recognition","authors":"Richa Tibrewal, Ankita Singh, M. Bhattacharyya","doi":"10.1145/2957265.2961831","DOIUrl":"https://doi.org/10.1145/2957265.2961831","url":null,"abstract":"We demonstrate a crowd-powered model for the early diagnosis of stroke using a mobile device. The simple approach consists of monitoring the subject's health in three simple steps including the smile test for facial weakness, raising hands test for arm weakness and speech test for slurring of speech. Our demonstrated system shows a performance accuracy of 87.5% over a total number of 40 test cases.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116543982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Watching movies on netflix: investigating the effect of screen size on viewer immersion","authors":"J. Rigby, Duncan P. Brumby, A. Cox, Sandy J. J. Gould","doi":"10.1145/2957265.2961843","DOIUrl":"https://doi.org/10.1145/2957265.2961843","url":null,"abstract":"Film and television content is moving out of the living room and onto mobile devices - viewers are now watching when and where it suits them, on devices of differing sizes. This freedom is convenient, but could lead to differing experiences across devices. Larger screens are often believed to be favourable, e.g. to watch films or sporting events. This is partially supported in the literature, which shows that larger screens lead to greater presence and more intense physiological responses. However, a more broadly-defined measure of experience, such as that of immersion from computer games research, has not been studied. In this study, 19 participants watched content on three different screens and reported their immersion level via questionnaire. Results showed that the 4.5-inch phone screen elicited lower immersion scores when compared to the 13-inch laptop and 30-inch monitor, but there was no difference when comparing the two larger screens. This suggests that very small screens lead to reduced immersion, but after a certain size the effect is less pronounced.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134162251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimal sequential gaze models for inferring walkers' tasks","authors":"C. Rothkopf","doi":"10.1145/2957265.2965015","DOIUrl":"https://doi.org/10.1145/2957265.2965015","url":null,"abstract":"Eye movements in extended sequential behavior are known to reflect task demands much more than low-level feature saliency. However, the more naturalistic the task is the more difficult it becomes to establish what cognitive processes a particular task elicits moment by moment. Here we ask the question, which sequential model is required to capture gaze sequences so that the ongoing task can be inferred reliably. Specifically, we consider eye movements of human subjects navigating a walkway while avoiding obstacles and approaching targets in a virtual environment. We show that Hidden-Markov Models, which have been used extensively in modeling human sequential behavior, can be augmented with few state variables describing the egocentric position of subjects relative to objects in the environment to dramatically increase successful classification of the ongoing task and to generate gaze sequences, that are very close to those observed in human subjects.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128763077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wearable head-mounted 3D tactile display application scenarios","authors":"Oliver Beren Kaul, M. Rohs","doi":"10.1145/2957265.2965022","DOIUrl":"https://doi.org/10.1145/2957265.2965022","url":null,"abstract":"Current generation virtual reality (VR) and augmented reality (AR) head-mounted displays (HMDs) usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. In a previous work, we presented HapticHead, a potentially mobile system utilizing vibration motors distributed in three concentric ellipses around the head to give intuitive haptic guidance hints and to increase immersion for VR and AR applications. The purpose of this paper is to explore potential application scenarios and aesthetic possibilities of the proposed concept in order to create an active discussion amongst workshop participants.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"344 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130263395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploration of cultural heritage information via textual search queries","authors":"L. Ardissono, M. Lucenteforte, Noemi Mauro, Adriano Savoca, Angioletta Voghera, L. Riccia","doi":"10.1145/2957265.2962648","DOIUrl":"https://doi.org/10.1145/2957265.2962648","url":null,"abstract":"Searching information in a Geographical Information System (GIS) usually imposes that users explore precompiled category catalogs and select the types of information they are looking for. Unfortunately, that approach is challenging because it forces people to adhere to a conceptualization of the information space that might be different from their own. In order to address this issue, we propose to support textual search as the basic interaction model, exploiting linguistic information, together with category exploration, for query interpretation and expansion. This paper describes our model and its adoption in the OnToMap Participatory GIS.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133725178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}