{"title":"Speech and Hands-free interaction: myths, challenges, and opportunities","authors":"Cosmin Munteanu, Gerald Penn","doi":"10.1145/3098279.3119919","DOIUrl":"https://doi.org/10.1145/3098279.3119919","url":null,"abstract":"HCI research has for long been dedicated to better and more naturally facilitating information transfer between humans and machines. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities to be understood by machines - despite, and perhaps, because it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering, to linguistic, and to cognitive sciences, have been spent on improving machines' ability to understand speech, the MobileHCI community (and the HCI field at large) has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the unexpected variations in error rates when processing speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating speech and natural language interfaces. As such, the development of interactive speech-based systems is mostly driven by engineering efforts to improve such systems with respect to largely arbitrary performance metrics. Such developments have often been void of any user-centered design principles or consideration for usability or usefulness. The goal of this course is to inform the MobileHCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, as well as to provide an opportunity for researchers and practitioners to learn more about how speech recognition and speech synthesis work, what are their limitations, and how they could be used to enhance current interaction paradigms. Through this, we hope that HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles in designing more usable and useful speech-based interactive systems.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122089485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PeriMR: a prototyping tool for head-mounted peripheral light displays in mixed reality","authors":"Uwe Gruenefeld, Tim Claudius Stratmann, Wilko Heuten, Susanne CJ Boll","doi":"10.1145/3098279.3125439","DOIUrl":"https://doi.org/10.1145/3098279.3125439","url":null,"abstract":"Nowadays, Mixed and Virtual Reality devices suffer from a field of view that is too small compared to human visual perception. Although a larger field of view is useful (e.g., conveying peripheral information or improving situation awareness), technical limitations prevent the extension of the field-of-view. A way to overcome these limitations is to extend the field-of-view with peripheral light displays. However, there are no tools to support the design of peripheral light displays for Mixed or Virtual Reality devices. Therefore, we present our prototyping tool PeriMR that allows researchers to develop new peripheral head-mounted light displays for Mixed and Virtual Reality.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117201133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prototyping sonic interaction for walking","authors":"Nassrin Hajinejad, Barbara Grüter, Licinio Gomes Roque","doi":"10.1145/3098279.3122141","DOIUrl":"https://doi.org/10.1145/3098279.3122141","url":null,"abstract":"Sounds play a substantial role in the experience of movement activities such as walking. Drawing on the movement inducing effects of sound, sonic interaction opens up numerous possibilities to modify the walker's movements and experience. We argue that designing sonic interaction for movement activities demands an experiential awareness of the interplay of sound, body movement and use situation, and, propose a prototyping method to understand possibilities and challenges related to the design of mobile sonic interaction. In this paper, we present a rapid prototyping system that enables non-expert users to design sonic interaction for walking and to experience their design in the real-world context. We discuss the way this prototyping system allows designers to experience how their design ideas unfold in mobile use and affect the walking.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128434304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Language learning on-the-go: opportune moments and design of mobile microlearning sessions","authors":"Tilman Dingler, Dominik Weber, M. Pielot, J. Cooper, Chung-Cheng Chang, N. Henze","doi":"10.1145/3098279.3098565","DOIUrl":"https://doi.org/10.1145/3098279.3098565","url":null,"abstract":"Learning a foreign language is a daunting and time-consuming task. People often lack the time or motivation to sit down and engage with learning content on a regular basis. We present an investigation of microlearning sessions on mobile phones, in which we focus on session triggers, presentation methods, and user context. Therefore, we built an Android app that prompts users to review foreign language vocabulary directly through notifications or through app usage across the day. We present results from a controlled and an in-the-wild study, in which we explore engagement and user context. In-app sessions lasted longer, but notifications added a significant number of \"quick\" learning sessions. 37.6% of sessions were completed in transit, hence learning-on-the-go was well received. Neither the use of boredom as trigger nor the presentation (flashcard and multiple-choice) had a significant effect. We conclude with implications for the design of mobile microlearning applications with context-awareness.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130573573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TapSense: combining self-report patterns and typing characteristics for smartphone based emotion detection","authors":"Surjya Ghosh, Niloy Ganguly, Bivas Mitra, Pradipta De","doi":"10.1145/3098279.3098564","DOIUrl":"https://doi.org/10.1145/3098279.3098564","url":null,"abstract":"Typing based communication applications on smartphones, like WhatsApp, can induce emotional exchanges. The effects of an emotion in one session of communication can persist across sessions. In this work, we attempt automatic emotion detection by jointly modeling the typing characteristics, and the persistence of emotion. Typing characteristics, like speed, number of mistakes, special characters used, are inferred from typing sessions. Self reports recording emotion states after typing sessions capture persistence of emotion. We use this data to train a personalized machine learning model for multi-state emotion classification. We implemented an Android based smartphone application, called TapSense, that records typing related metadata, and uses a carefully designed Experience Sampling Method (ESM) to collect emotion self reports. We are able to classify four emotion states - happy, sad, stressed, and relaxed, with an average accuracy (AUCROC) of 84% for a group of 22 participants who installed and used TapSense for 3 weeks.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"6 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133007536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving software-reduced touchscreen latency","authors":"N. Henze, Sven Mayer, Huy Viet Le, V. Schwind","doi":"10.1145/3098279.3122150","DOIUrl":"https://doi.org/10.1145/3098279.3122150","url":null,"abstract":"The latency of current mobile devices' touchscreens is around 100ms and has widely been explored. Latency down to 2ms is noticeable, and latency as low as 25ms reduces users' performance. Previous work reduced touch latency by extrapolating a finger's movement using an ensemble of shallow neural networks and showed that predicting 33ms into the future increases users' performance. Unfortunately, this prediction has a high error. Predicting beyond 33ms did not increase participants' performance, and the error affected the subjective assessment. We use more recent machine learning techniques to reduce the prediction error. We train LSTM networks and multilayer perceptrons using a large data set and regularization. We show that linear extrapolation causes an 116.7% higher error and the previously proposed ensembles of shallow networks cause a 26.7% higher error compared to the LSTM networks. The trained models, the data used for testing, and the source code is available on GitHub.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123785485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CETA: open, affordable and portable mixed-reality environment for low-cost tablets","authors":"Sebastián Marichal, A. Rosales, Gustavo Sansone, A. Pires, Ewelina Bakala, Fernando González Perilli, J. Blat","doi":"10.1145/3098279.3125435","DOIUrl":"https://doi.org/10.1145/3098279.3125435","url":null,"abstract":"Mixed-reality environments allow to combine tangible interaction with digital feedback, empowering interaction designers to take benefits from both real and virtual worlds. This interaction paradigm is also being applied in classrooms for learning purposes. However, most of the times the devices supporting mixed-reality interaction are neither portable nor affordable, which could be a limitation in the learning context. In this paper we propose CETA, a mixed-reality environment using low-cost Android tablets which tackles portability and costs issues. In addition, CETA is open-source, reproducible and extensible.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116780383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crafting collocated interactions: exploring physical representations of personal data","authors":"Maria Karyda","doi":"10.1145/3098279.3119927","DOIUrl":"https://doi.org/10.1145/3098279.3119927","url":null,"abstract":"This PhD project explores a third wave of research on Mobile Collocated Interactions, which focuses on craft. Strongly inspired by the field of Data Physicalization it aims to explore how would people physically share (physiological) personal data in collocated activities. In achieving that it investigates potential relationships between personal data and meaningful personal objects for individuals. Future steps involve prototyping towards crafting collocated interactions with personal data.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122701610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The UX of IoT: unpacking the internet of things","authors":"Scott Jenson","doi":"10.1145/3098279.3119838","DOIUrl":"https://doi.org/10.1145/3098279.3119838","url":null,"abstract":"When discussing the Internet of Things (IoT), product concepts usually involve overly complex systems with baroque-like setup and confusing behaviors. This workshop will step a bit back from the hype and create a richer, more nuanced way of talking about the IoT. The workshop will start with a structure to the UX of IoT, creating a UX taxonomy and then challenge participants to \"think small\". Special focus will be put on the Physical Web, a lightweight technology that lets any place or device wirelessly broadcast a URL, unlocking very simple and lightweight interactions. Participants will be provoked to think: how can we reduce an IoT concept to the bare minimum? Can we focus on user needs and not be carried away by the technology to create something lightweight and simple? Workshop participants are expected to come prepared with one or two IoT scenarios they would like to work on; then, through a series of exercises, refine one of these down into a much simpler, user-focused design.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133928513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creating community fountains by (re-)designing the digital layer of way-finding pillars","authors":"Katta Spiel, Katharina Werner, Oliver Hödl, Lisa Ehrenstrasser, G. Fitzpatrick","doi":"10.1145/3098279.3122135","DOIUrl":"https://doi.org/10.1145/3098279.3122135","url":null,"abstract":"Way-finding pillars for tourists aid them in navigating an unknown area. The pillars show nearby points of interest, offer information about public transport and provide a scale for the neighbourhood. Through a series of studies with tourists and locals, we establish their different needs. In this space, we developed Mappy, a mobile application which augments and enhances way-finding pillars with an added digital layer. Mappy opens up opportunities for reappropriation of, and engagement with, the pillars. Seeing the pillars beyond their initial use case by involving a diverse range of people let us develop the digital layer and subsequently overall meaning of way-finding pillars further: as \"community fountains\" they engage locals and tourists alike and can provoke encounters between them.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131928288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}