Prototyping sonic interaction for walking
Nassrin Hajinejad, Barbara Grüter, Licinio Gomes Roque
MobileHCI '17, 4 September 2017. DOI: https://doi.org/10.1145/3098279.3122141
Abstract: Sounds play a substantial role in the experience of movement activities such as walking. Drawing on the movement-inducing effects of sound, sonic interaction opens up numerous possibilities to modify a walker's movements and experience. We argue that designing sonic interaction for movement activities demands an experiential awareness of the interplay of sound, body movement, and use situation, and we propose a prototyping method to understand possibilities and challenges related to the design of mobile sonic interaction. In this paper, we present a rapid prototyping system that enables non-expert users to design sonic interaction for walking and to experience their design in a real-world context. We discuss how this prototyping system allows designers to experience how their design ideas unfold in mobile use and affect walking.

Language learning on-the-go: opportune moments and design of mobile microlearning sessions
Tilman Dingler, Dominik Weber, M. Pielot, J. Cooper, Chung-Cheng Chang, N. Henze
MobileHCI '17, 4 September 2017. DOI: https://doi.org/10.1145/3098279.3098565
Abstract: Learning a foreign language is a daunting and time-consuming task. People often lack the time or motivation to sit down and engage with learning content on a regular basis. We present an investigation of microlearning sessions on mobile phones, focusing on session triggers, presentation methods, and user context. We built an Android app that prompts users to review foreign-language vocabulary throughout the day, either directly through notifications or during app usage. We present results from a controlled study and an in-the-wild study in which we explored engagement and user context. In-app sessions lasted longer, but notifications added a significant number of "quick" learning sessions. 37.6% of sessions were completed in transit, suggesting that learning on the go was well received. Neither the use of boredom as a trigger nor the presentation method (flashcard vs. multiple choice) had a significant effect. We conclude with implications for the design of context-aware mobile microlearning applications.

TapSense: combining self-report patterns and typing characteristics for smartphone based emotion detection
Surjya Ghosh, Niloy Ganguly, Bivas Mitra, Pradipta De
MobileHCI '17, 4 September 2017. DOI: https://doi.org/10.1145/3098279.3098564
Abstract: Typing-based communication applications on smartphones, like WhatsApp, can induce emotional exchanges, and the effects of an emotion in one session of communication can persist across sessions. In this work, we attempt automatic emotion detection by jointly modeling typing characteristics and the persistence of emotion. Typing characteristics, such as speed, number of mistakes, and special characters used, are inferred from typing sessions. Self-reports recording emotion states after typing sessions capture the persistence of emotion. We use this data to train a personalized machine learning model for multi-state emotion classification. We implemented an Android smartphone application, called TapSense, that records typing-related metadata and uses a carefully designed Experience Sampling Method (ESM) to collect emotion self-reports. We are able to classify four emotion states (happy, sad, stressed, and relaxed) with an average accuracy (AUC-ROC) of 84% for a group of 22 participants who installed and used TapSense for 3 weeks.

{"title":"Improving software-reduced touchscreen latency","authors":"N. Henze, Sven Mayer, Huy Viet Le, V. Schwind","doi":"10.1145/3098279.3122150","DOIUrl":"https://doi.org/10.1145/3098279.3122150","url":null,"abstract":"The latency of current mobile devices' touchscreens is around 100ms and has widely been explored. Latency down to 2ms is noticeable, and latency as low as 25ms reduces users' performance. Previous work reduced touch latency by extrapolating a finger's movement using an ensemble of shallow neural networks and showed that predicting 33ms into the future increases users' performance. Unfortunately, this prediction has a high error. Predicting beyond 33ms did not increase participants' performance, and the error affected the subjective assessment. We use more recent machine learning techniques to reduce the prediction error. We train LSTM networks and multilayer perceptrons using a large data set and regularization. We show that linear extrapolation causes an 116.7% higher error and the previously proposed ensembles of shallow networks cause a 26.7% higher error compared to the LSTM networks. The trained models, the data used for testing, and the source code is available on GitHub.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123785485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CETA: open, affordable and portable mixed-reality environment for low-cost tablets
Sebastián Marichal, A. Rosales, Gustavo Sansone, A. Pires, Ewelina Bakala, Fernando González Perilli, J. Blat
MobileHCI '17, 4 September 2017. DOI: https://doi.org/10.1145/3098279.3125435
Abstract: Mixed-reality environments combine tangible interaction with digital feedback, enabling interaction designers to draw on the benefits of both the real and the virtual world. This interaction paradigm is also being applied in classrooms for learning purposes. However, most of the time the devices supporting mixed-reality interaction are neither portable nor affordable, which can be a limitation in the learning context. In this paper, we propose CETA, a mixed-reality environment based on low-cost Android tablets that tackles portability and cost issues. In addition, CETA is open source, reproducible, and extensible.

{"title":"Crafting collocated interactions: exploring physical representations of personal data","authors":"Maria Karyda","doi":"10.1145/3098279.3119927","DOIUrl":"https://doi.org/10.1145/3098279.3119927","url":null,"abstract":"This PhD project explores a third wave of research on Mobile Collocated Interactions, which focuses on craft. Strongly inspired by the field of Data Physicalization it aims to explore how would people physically share (physiological) personal data in collocated activities. In achieving that it investigates potential relationships between personal data and meaningful personal objects for individuals. Future steps involve prototyping towards crafting collocated interactions with personal data.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122701610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The UX of IoT: unpacking the internet of things","authors":"Scott Jenson","doi":"10.1145/3098279.3119838","DOIUrl":"https://doi.org/10.1145/3098279.3119838","url":null,"abstract":"When discussing the Internet of Things (IoT), product concepts usually involve overly complex systems with baroque-like setup and confusing behaviors. This workshop will step a bit back from the hype and create a richer, more nuanced way of talking about the IoT. The workshop will start with a structure to the UX of IoT, creating a UX taxonomy and then challenge participants to \"think small\". Special focus will be put on the Physical Web, a lightweight technology that lets any place or device wirelessly broadcast a URL, unlocking very simple and lightweight interactions. Participants will be provoked to think: how can we reduce an IoT concept to the bare minimum? Can we focus on user needs and not be carried away by the technology to create something lightweight and simple? Workshop participants are expected to come prepared with one or two IoT scenarios they would like to work on; then, through a series of exercises, refine one of these down into a much simpler, user-focused design.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133928513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creating community fountains by (re-)designing the digital layer of way-finding pillars
Katta Spiel, Katharina Werner, Oliver Hödl, Lisa Ehrenstrasser, G. Fitzpatrick
MobileHCI '17, 4 September 2017. DOI: https://doi.org/10.1145/3098279.3122135
Abstract: Way-finding pillars aid tourists in navigating an unknown area. The pillars show nearby points of interest, offer information about public transport, and provide a scale for the neighbourhood. Through a series of studies with tourists and locals, we establish their different needs. In this space, we developed Mappy, a mobile application that augments and enhances way-finding pillars with an added digital layer. Mappy opens up opportunities for reappropriation of, and engagement with, the pillars. Seeing the pillars beyond their initial use case, by involving a diverse range of people, let us develop the digital layer, and subsequently the overall meaning, of way-finding pillars further: as "community fountains" they engage locals and tourists alike and can provoke encounters between them.

{"title":"Designing a gaze gesture guiding system","authors":"W. Delamare, Teng Han, Pourang Irani","doi":"10.1145/3098279.3098561","DOIUrl":"https://doi.org/10.1145/3098279.3098561","url":null,"abstract":"We propose the concept of a guiding system specifically designed for semaphoric gaze gestures, i.e. gestures defining a vocabulary to trigger commands via the gaze modality. Our design exploration considers fundamental gaze gesture phases: Exploration, Guidance, and Return. A first experiment reveals that Guidance with dynamic elements moving along 2D paths is efficient and resistant to visual complexity. A second experiment reveals that a Rapid Serial Visual Presentation of command names during Exploration allows for more than 30% faster command retrievals than a standard visual search. To resume the task where the guide was triggered, labels moving from the outward extremity of 2D paths toward the guide center leads to efficient and accurate origin retrieval during the Return phase. We evaluate our resulting Gaze Gesture Guiding system, G3, for interacting with distant objects in an office environment using a head-mounted display. Users report positively on their experience with both semaphoric gaze gestures and G3.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115679765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A mobile game system for improving the speech therapy experience","authors":"Jared Duval","doi":"10.1145/3098279.3119925","DOIUrl":"https://doi.org/10.1145/3098279.3119925","url":null,"abstract":"A lack of intrinsic motivation to practice speech is attributed to tedious and repetitive speech curriculums, but mobile games have been widely recognized as a valid motivator for jaded individuals. SpokeIt is an interactive storybook style speech therapy game that intends to turn practicing speech into a motivating and productive experience for individuals with speech impairments as well as provide speech therapists an important diagnostic tool. In this paper, I discuss the novel intellectual contributions SpokeIt can provide such as an offline critical conversational speech recognition system, and the application of therapy curriculums to mobile platforms, I present conducted research, and consider exciting future work and research directions.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115714121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}