{"title":"A smartphone prototype for touch interaction on the whole device surface","authors":"Huy Viet Le, Sven Mayer, Patrick Bader, N. Henze","doi":"10.1145/3098279.3122143","DOIUrl":"https://doi.org/10.1145/3098279.3122143","url":null,"abstract":"Previous research proposed a wide range of interaction methods and use cases based on the previously unused back side and edge of a smartphone. Common approaches to implementing Back-of-Device (BoD) interaction include attaching two smartphones back to back and building a prototype completely from scratch. Changes in the device's form factor can influence hand grip and input performance as shown in previous work. Further, the lack of an established operating system and SDK requires more effort to implement novel interaction methods. In this work, we present a smartphone prototype that runs Android and has a form factor nearly identical to an off-the-shelf smartphone. It further provides capacitive images of the hand holding the device for use cases such as grip-pattern recognition. We describe technical details and share source files so that others can re-build our prototype. We evaluated the prototype with 8 participants to demonstrate the data that can be retrieved for an exemplary grip classification.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131287902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Roman-txt: forms and functions of roman urdu texting","authors":"A. Bilal, A. Rextin, Ahmad Kakakhel, Mehwish Nasim","doi":"10.1145/3098279.3098552","DOIUrl":"https://doi.org/10.1145/3098279.3098552","url":null,"abstract":"In this paper, we present a user study conducted with students of a local university in Pakistan, in which we collected a corpus of Roman Urdu text messages. We were interested in the forms and functions of Roman Urdu text messages. To this end, we collected a mobile phone usage dataset consisting of 116 users and 346,455 text messages. Roman Urdu is the most widely adopted style of writing text messages in Pakistan. Our user study leads to interesting results: for instance, we were able to quantitatively show that a number of words are written using more than one spelling; most participants of our study were not comfortable in English and hence write their text messages in Roman Urdu; and the choice of language sometimes varies according to whom the message is being sent. Moreover, we found that many young students send text messages (SMS) of an intimate nature.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131537977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fam-On: family shared time tracker to improve their emotional bond","authors":"Gwang-Hyeon Yeom, Garam Lee, Dayoung Jeong, J. Rhee, Jun-Dong Cho","doi":"10.1145/3098279.3122149","DOIUrl":"https://doi.org/10.1145/3098279.3122149","url":null,"abstract":"As the number of dual-income households increases, the time parents spend with their children has been decreasing. To address this issue, we designed the Fam-On platform. The platform aims to increase the quantity of parenting time by making parents aware of its lack through time measurement. By showing the amount of parenting time, it also tries to reduce the imbalance of parenting time between fathers and mothers. To this end, we built physical wearable watch devices to promote children's interest and provide direct feedback to the family. We also tried to improve the family bond through shared family time, providing content to share among family members via an application that includes gamification elements. We completed a prototype of our platform and conducted a pilot test with one child. We obtained the results we intended and identified issues that need further improvement.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132711453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Camera-based mobile electronic travel aids support for cognitive mapping of unknown spaces","authors":"Limin Zeng, Markus Simros, G. Weber","doi":"10.1145/3098279.3098563","DOIUrl":"https://doi.org/10.1145/3098279.3098563","url":null,"abstract":"Blind and visually impaired people often use an \"object-to-object\" strategy to explore unknown spaces through physical contact via their canes or bodies. Camera-based mobile electronic travel aids (ETAs) not only offer a larger work range than a white cane while detecting obstacles, but also enable users to recognise objects of interest without physical contact. In this paper, we conducted a case study with seven visually impaired participants, to investigate how they use a depth-sensing camera-based ETA for exploring unknown spaces, and how they reconstruct cognitive mappings of surrounding objects. We found that camera-based mobile ETAs would assist visually impaired users to explore the surrounding environment effectively and without physical contact. Furthermore, our original study indicates ETA users change their strategies for exploring the surrounding environment from using an \"object-to-object\" approach to an \"observation-point to observation-point\" strategy. Participants improved their cognitive mappings of the surrounding environment compared to white cane users.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123373922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MobiLearn go: mobile microlearning as an active, location-aware game","authors":"Sonya Cates, D. Barron, Patrick Ruddiman","doi":"10.1145/3098279.3122146","DOIUrl":"https://doi.org/10.1145/3098279.3122146","url":null,"abstract":"Mobile technologies hold great potential to make studying both more effective and more enjoyable. In this work we present a mobile, microlearning application. Our system is designed with two goals: be flexible enough to support learning in any subject and encourage frequent short study sessions in a variety of contexts. We discuss the use of our application to assess the feasibility of microlearning for non-language learning and the relationship between the physical location of study sessions and information retention.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122867845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmenting human interaction capabilities with proximity, natural gestures, and eye gaze","authors":"Mihai Bâce","doi":"10.1145/3098279.3119924","DOIUrl":"https://doi.org/10.1145/3098279.3119924","url":null,"abstract":"Nowadays, humans are surrounded by many complex computer systems. When people interact among each other, they use multiple modalities including voice, body posture, hand gestures, facial expressions, or eye gaze. Currently, computers can only understand a small subset of these modalities, but such cues can be captured by an increasing number of wearable devices. This research aims to improve traditional human-human and human-machine interaction by augmenting humans with wearable technology and developing novel user interfaces. More specifically, (i) we investigate and develop systems that enable a group of people in close proximity to interact using in-air hand gestures and facilitate effortless information sharing. Additionally, we focus on (ii) eye gaze which can further enrich the interaction between humans and cyber-physical systems.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120993091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Slate master: a tangible Braille slate tutor for mobile devices","authors":"Garam Lee, Luis Cavazos Quero, J. Yang, Hyunhee Jung, Jooyoung Son, Jun-Dong Cho","doi":"10.1145/3098279.3122151","DOIUrl":"https://doi.org/10.1145/3098279.3122151","url":null,"abstract":"The development of information technology has benefited education through the integration of technologies and didactic tools that ease and enhance the education experience in and outside of the classroom. A survey in several welfare centers revealed that the visually impaired community has yet to benefit from this technology integration. In this work, we introduce Slate Master, a mobile didactic tool for the visually impaired designed to ease learning how to use the Braille slate. The Braille slate is a tool used by the visually impaired to manually write Braille-encoded text on paper. Slate Master is composed of a Braille tutor mobile application and a custom input interface that mimics the use of the Braille slate. We also present the insights obtained from a preliminary study performed with six Braille education experts that led to the design and development of Slate Master.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122723890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transition animations support orientation in mobile interfaces without increased user effort","authors":"Jonas F. Kraft, J. Hurtienne","doi":"10.1145/3098279.3098566","DOIUrl":"https://doi.org/10.1145/3098279.3098566","url":null,"abstract":"Some animations in mobile user interfaces aim at supporting user orientation by facilitating users to build a mental model of the UI's structure. Possible drawbacks are that animations are time-consuming and that complex and distracting animations may increase users' mental workload. These effects of orientation animations are investigated in an empirical study. Participants either used an animated or a non-animated version of a mobile movie recommender app. The results imply that animations can support users in building more accurate mental models of the app's structure and enhance gesture-based interaction. No additional costs in terms of time or mental workload were incurred when using the animations. Thus, lightweight orientation animations can have large potential benefits at little cost.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121930584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The car as an environment for mobile devices","authors":"Bastian Pfleging, A. Kun, N. Broy","doi":"10.1145/3098279.3119916","DOIUrl":"https://doi.org/10.1145/3098279.3119916","url":null,"abstract":"The objective of this tutorial is to provide MobileHCI newcomers to the domain of automotive user interfaces (AutomotiveUI) with an introduction and overview of the field. The tutorial will introduce the specifics and challenges of in-vehicle user interfaces that set this field apart from others. With a clear focus on the integration of mobile devices into the car, we will provide an overview of the specific requirements of AutomotiveUI, discuss the design of such interfaces, also with regard to standards and guidelines. We further outline how to evaluate interfaces in the car, discuss the challenges with upcoming automated driving and present trends and challenges in this domain.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124104555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using frame of mind: documenting reminiscence through unstructured digital picture interaction","authors":"Benett Axtell, Cosmin Munteanu","doi":"10.1145/3098279.3125438","DOIUrl":"https://doi.org/10.1145/3098279.3125438","url":null,"abstract":"Mobile technologies have made family photo collections extremely portable. People can now carry all their pictures with them wherever they go and show them to others in any setting with smartphones or tablets. However, current options for portable photo viewing are not intended for in-person sharing and reminiscence. Frame of Mind presents a new way to interact with digital pictures on a touch screen that encourages storytelling through its free-flowing interaction using the metaphor of looking at pictures on a table top. This allows family reminiscence to be lightweight, portable, and more accessible by supporting photo viewing on tablets that can have access to complete picture collections. So Frame of Mind moves towards digital tools that support our current photo viewing and sharing activities.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128912012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}