{"title":"A Left-Hand Advantage: Motor Asymmetry in Touchless Input","authors":"Pantea Habibi, Debaleena Chattopadhyay","doi":"10.1145/3290607.3312974","DOIUrl":"https://doi.org/10.1145/3290607.3312974","url":null,"abstract":"Touchless gesture is a common input type when interacting with large displays or virtual and augmented reality applications. In touchless input, users may alternate between hands or use bimanual gestures. But touchless performance in nondominant hands is little explored---even though cognitive science and neuroscience studies show cerebral hemispheric specialization causes performance differences between dominant and nondominant hands in lateralized individuals. Drawing on theories that account for between-hand differences in rapid-aimed movements, we characterize motor asymmetry in touchless input. Results from a controlled study (n = 20, right-handed) show freehand touchless input produces significantly smaller between-hand performance differences than a mouse in pointing and dragging. We briefly discuss the HCI implications of motor asymmetry in an input type.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129836727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Math Graphs for the Visually Impaired: Audio Presentation of Elements of Mathematical Graphs","authors":"Jeongyeon Kim, Yoonah Lee, Inho Seo","doi":"10.1145/3290607.3308452","DOIUrl":"https://doi.org/10.1145/3290607.3308452","url":null,"abstract":"The sense of sight takes a dominating role in learning mathematical graphs. Most visually impaired students drop out of mathematics because necessary content is inaccessible. Sonification and auditory graphs have been the primary methods of representing data through sound. However, the representation of mathematical elements of graphs is still unexplored. The experiments in this paper investigate optimal methods for representing mathematical elements of graphs with sound. The results indicate that the methods of design in this study are effective for describing mathematical elements of graphs, such as axes, quadrants and differentiability. These findings can help visually impaired learners to be more independent, and also facilitate further studies on assistive technology.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129176218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Experience (UX) Research in Games","authors":"L. Nacke, Pejman Mirza-Babaei, Anders Drachen","doi":"10.1145/3290607.3298826","DOIUrl":"https://doi.org/10.1145/3290607.3298826","url":null,"abstract":"This course will allow participants to understand the complexities of games user research methods for user experience research in games. For this, we have put together three course sessions at CHI (80 minutes each) on applications of different user research methods in game evaluation and playtesting exercises to help participants turn player feedback into actionable design recommendations. This course consists of three interactive face-to-face units during CHI 2019.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123932154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NoteStruct","authors":"Ching Liu, Chi-Lan Yang, J. J. Williams, Hao-Chuan Wang","doi":"10.1145/3290607.3312878","DOIUrl":"https://doi.org/10.1145/3290607.3312878","url":null,"abstract":"Note-taking activities in physical classrooms are ubiquitous and have been emerging in online learning. To investigate how to better support online learners to take notes while learning with videos, we compared free-form note-taking with a prototype system, NoteStruct, which prompts learners to perform a series of note-taking activities. NoteStruct enables learners to insert annotations on transcripts of video lectures and then engages learners in reinterpreting and synthesizing their notes after watching a video. In a study with a sample of Mechanical Turk workers (N=80), learners took longer and more extensive notes with NoteStruct, although using NoteStruct versus free-form note-taking did not impact short-term learning outcome. These longer notes were also less likely to include verbatim copied video transcripts, but more likely to include elaboration and interpretation. We demonstrate how NoteStruct influences note-taking during online video learning.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123950148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-in-one: Levitation, Parametric Audio, and Mid-Air Haptic Feedback","authors":"Gözel Shakeri, Euan Freeman, W. Frier, Michele Iodice, Benjamin Long, Orestis Georgiou, Carl Andersson","doi":"10.1145/3290607.3313264","DOIUrl":"https://doi.org/10.1145/3290607.3313264","url":null,"abstract":"Ultrasound enables new types of human-computer interfaces, ranging from auditory and haptic displays to levitation (visual). We demonstrate these capabilities with an ultrasonic phased array that allows users to interactively manipulate levitating objects with mid-air hand gestures whilst also receiving auditory feedback via highly directional parametric audio, and haptic feedback via focused ultrasound onto their bare hands. Therefore, this demo presents the first ever ultrasound rig which conveys information to three different sensory channels and levitates small objects simultaneously.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123978634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Socially-Focused Technologies that Can Help Children with Cancer Feel More Like Children Despite their Disease, Treatment and Environment","authors":"Jillian L. Warren","doi":"10.1145/3290607.3299078","DOIUrl":"https://doi.org/10.1145/3290607.3299078","url":null,"abstract":"This describes the background and motivation for dedicating my PhD to the exploration of socially-focused technologies for childhood cancer patients. Very little work has been done, especially in the field of human and child computer interaction, to explore the ways in which the hospital context in conjunction with the cancer experience impact children's social and emotional well-being during middle childhood (ages 6-12), and in turn how technology could improve their experience. My research seeks to (1) empower children with cancer by providing a platform for them to voice their own experiences with isolation, loneliness, and loss of a normal childhood, as well as how technology may better support their needs, (2) contribute design knowledge about how to support meaningful social interaction and play that is age and 'ability' appropriate, and (3) provide insight for future design and evaluation studies by better understanding constraints/opportunities for socially-focused technologies intended for use in a real world pediatric hospital environment.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123350346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Horror and Fear: Exploring Player Experience Invoked by Emotional Challenge in VR Games","authors":"Xiaolan Peng, Jin Huang, Linghan Li, Chen Gao, Hui Chen, Feng Tian, Hongan Wang","doi":"10.1145/3290607.3312832","DOIUrl":"https://doi.org/10.1145/3290607.3312832","url":null,"abstract":"Digital gameplay experience depends not only on the type of challenge that the game provides, but also on how the challenge is presented. With the introduction of a novel type of emotional challenge and the increasing popularity of virtual reality (VR), there is a need to explore player experience invoked by emotional challenge in VR games. We selected two games that provide emotional challenge and conducted a 24-subject experiment to compare the impact of a VR and a monitor-display version of each game on multiple player experiences. Preliminary results show that many positive emotional experiences were enhanced significantly with VR while negative emotional experiences such as horror and fear were less influenced; participants' perceived immersion and presence were higher when using VR than when using a monitor display. Our finding of VR's expressive capability in emotional experiences may encourage more design and research with regard to emotional challenge in VR games.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121278766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using a Conversational Agent to Facilitate Non-native Speaker's Active Participation in Conversation","authors":"Zixuan Guo, T. Inoue","doi":"10.1145/3290607.3313075","DOIUrl":"https://doi.org/10.1145/3290607.3313075","url":null,"abstract":"When a non-native speaker talks with a native speaker, he/she sometimes finds it hard to take speaking turns due to limited language proficiency. The resulting conversation between a non-native speaker and a native speaker is not always productive. In this paper, we propose a conversational agent to support a non-native speaker in his/her second-language conversation. The agent joins the conversation and intervenes using a simple script based on turn-taking rules to take the agent's turn, then gives the next turn to the non-native speaker to prompt him/her to speak. Evaluation of the proposed agent suggested that it successfully facilitated the non-native speaker's participation in over 30% of the agent's interventions, and significantly increased the frequency of turn-taking.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121351399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"'In the Same Boat': A Game of Mirroring Emotions for Enhancing Social Play","authors":"R. Robinson, Elizabeth Reid, Ansgar E. Depping, R. Mandryk, J. Fey, K. Isbister","doi":"10.1145/3290607.3313268","DOIUrl":"https://doi.org/10.1145/3290607.3313268","url":null,"abstract":"Social closeness is important for an individual's health and well-being, and it is especially difficult to maintain over a distance. Games can help with this, connecting and strengthening relationships or creating new ones by enabling shared playful experiences. The proposed demo is a game we designed called 'In the Same Boat', a two-player game intended to foster social closeness between players over a distance. We leverage the synchronization of both players' physiological data (heart rate, breathing, facial expressions) mapped to an input scheme to control the movement of a canoe down a river.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121454897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CooperationCaptcha","authors":"Marcel Walch, Mark Colley, Michael Weber","doi":"10.1145/3290607.3313022","DOIUrl":"https://doi.org/10.1145/3290607.3313022","url":null,"abstract":"In the emerging field of automated vehicles (AVs), the many recent advancements coincide with different areas of system limitations. The recognition of objects like traffic signs or traffic lights is still challenging, especially under bad weather conditions or when traffic signs are partially occluded. A common approach to deal with system boundaries of AVs is to shift to manual driving, accepting human factor issues like post-automation effects. We present CooperationCaptcha, a system that asks drivers to label unrecognized objects on the fly, and consequently maintain automated driving mode. We implemented two different interaction variants to work with object recognition algorithms of varying sophistication. Our findings suggest that this concept of driver-vehicle cooperation is feasible, provides good usability, and causes little cognitive load. We present insights and considerations for future research and implementations.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121677305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}