{"title":"Controlling Maximal Voluntary Contraction of the Upper Limb Muscles by Facial Electrical Stimulation","authors":"Arinobu Niijima, T. Isezaki, Ryosuke Aoki, Tomoki Watanabe, Tomohiro Yamada","doi":"10.1145/3173574.3173968","DOIUrl":"https://doi.org/10.1145/3173574.3173968","url":null,"abstract":"In this paper, we propose to use facial electrical stimulation to control maximal voluntary contraction (MVC) of the upper limbs. The method is based on a body mechanism in which the contraction of the masseter muscles enhances MVC of the limb muscles. Facial electrical stimulation is applied to the masseter muscles and the lips. The former is to enhance the MVC by causing involuntary contraction of the masseter muscles, and the latter is to suppress the MVC by interfering with voluntary contraction of the masseter muscles. In a user study, we used electromyography sensors on the upper limbs to evaluate the effects of the facial electrical stimulation on the MVC of the upper limbs. The experimental results show that the MVC was controlled by the facial electrical stimulation. We assume that the proposed method is useful for sports athletes because the MVC is linked to sports performance.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73723545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Considering Agency and Data Granularity in the Design of Visualization Tools","authors":"G. Méndez, Miguel A. Nacenta, Uta Hinrichs","doi":"10.1145/3173574.3174212","DOIUrl":"https://doi.org/10.1145/3173574.3174212","url":null,"abstract":"Previous research has identified trade-offs when it comes to designing visualization tools. While constructive \"bottom-up\" tools promote a hands-on, user-driven design process that enables a deep understanding and control of the visual mapping, automated tools are more efficient and allow people to rapidly explore complex alternative designs, often at the cost of transparency. We investigate how to design visualization tools that support a user-driven, transparent design process while enabling efficiency and automation, through a series of design workshops that looked at how both visualization experts and novices approach this problem. Participants produced a variety of solutions that range from example-based approaches expanding constructive visualization to solutions in which the visualization tool infers solutions on behalf of the designer, e.g., based on data attributes. On a higher level, these findings highlight agency and granularity as dimensions that can guide the design of visualization tools in this space.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74882520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Looks Can Be Deceiving: Using Gaze Visualisation to Predict and Mislead Opponents in Strategic Gameplay","authors":"Joshua Newn, Fraser Allison, Eduardo Velloso, F. Vetere","doi":"10.1145/3173574.3173835","DOIUrl":"https://doi.org/10.1145/3173574.3173835","url":null,"abstract":"In competitive co-located gameplay, players use their opponents' gaze to make predictions about their plans while simultaneously managing their own gaze to avoid giving away their plans. This socially competitive dimension is lacking in most online games, where players are out of sight of each other. We conducted a lab study using a strategic online game, finding that (1) players are better at discerning their opponent's plans when shown a live visualisation of the opponent's gaze, and (2) players who are aware that their gaze is tracked will manipulate their gaze to keep their intentions hidden. We describe the strategies that players employed, to various degrees of success, to deceive their opponent through their gaze behaviour. This gaze-based deception adds an effortful and challenging aspect to the competition. Lastly, we discuss the various implications of our findings and their applicability to future game design.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72939611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Far Is Up?: Bringing the Counterpointed Triad Technique to Digital Storybook Apps","authors":"B. Sargeant, F. Mueller","doi":"10.1145/3173574.3174093","DOIUrl":"https://doi.org/10.1145/3173574.3174093","url":null,"abstract":"Interactive storybooks, such as those available on the iPad, offer multiple ways to convey a story, mostly through visual, textual and audio content. How to effectively deliver this combination of content so that it supports positive social and educational development in pre-literate children is relatively underexplored. In order to address this issue we introduce the \"Counterpointed Triad Technique\". Drawing from traditional literary theory we design visual, textual and audio content that each conveys different aspects of a story. We explore the use of this technique through a storybook we designed ourselves called \"How Far Is Up?\". A study involving 26 kindergarten children shows that \"How Far Is Up?\" can engage pre-literate children while they are reading alone and also when they are reading with an adult. Based on our craft knowledge and study findings, we present a set of design strategies that aim to provide designers with practical guidance on how to create engaging interactive digital storybooks.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78202714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In the Eye of the Student: An Intangible Cultural Heritage Experience, with a Human-Computer Interaction Twist","authors":"Danilo Giglitto, Shaimaa Y. Lazem, Anne Preston","doi":"10.1145/3173574.3173864","DOIUrl":"https://doi.org/10.1145/3173574.3173864","url":null,"abstract":"We critically engage with CHI communities emerging outside the global North (ArabHCI and AfriCHI) to explore how participation is configured and enacted within socio-cultural and political contexts fundamentally different from Western societies. We contribute to recent discussions about postcolonialism and decolonization of HCI by focusing on non-Western future technology designers. Our lens was a course designed to engage Egyptian students with a local yet culturally-distant community to design applications for documenting intangible heritage. Through action research, the instructors reflect on selected students' activities. Despite deploying a flexible learning curriculum that encourages greater autonomy, the students perceived themselves with less agency than other institutional stakeholders involved in the project. Further, some of them struggled to empathize with the community, as the impact of the cultural differences on configuring participation was profound. We discuss the implications of the findings for HCI education and for international cross-cultural design projects.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77274920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Scanning Brains to Reading Minds: Talking to Engineers about Brain-Computer Interface","authors":"Nick Merrill, J. Chuang","doi":"10.1145/3173574.3173897","DOIUrl":"https://doi.org/10.1145/3173574.3173897","url":null,"abstract":"We presented software engineers in the San Francisco Bay Area with a working brain-computer interface (BCI) to surface the narratives and anxieties around these devices among technical practitioners. Despite this group's heterogeneous beliefs about the exact nature of the mind, we find a shared belief that the contents of the mind will someday be \"read\" or \"decoded\" by machines. Our findings help illuminate BCI's imagined futures among engineers. We highlight opportunities for researchers to involve themselves preemptively in this nascent space of intimate biosensing devices, suggesting our findings' relevance to long-term futures of privacy and cybersecurity.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76157552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Full-Body Ownership Illusion Can Change Our Emotion","authors":"Joohee Jun, Myeongul Jung, So-yeon Kim, K. Kim","doi":"10.1145/3173574.3174175","DOIUrl":"https://doi.org/10.1145/3173574.3174175","url":null,"abstract":"Recent advances in technology have allowed users to experience an illusory feeling of full body ownership of a virtual avatar. Such virtual embodiment has the power to elicit perceptual, behavioral or cognitive changes related to oneself, however, its emotional effects have not yet been rigorously examined. To address this issue, we investigated emotional changes as a function of the level of the illusion (Study 1) and whether changes in the facial expression of a virtual avatar can modulate the effects of the illusion (Study 2). The results revealed that stronger illusory feelings of full body ownership were induced in the synchronous condition, and participants reported higher valence in the synchronous condition in both Studies 1 and 2. The results from Study 2 suggested that the facial expression of a virtual avatar can modulate participants' emotions. We discuss the prospects of the development of therapeutic techniques using such illusions to help people with emotion-related symptoms such as depression and social anxiety.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77492429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty Visualization Influences how Humans Aggregate Discrepant Information","authors":"Miriam Greis, Aditi Joshi, Ken Singer, A. Schmidt, Tonja Machulla","doi":"10.1145/3173574.3174079","DOIUrl":"https://doi.org/10.1145/3173574.3174079","url":null,"abstract":"The number of sensors in our surroundings that provide the same information steadily increases. Since sensing is prone to errors, sensors may disagree. For example, a GPS-based tracker on the phone and a sensor on the bike wheel may provide discrepant estimates on traveled distance. This poses a user dilemma, namely how to reconcile the conflicting information into one estimate. We investigated whether visualizing the uncertainty associated with sensor measurements improves the quality of users' inference. We tested four visualizations with increasingly detailed representation of uncertainty. Our study repeatedly presented two sensor measurements with varying degrees of inconsistency to participants who indicated their best guess of the \"true\" value. We found that uncertainty information improves users' estimates, especially if sensors differ largely in their associated variability. Improvements were larger for information-rich visualizations. Based on our findings, we provide an interactive tool to select the optimal visualization for displaying conflicting information.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77609684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Presenting The Accessory Approach: A Start-up's Journey Towards Designing An Engaging Fall Detection Device","authors":"Trine Møller","doi":"10.1145/3173574.3174133","DOIUrl":"https://doi.org/10.1145/3173574.3174133","url":null,"abstract":"This paper explores a design experiment concerning the development of a personalised and engaging wearable fall detection device customised for care home residents. The design experiment focuses on a start-up company's design process, which utilises a new design approach, which I name the accessory approach, to accommodate the cultural fit purposes of a wearer. Influenced by accessory design, which belongs neither to fashion nor to jewellery, the accessory approach is a way of designing wearables that involves both functional and expressive qualities, including the wearer's physical, psychological and social needs. The accessory approach is shown to enable first-hand insight into the wearer's preferences, leading to in-depth knowledge and enhanced iterative processes, which support the design of a customised device. This type of knowledge is important for the HCI community as it brings accessory design disciplines into play when wanting to understand and design for individual needs, creating engaging wearable designs.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79289714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Training Person-Specific Gaze Estimators from User Interactions with Multiple Devices","authors":"Xucong Zhang, Michael Xuelin Huang, Yusuke Sugano, A. Bulling","doi":"10.1145/3173574.3174198","DOIUrl":"https://doi.org/10.1145/3173574.3174198","url":null,"abstract":"Learning-based gaze estimation has significant potential to enable attentive user interfaces and gaze-based interaction on the billions of camera-equipped handheld devices and ambient displays. While training accurate person- and device-independent gaze estimators remains challenging, person-specific training is feasible but requires tedious data collection for each target device. To address these limitations, we present the first method to train person-specific gaze estimators across multiple devices. At the core of our method is a single convolutional neural network with shared feature extraction layers and device-specific branches that we train from face images and corresponding on-screen gaze locations. Detailed evaluations on a new dataset of interactions with five common devices (mobile phone, tablet, laptop, desktop computer, smart TV) and three common applications (mobile game, text editing, media center) demonstrate the significant potential of cross-device training. We further explore training with gaze locations derived from natural interactions, such as mouse or touch input.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81603553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}