{"title":"KissGlass","authors":"R. Li, Juyoung Lee, Woontack Woo, Thad Starner","doi":"10.1145/3384657.3384801","DOIUrl":"https://doi.org/10.1145/3384657.3384801","url":null,"abstract":"Cheek kissing is a common greeting in many countries around the world. Many parameters are involved when performing the kiss, such as which side to begin the kiss on and how many times the kiss is performed. These parameters can be used to infer one's social and physical context. In this paper, we present KissGlass, a system that leverages off-the-shelf smart glasses to recognize different kinds of cheek kissing gestures. Using a dataset we collected with 5 participants performing 10 gestures, our system obtains 83.0% accuracy in 10-fold cross validation and 74.33% accuracy in a leave-one-user-out user independent evaluation.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125657578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards A Wearable for Deep Water Blackout Prevention","authors":"Frederik Wiehr, Andreas Höh, A. Krüger","doi":"10.1145/3384657.3385329","DOIUrl":"https://doi.org/10.1145/3384657.3385329","url":null,"abstract":"Freediving relies on a diver's ability to hold their breath until resurfacing. Many fatal accidents in freediving are caused by a sudden blackout of the diver right before resurfacing. In this work, we propose a wearable prototype for monitoring oxygen saturation underwater and conceptualize an early warning system with regard to the diving depth. Our predictive algorithm estimates the latest point of return at which the diver can still resurface with a sufficient oxygen level to prevent a blackout, and notifies the diver via an acoustic signal.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127647192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Novel Input and Output opportunities using an Implanted Magnet","authors":"P. Strohmeier, Jess McIntosh","doi":"10.1145/3384657.3384785","DOIUrl":"https://doi.org/10.1145/3384657.3384785","url":null,"abstract":"In this case study, we discuss how an implanted magnet can support novel forms of input and output. By measuring the relative position between the magnet and an on-body device, the local position of the device can be used for input. Electromagnetic fields can actuate the magnet to provide output by means of in-vivo haptic feedback. Traditional tracking options would struggle to track the input methods we suggest, and the in-vivo sensations of vibration provided as output differ from the experience of vibrations applied externally - our data suggests that in-vivo vibrations are mediated by different receptors than external vibration. As the magnet can be easily tracked as well as actuated, it provides opportunities for encoding information as material experiences.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129472276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Living Bits: Opportunities and Challenges for Integrating Living Microorganisms in Human-Computer Interaction","authors":"Pat Pataranutaporn, Angela Vujic, D. S. Kong, P. Maes, Misha Sra","doi":"10.1145/3384657.3384783","DOIUrl":"https://doi.org/10.1145/3384657.3384783","url":null,"abstract":"There are trillions of living biological \"computers\" on, inside, and around the human body: microbes. Microbes have the potential to enhance human-computer interaction (HCI) in entirely new ways. Advances in open-source biotechnology have already enabled designers, artists, and engineers to use microbes in redefining wearables, games, musical instruments, robots, and more. \"Living Bits\", inspired by Tangible Bits, is an attempt to think beyond the traditional boundaries that exist between biological cells and computers for integrating microorganisms in HCI. In this work we: 1) outline and inspire possibilities for integrating organic and regenerative living systems in HCI; 2) explore and characterize human-microbe interactions across contexts and scales; 3) provide principles for stimulating discussions, presentations, and brainstorms of microbial interfaces. We aim to make Living Bits accessible to researchers across HCI, synthetic biology, biotechnology, and interaction design to explore the next generation of biological HCI.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132878416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye-based Interaction Using Embedded Optical Sensors on an Eyewear Device for Facial Expression Recognition","authors":"Katsutoshi Masai, K. Kunze, M. Sugimoto","doi":"10.1145/3384657.3384787","DOIUrl":"https://doi.org/10.1145/3384657.3384787","url":null,"abstract":"Non-verbal information is essential to understand intentions and emotions and to facilitate social interaction between humans and between humans and computers. One reliable source of such information is the eyes. We investigated eye-based interaction (recognizing eye gestures or eye movements) using an eyewear device designed for facial expression recognition. The device incorporates 16 low-cost optical sensors. The system allows hands-free interaction in many situations. Using the device, we evaluated three eye-based interactions. First, we evaluated the accuracy of detecting eye gestures with nine participants. The average accuracy of detecting seven different eye gestures is 89.1% with user-dependent training. We used dynamic time warping (DTW) for gesture recognition. Second, we evaluated the accuracy of eye gaze position estimation with five users holding a neutral face. The system showed potential to track the approximate direction of the eyes, with higher accuracy in the vertical (y) than the horizontal (x) position. Finally, we conducted a feasibility study with one user reading jokes while wearing the device. The system was capable of analyzing facial expressions and eye movements in daily contexts.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132155717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EgoSpace","authors":"Yuya Adachi, Haoran Xie, T. Torii, Haopeng Zhang, Ryo Sagisaka","doi":"10.1145/3384657.3385328","DOIUrl":"https://doi.org/10.1145/3384657.3385328","url":null,"abstract":"In this work, we propose a novel wearable device that augments a wide range of the user's egocentric space. To achieve this goal, the proposed device provides bidirectional projection using a head-mounted wearable projector and two dihedral mirrors. The included angle of the mirrors was set to reflect the projected image in front of and behind the user. A prototype system is developed to explore possible applications of the proposed device in different scenarios, such as riding a bike and map navigation.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117006670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ExemPoser","authors":"Katsuhito Sasaki, Keisuke Shiro, J. Rekimoto","doi":"10.1145/3384657.3384788","DOIUrl":"https://doi.org/10.1145/3384657.3384788","url":null,"abstract":"It is important for beginners to imitate poses of experts in various sports; especially in sport climbing, performance depends greatly on the pose that should be taken for given holds. However, it is difficult for beginners to learn the proper poses for all patterns from experts since climbing holds are completely different for each course. Therefore, we propose a system that predicts the pose of experts from the positions of the hands and feet of the climber--the positions of holds used by the climber--using a neural network. In other words, our system simulates what pose experts would take for the holds the climber is now using. The positions of hands and feet are calculated from an image of the climber captured from behind. To allow users to check what pose is ideal in real time during practice, we have adopted a simple and lightweight network structure with little computational delay. We asked experts to compare the poses predicted by our system with the poses of beginners, and we confirmed that the poses predicted by our system were in most cases better than or as good as those of beginners.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121577899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accelerating Skill Acquisition of Two-Handed Drumming using Pneumatic Artificial Muscles","authors":"Takashi Goto, Swagata Das, Katrin Wolf, Pedro Lopes, Y. Kurita, K. Kunze","doi":"10.1145/3384657.3384780","DOIUrl":"https://doi.org/10.1145/3384657.3384780","url":null,"abstract":"While computers excel at augmenting users' cognitive abilities, only recently have we started to utilize their full potential to enhance our physical abilities. More and more wearable force-feedback devices have been developed based on exoskeletons, electrical muscle stimulation (EMS) or pneumatic actuators. The latter, pneumatic artificial muscles, are of particular interest since they strike an interesting balance: lighter than exoskeletons and more precise than EMS. However, the promise that artificial muscles can actually support skill acquisition and user training still lacks empirical validation. In this paper, we unveil how pneumatic artificial muscles impact skill acquisition, using two-handed drumming as an example use case. To understand this, we conducted a user study comparing participants' drumming performance after training with audio alone or with our artificial-muscle setup. Our haptic system comprises four pneumatic muscles and is capable of actuating the user's forearm to drum accurately at up to 80 bpm. We show that pneumatic muscles significantly improve participants' correct recall of drumming patterns when compared to auditory training.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115825364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WristLens","authors":"Hui-Shyong Yeo, Juyoung Lee, Andrea Bianchi, Alejandro Samboy, H. Koike, Woontack Woo, Aaron Quigley","doi":"10.1145/3384657.3384797","DOIUrl":"https://doi.org/10.1145/3384657.3384797","url":null,"abstract":"WristLens is a system for surface interaction from wrist-worn wearable devices such as smartwatches and fitness trackers. It enables eyes-free, single-handed gestures on surfaces, using an optical motion sensor embedded in a wrist-strap. This allows the user to leverage any proximate surface, including their own body, for input and interaction. An experimental study was conducted to measure the performance of gesture interaction on three different body parts. Our results show that directional gestures are accurately recognized but less so for shape gestures. Finally, we explore the interaction design space enabled by WristLens, and demonstrate novel use cases and applications, such as on-body interaction, bimanual interaction, cursor control and 3D measurement.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128371587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wearable Reasoner: Towards Enhanced Human Rationality Through A Wearable Device With An Explainable AI Assistant","authors":"Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, P. Maes","doi":"10.1145/3384657.3384799","DOIUrl":"https://doi.org/10.1145/3384657.3384799","url":null,"abstract":"Human judgments and decisions are prone to errors in reasoning caused by factors such as personal biases and external misinformation. We explore the possibility of enhanced reasoning by implementing a wearable AI system as a human symbiotic counterpart. We present \"Wearable Reasoner\", a proof-of-concept wearable system capable of analyzing whether an argument is stated with supporting evidence. We explore the impact of argumentation mining and the explainability of the AI feedback on the user through an experimental study of verbal statement evaluation tasks. The results demonstrate that the device with explainable feedback is effective in enhancing rationality by helping users differentiate between statements supported by evidence and those without. When assisted by an AI system with explainable feedback, users consider claims supported by evidence significantly more reasonable, and agree with them more, than claims without evidence. Qualitative interviews reveal users' internal processes of reflection and integration of the new information into their judgment and decision making, emphasizing improved evaluation of presented arguments.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133597177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}