{"title":"Instant Difficulty Adjustment: Predicting Success Rate of VR Kendama when Changing the Difficulty Level","authors":"Yusuke Goutsu, T. Inamura","doi":"10.1145/3582700.3583954","DOIUrl":"https://doi.org/10.1145/3582700.3583954","url":null,"abstract":"This paper presents a task difficulty adjustment method that allows the user to reach desired success rate instantly using VR technology. We propose a methodology based on a Gaussian process dynamical model (GPDM) to model the user’s skill from a small number of past performance observations, and predict future performance at a targeted difficulty level under consideration of model uncertainty. As a task to be performed within a VR environment, we focus on Kendama (a cup-and-ball sports game), in which the cup size is changeable to adjust the difficulty level. In the experiment, we evaluated the personalized skill model with participants who performed the VR Kendama. Our results indicate that the GPDM-based approach accurately reflects the users’ skills, and the predicted success rate when changing the difficulty level is close to the actual success rate even with a small number of trials. This instant difficulty adjustment can therefore help users to receive a pleasant user experience.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125503351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effect of Posture on Virtual Walking Experience Using Foot Vibrations","authors":"Junya Nakamura, Y. Ikei, M. Kitazaki","doi":"10.1145/3582700.3583699","DOIUrl":"https://doi.org/10.1145/3582700.3583699","url":null,"abstract":"The virtual walking systems that do not involve physical movement of the legs has the advantage of being able to be experienced while seated or supine. The sensation of virtual walking can be effectively elicited through the combination of optic flow and rhythmic foot vibrations. However, the effects of posture have yet to be fully understood. In light of this, the present study sought to investigate the effects of posture (standing, sitting, and lying supine) on the virtual walking experience utilizing foot vibrations and optic flow. Our hypothesis posited that the synchronization of foot vibrations would augment the walking sensation even in a seated or supine position. Our findings indicate that synchronized foot vibrations produced the sensation of virtual walking in all three postures, with the standing posture eliciting the strongest virtual walking sensation. No significant differences were observed between the sitting and supine postures. 
These results suggest that virtual walking systems utilizing foot vibrations have the potential to provide a certain degree of walking experience even for individuals unable to leave their bed.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122333588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Intuitive Jacket: Creating A Wearable Interface Acknowledging the Role of the Body in Trauma Mental Health","authors":"T. A. Bruce, Linda Lightley, C. Wright, Annessa Rebair, L. Holmquist","doi":"10.1145/3582700.3583703","DOIUrl":"https://doi.org/10.1145/3582700.3583703","url":null,"abstract":"We present a novel, wearable interface as an investigative tool for digital mental healthcare. Immersive environments for therapeutic interventions can potentially involve the whole body of the user in the experience, but often the interaction is through handheld controllers or virtual buttons. Building on suggestions in a previous study from users of an immersive environment for trauma mental healthcare, we developed a new interface, The Intuitive Jacket, to offer control and personalization to the therapeutic process. Our design consists of a physical jacket, allowing users to perform an emotionally powerful interaction: By touching the region of their heart at the end of the session, they “close the door” to the trauma the have been processing. The jacket contains a conductive thread sensor that communicates the garment wearer's gesture to the software via Bluetooth. It was created via a multidisciplinary collaboration between HCI, Fashion Design and Electronics.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128302396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DUMask: A Discrete and Unobtrusive Mask-Based Interface for Facial Gestures","authors":"Arpit Bhatia, Aryan Saini, Isha Kalra, Manideepa Mukherjee, Aman Parnami","doi":"10.1145/3582700.3582726","DOIUrl":"https://doi.org/10.1145/3582700.3582726","url":null,"abstract":"Interactions using the face, not only enable multi-tasking but also enable us to create hands-free applications. Previous works in HCI used sensors attached directly to the person’s face or inside their mouth. However, a mask, which has now become a norm in our everyday life and is socially acceptable, has rarely been used to explore facial interactions. We designed, “DUMask”, an interface that uses face parts covered by a mask to discretely enable 14 (+1 default) interactions. DUMask uses an infrared camera embedded inside an off-the-shelf face mask to recognize the gestures, and we demonstrate the effectiveness of our interface through in-lab studies. We conducted two user studies evaluating the experience of both the wearer and the onlooker, which validated that the interface is indeed inconspicuous and unobtrusive.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131793692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing Interactive Shoes for Tactile Augmented Reality","authors":"Dennis Wittchen, Valentin Martinez-Missir, Sina Mavali, Nihar Sabnis, Courtney N. Reed, P. Strohmeier","doi":"10.1145/3582700.3582728","DOIUrl":"https://doi.org/10.1145/3582700.3582728","url":null,"abstract":"Augmented Footwear has become an increasingly common research area. However, as this is a comparatively new direction in HCI, researchers and designers are not able to build upon common platforms. We discuss the design space of shoes for augmented tactile reality, focussing on physiological and biomechanical factors as well as technical considerations. We present an open source example implementation from this space, intended as an experimental platform for vibrotactile rendering and tactile AR and provide details on experiences that could be evoked with such a system. Anecdotally, the new prototype provided experiences of material properties like compliance, as well as altered perception of their movements and agency. We intend our work to lower the barrier of entry for new researchers and to support the field of tactile rendering in footwear in general by making it easier to compare results between studies.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133785613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affective Umbrella – A Wearable System to Visualize Heart and Electrodermal Activity, towards Emotion Regulation through Somaesthetic Appreciation","authors":"Kanyu Chen, Jiawen Han, Holger Baldauf, Ziyue Wang, Dunya Chen, Akira Kato, Jamie A. Ward, K. Kunze","doi":"10.1145/3582700.3582727","DOIUrl":"https://doi.org/10.1145/3582700.3582727","url":null,"abstract":"In this paper, we introduce Affective Umbrella, a novel system to record, analyze and visualize physiological data in real time via an umbrella handle. We implement a biofeedback loop design in the system that triggers visualization changes to reflect and regulate emotions through somaesthetic appreciation. We report the methodology, processes, and results of data reliability and visual feedback impact on emotions. We evaluated the system using a real-life user study (n=21) in rainy weather at night. The statistical results demonstrate the potential of applying the visualization of biofeedback to regulate emotional arousal with a significantly higher (p=.0022) score, a lower (p=.0277) dominance than baseline from self-reported SAM Scale, and physiological arousal, which was shown to be significantly increased (p<.0001) with biofeedback in terms of pNN50 and a significant difference in terms of RMSSD. There was no significant difference in terms of emotional valence changes from SAM scale. Furthermore, we compared the difference between two biofeedback patterns (mirror and inversion). The mirror effect was with a significantly higher emotional arousal than the inversion effect (p=.0277) from SAM results and was with a significantly lower RMSSD performance than the inversion effect (p<.0001). 
This work demonstrates the potential for capturing physiological data using an umbrella handle and using this data to influence a user’s emotional state via lighting effects.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122092629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ShadowClones: an Interface to Maintain a Multiple Sense of Body-space Coordination in Multiple Visual Perspectives","authors":"Kazuma Takada, N. Kumasaki, T. Froese, Kazuhisa Shibata, Jun Nishida, Shunichi Kasahara","doi":"10.1145/3582700.3582706","DOIUrl":"https://doi.org/10.1145/3582700.3582706","url":null,"abstract":"In this paper, we propose ShadowClones, an interface that supports interactions in which a single user can interact with multiple bodies in multiple spaces. Recent teleoperation technologies have allowed a user controlling multiple objects simultaneously, but at the same time, it also exhibited a significant challenge, which can be attributed to the high cognitive load caused by switching and recogning various spaces/perspectives repeatedly and instantly. To tackle this challenge, by taking advantage of pre-attentive visual cues for users’ simultaneous information processing, we designed and evaluated a new user interface, called Shadow Clones, that projects self-body information in unattended areas for increasing the awareness of body-space relationships and allowing users to seamlessly switch across different visual perspectives from avatars or remote robots. We then explored the proposed approach through a simple visual reaching task with a performance evaluation in terms of task completion time and success rate. The results showed superior performance when compared with a condition that presents no projections of users’ body movements in unattended areas. 
We conclude by discussing possible mechanisms of this enhancement as well as two potential scenarios using the shadow clones approach, including new entertainment content for virtual reality e-sports and multiple robot teleoperation such as in a construction site or a disaster site, without compromising operational performance.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124633203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of Weight Adjustment in Virtual Co-embodiment During Collaborative Training","authors":"Daiki Kodama, Takato Mizuho, Yuji Hatada, Takuji Narumi, M. Hirose","doi":"10.1145/3582700.3582703","DOIUrl":"https://doi.org/10.1145/3582700.3582703","url":null,"abstract":"Acquisition of motor skills plays an essential role in various contexts, such as sports, factory jobs, and nursing. For effective motor skill learning, “virtual co-embodiment” has been proposed as a novel virtual reality (VR) based method in which a virtual avatar is controlled based on the weighted average of the learner’s and teacher’s movements. Using virtual co-embodiment, a learner can learn the motor intention because they can feel a strong sense of agency in the avatar’s movements modified by the teacher. However, after the assistance of the virtual co-embodiment vanishes, there is a performance drop problem; the learner cannot move as they learned, even if they understand the correct movement or motor intention, because the difference in body positions between the co-embodied avatar and the learner requires the latter to move differently after termination of the assistance. One way to match their positions is to increase the weight assigned to the learner’s weight. However, simply assigning the learner a high weight does not allow the teacher to correct the avatar’s movements and convey the correct movement and motor intention. By allowing the teacher a greater influence in the early stages of learning, and decreasing the influence as the learning progresses, it is expected to gradually allow the student to learn to operate independently. Therefore, we propose a method to prevent performance drop by adjusting weights according to the learning performance, thereby maintaining a high learning efficiency and helping advanced learners learn to independently demonstrate their abilities. 
In this study, we experimented with dual task learning to evaluate the automation of movement, which is considered an essential element of motor skills. We compared the performance drop when the virtual co-embodiment assist was terminated with static or adjusted weights based on the performance. Consequently, although the learning efficiency was slightly lower, the use of adjusted weights resulted in a significantly smaller performance drop after the termination of virtual co-embodiment assistance than that after the use of static weight.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127433205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LocatAR: An AR Object Search Assistance System for a Shared Space","authors":"Hiroto Oshimi, Monica Perusquía-Hernández, N. Isoyama, H. Uchiyama, K. Kiyokawa","doi":"10.1145/3582700.3582712","DOIUrl":"https://doi.org/10.1145/3582700.3582712","url":null,"abstract":"Item-finding tasks due to memory lapse are costly activities commonly experienced by many people. However, conventional systems are not suitable for use in a collaborative environment. Therefore, we propose a multi-functional, pre-registration-free, and 3D location-based item management system. The system has two main functions: registration and search. The automatic registration is performed by image-based item movement recognition from the user’s grasping and placing motions. The registered item movement data comprises the item category, and the start and end locations. We ensure privacy protection by storing item movement data without images. Also, we provide a user interaction to refuse to share the items with other users. The search is based on the item list or item location. The location-based search is performed by specifying where the user last saw the item. To optimize and test the performance of the system, we first performed parameter optimization and then conducted a user study investigating the performance of a search task. The parameter optimization performed in the registration system led to the discovery of optimal values that are difficult to reach empirically. The search experiment showed that the proposed system’s search and guidance functions are effective as an assistance system for finding items, both in terms of search time and user experience. Overall, our system demonstrated the potential to be a useful assistance system for managing items in a shared space. 
We further discuss the possibility of further exploiting the limited registered information by treating item location as an identifier of the moved item.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127899487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Coincident Robot: A Non-contact Surrounding Robot Sharing the Coordinate with a Human Inside","authors":"Takafumi Watanabe, Tomoya Sasaki, Zendai Kashino, M. Inami","doi":"10.1145/3582700.3582724","DOIUrl":"https://doi.org/10.1145/3582700.3582724","url":null,"abstract":"The use of wearable robots is gaining traction in the field of human augmentation due to their potential to augment human physical capabilities. However, the design and specification of these robots are often constrained by the physical limitations of the human body. This paper proposes a novel approach called the Human Coincident Robot (HCR), which maintains a fixed positional and rotational relationship with a human without physical contact using a mobile mechanism. We verify the feasibility of this concept through the design and implementation of a two-wheeled mobile robot controlled by sliding mode control. Our experiments with the prototype demonstrate that the implemented system can be used in a situation where a human walks naturally and suggest that the HCR has the potential to overcome the limitations of conventional wearable robots and provide new opportunities for human augmentation.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121700684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}