Proceedings of the Augmented Humans International Conference 2023: Latest Publications

Exoskeleton for the Mind: Exploring Strategies Against Misinformation with a Metacognitive Agent
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582725
Yeongdae Kim, Takane Ueno, Katie Seaborn, Hiroki Oura, Jacqueline Urakami, Yuto Sawa
Abstract: Misinformation is a global problem on modern social media platforms, with few solutions known to be effective. Social media platforms have offered tools to raise awareness of information, but these are closed systems that have not been empirically evaluated. Others have developed novel tools and strategies, but most have been studied out of context using static stimuli, researcher prompts, or low-fidelity prototypes. We offer a new anti-misinformation agent grounded in theories of metacognition that was evaluated within Twitter. We report on a pilot study (n=17) and a multi-part experimental study (n=57, n=49) in which participants experienced three versions of the agent, each deploying a different strategy. We found that no single strategy was superior to the control. We also confirmed the necessity of transparency and clarity about the agent's underlying logic, as well as concerns about repeated exposure to misinformation and lack of user engagement.
Citations: 1
CC-Glasses: Color Communication Support for People with Color Vision Deficiency Using Augmented Reality and Deep Learning
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582707
Zhenyang Zhu, Jiyi Li, Ying Tang, K. Go, M. Toyoura, K. Kashiwagi, I. Fujishiro, Xiaoyang Mao
Abstract: People with color vision deficiency (CVD) can face difficulties when communicating with others, failing to identify target objects referred to by their color names. Most existing studies on CVD compensation have focused on the issue of color contrast loss. Although some approaches can provide color-name clues to users, these techniques either require training or cannot protect users' privacy, i.e., they reveal the fact of having CVD. In this paper, based on augmented reality (AR) and deep learning technologies, we propose a novel system that provides supporting information to users affected by CVD for color communication assistance. A state-of-the-art deep neural network (DNN) model for referring segmentation (RS) is adopted to generate the supporting information, and AR glasses are used to present it. To further improve the performance of the proposed system, a new dataset is constructed based on a novel concept called the Color-Object Noun Pair. The results of evaluation experiments show that the new dataset enhances the performance of the adopted DNN model, and that the proposed system helps users affected by CVD successfully identify target objects by their color names.
Citations: 0
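As a companion to the CC-Glasses abstract above: the system feeds a Color-Object Noun Pair to a referring-segmentation model and shows the result on AR glasses. The minimal sketch below illustrates only that query flow; `ReferringSegmenter` and `highlight_by_color_name` are hypothetical names, since the abstract does not specify the model or the rendering API.

```python
import numpy as np

class ReferringSegmenter:
    """Hypothetical wrapper around a referring-segmentation DNN:
    given an image and a natural-language phrase, return a binary
    mask of the referred object."""

    def segment(self, image: np.ndarray, phrase: str) -> np.ndarray:
        raise NotImplementedError  # plug in the actual RS model here

def highlight_by_color_name(image: np.ndarray, color: str, noun: str,
                            model: ReferringSegmenter) -> np.ndarray:
    """Build a Color-Object Noun Pair query (e.g., 'red cup') and
    return an overlay marking the referred object, so a wearer with
    CVD can locate it without having to perceive the color itself."""
    mask = model.segment(image, f"{color} {noun}")
    overlay = image.copy()
    overlay[mask.astype(bool)] = (255, 255, 0)  # draw the highlight
    return overlay
```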
Dynamic Derm: Body Surface Deformation Display for Real-World Embodied Interactions
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582723
Ryo Murata, Arata Horie, M. Inami
Abstract: The body surface is an essential interface that dynamically reflects states inside and outside the body. To realize computer-mediated embodied interaction that exploits this characteristic of the body surface as a visual display, we propose dynamically intervening in its shape. In this paper, we define the design requirements for a system that deforms the body surface, organize the design space, and build a prototype. Dynamic Derm dynamically deforms clothes by pushing them up from inside, where each module can present two degrees of freedom of translation. As a basic technical evaluation of the system, we investigated its spatial accuracy, the actuator response under load, and the resulting clothes deformation. We also designed several presentation scenarios based on the design space and conducted a qualitative evaluation of the adequacy of their representations.
Citations: 0
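The Dynamic Derm prototype above deforms clothes by pushing them up from inside with an array of modules. The abstract does not describe the control interface, so the following is a purely illustrative sketch: it renders a traveling bump by assigning each module a push height that falls off with distance from the bump center.

```python
import numpy as np

def module_pushes(module_xy: np.ndarray, bump_xy: np.ndarray,
                  max_push_mm: float = 20.0, sigma_mm: float = 30.0) -> np.ndarray:
    """Per-module push-up heights for a bump centered at bump_xy.

    module_xy: (N, 2) module positions on the body surface, in mm.
    Returns (N,) push heights; modules near the bump extend the most.
    """
    d2 = np.sum((module_xy - bump_xy) ** 2, axis=1)
    return max_push_mm * np.exp(-d2 / (2.0 * sigma_mm ** 2))

# Hypothetical 3x3 module grid; a bump travels left to right across it.
grid = np.array([[x, y] for x in (0, 40, 80) for y in (0, 40, 80)], dtype=float)
for t in np.linspace(0, 80, 5):
    heights = module_pushes(grid, np.array([t, 40.0]))
    print(np.round(heights, 1))
```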
Towards Enhancing a Recorded Concert Experience in Virtual Reality by Visualizing the Physiological Data of the Audience
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3583709
Xiaru Meng, Y. He, K. Kunze
Abstract: This work is a first attempt to visualize the social atmosphere of an audience in a VR experience using their recorded physiological states, and to present it to another group of audience members, aiming at a transformative shift in perception from an individual to a collective experience. A virtual environment is built to share the audience's aesthetic feelings and emotions, creating a novel form of non-verbal communication in performance scenarios that aims to enhance audio-visual perception through physiological sensing and emotional experience sharing. The experiment was designed to investigate the effect on affective states of reproducing musical performances in VR based on physiological data.
Citations: 0
Tactile Vectors for Omnidirectional Arm Guidance
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582701
H. Elsayed, Martin Weigel, Johannes Semsch, M. Mühlhäuser, Martin Schmitz
Abstract: We introduce and study two omnidirectional movement guidance techniques that use two vibrotactile actuators to convey a movement direction. The first vibrotactile actuator defines the starting point and the second actuator communicates the endpoint of the direction vector. We investigate two variants of our tactile vectors using phantom sensations for 3D arm motion guidance. The first technique uses two sequential stimuli to communicate the movement vector (Sequential Tactile Vectors). The second technique creates a continuous vibration vector using body-penetrating phantom sensations (Continuous Tactile Vectors). In a user study (N = 16), we compare these two new techniques with state-of-the-art push and pull metaphors. Our findings show that users are 20% more accurate in their movements with sequential tactile vectors.
Citations: 0
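The tactile-vector techniques above rely on phantom sensations rendered between two vibrotactile actuators. A common way to place a phantom point is the energy-preserving amplitude law A1 = A*sqrt(1-beta), A2 = A*sqrt(beta); the sketch below uses it for illustration, though the abstract does not state which actuation model the paper employs.

```python
import numpy as np

def phantom_amplitudes(beta: float, intensity: float = 1.0) -> tuple[float, float]:
    """Energy-preserving amplitude split for a phantom sensation.

    beta: normalized position of the phantom point between actuator 1
          (beta = 0) and actuator 2 (beta = 1).
    intensity: overall perceived vibration intensity.
    """
    if not 0.0 <= beta <= 1.0:
        raise ValueError("beta must lie in [0, 1]")
    a1 = intensity * np.sqrt(1.0 - beta)
    a2 = intensity * np.sqrt(beta)
    return a1, a2

def continuous_tactile_vector(start: float, end: float, steps: int = 50):
    """Sweep the phantom point from `start` to `end` to render a
    continuous movement vector between the two actuators."""
    for beta in np.linspace(start, end, steps):
        yield phantom_amplitudes(beta)

# Example: render a vector from actuator 1 toward actuator 2.
for a1, a2 in continuous_tactile_vector(0.0, 1.0, steps=5):
    print(f"actuator1={a1:.2f}, actuator2={a2:.2f}")
```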
AI Coach: A Motor Skill Training System using Motion Discrepancy Detection
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582710
Chen-Chieh Liao, D. Hwang, Erwin Wu, H. Koike
Abstract: Spatial and temporal cues found in a professional's motion are essential for designing a training system for learning a motor skill. We investigate the potential of using neural networks to learn the spatial and temporal features of advanced players in sports and to detect the fine-grained differences between motions. As a training system, we implement an AI Coach prototype application that finds the differences between two input motions and visualizes a recommended motion for users to correct their form. In a user study, we investigate the effects of the proposed AI Coach and discuss the findings based on quantitative questionnaires and qualitative interviews. The study shows that the proposed system can help users better understand the differences between their motion and the coach's. It also reveals the necessity of coaching beginners in the early learning phases.
Citations: 2
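The AI Coach entry above detects fine-grained discrepancies between a learner's motion and a coach's. The paper uses neural networks for this; as a simpler illustrative baseline, the sketch below aligns two joint-position sequences with dynamic time warping and reports a per-joint discrepancy along the aligned path. All function names here are hypothetical.

```python
import numpy as np

def dtw_align(a: np.ndarray, b: np.ndarray) -> list[tuple[int, int]]:
    """Align two motion sequences (frames x joints x 3) with dynamic
    time warping and return the warping path as (i, j) index pairs."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def per_joint_discrepancy(student: np.ndarray, coach: np.ndarray,
                          path: list[tuple[int, int]]) -> np.ndarray:
    """Mean per-joint distance along the aligned path; large values
    flag the joints whose form differs most from the coach's."""
    diffs = np.stack([np.linalg.norm(student[i] - coach[j], axis=-1)
                      for i, j in path])
    return diffs.mean(axis=0)  # shape: (num_joints,)
```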
GAuze-MIcrosuture-FICATION: Gamification in Microsuture training with real-time feedback
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582704
Yuka Tashiro, Shio Miyafuji, D. Hwang, S. Kiyofuji, Taichi Kin, T. Igarashi, H. Koike
Abstract: Microscopic suturing in neurosurgery is a challenging medical technique that takes time to master. Since skilled surgeons are too busy to spend much time with novice surgeons, novices must train alone on monotonous tasks to acquire the skill. To address this problem, this study proposes a system that incorporates gamification elements, such as scoring and real-time feedback, to improve motivation for training. The system detects the technical factors necessary for microscopic suturing from captured video and calculates a score from these factors. According to neurosurgeons, suturing has three important factors: speed, accuracy, and carefulness. These factors are detected by tracking the instruments and gauze using machine learning and image processing. An experiment was conducted with ten novices using the system. The results showed that the system is easy to use and contributed to increased motivation, according to the User Experience Questionnaire (UEQ) and System Usability Scale (SUS).
Citations: 2
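The microsuture trainer above scores speed, accuracy, and carefulness detected from video. The abstract does not publish the scoring formula, so the sketch below shows one plausible scheme: normalize each raw factor to [0, 1] and take a weighted sum. The weights, ranges, and field names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SutureFactors:
    duration_s: float        # time to complete the suture (speed)
    needle_error_mm: float   # deviation from the target entry point (accuracy)
    gauze_pulls: int         # rough-handling events on the gauze (carefulness)

def normalize(value: float, worst: float, best: float) -> float:
    """Map a raw measurement to [0, 1], where 1 is best."""
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

def suture_score(f: SutureFactors) -> float:
    """Weighted sum of the three factors, scaled to 0-100
    (hypothetical weights and ranges, for illustration only)."""
    speed = normalize(f.duration_s, worst=120.0, best=30.0)
    accuracy = normalize(f.needle_error_mm, worst=5.0, best=0.0)
    carefulness = normalize(f.gauze_pulls, worst=10.0, best=0.0)
    return 100.0 * (0.3 * speed + 0.4 * accuracy + 0.3 * carefulness)

# Example: a reasonably fast, accurate, careful attempt.
print(suture_score(SutureFactors(duration_s=45, needle_error_mm=1.2, gauze_pulls=2)))
```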
Exploring the Design Space of Assistive Augmentation
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582729
Suranga Nanayakkara, M. Inami, F. Mueller, Jochen Huber, Chitralekha Gupta, C. Jouffrais, K. Kunze, Rakesh Patibanda, Samantha W. T. Chan, Moritz Alexander Messerschmidt
Abstract: Assistive Augmentation, the intersection of human-computer interaction, assistive technologies, and human augmentation, was broadly discussed at the CHI '14 workshop and subsequently published as an edited volume in the Springer Cognitive Science and Technology series. The aim of this workshop is to propose a more structured way to design Assistive Augmentations. In addition, we aim to discuss the challenges and opportunities for Assistive Augmentation in light of current trends in research and technology. Participants need to submit a short position paper or interactive system demonstration, which will be peer-reviewed. The selected position papers and demos will kick off a face-to-face discussion at the workshop. Participants will also be invited to extend the workshop discussion into a journal submission to a venue such as Foundations and Trends in Human-Computer Interaction.
Citations: 0
Standing Balance Improved by Electrical Muscle Stimulation to Popliteus Muscles
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582711
Masatoshi Shindo, Arinobu Niijima
Abstract: Fall prevention is extremely important for ensuring the well-being of an aging population. There are currently two main fall prevention strategies: wearing exoskeletons to assist with postural stability and using electrical muscle stimulation (EMS) to train the lower limb muscles. However, the former has issues regarding size and weight, and the latter has no immediate improvement effect. In this paper, we propose a small and lightweight EMS-based system that immediately improves standing balance by applying EMS to the popliteus muscles behind the knees, which unlocks and bends the knees. Bending the knees lowers the center of mass of the human body and stabilizes standing balance against unexpected perturbations. We conducted a user study with 20 participants to evaluate the proposed system and found that postural sway decreased significantly more when EMS was applied within 200 ms of perturbation onset than when no EMS was applied.
Citations: 0
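The balance study above finds that EMS to the popliteus muscles helps when delivered within 200 ms of a perturbation. A minimal event-loop sketch of such a trigger follows, with the sensor read and stimulator call left as stubs; the authors' actual hardware interface is not described in the abstract, so the thresholds and APIs here are assumptions.

```python
import time

ACCEL_THRESHOLD = 2.0   # m/s^2 change treated as perturbation onset (hypothetical)
LATENCY_BUDGET_S = 0.2  # stimulation must fire within 200 ms of onset
PULSE_DURATION_S = 0.1  # hypothetical stimulation duration

def read_trunk_accel() -> float:
    """Hypothetical IMU read; replace with the actual sensor driver."""
    raise NotImplementedError

def trigger_ems(channel: str, duration_s: float) -> None:
    """Hypothetical EMS driver call; replace with the actual stimulator API."""
    raise NotImplementedError

def balance_loop() -> None:
    prev = read_trunk_accel()
    while True:
        t0 = time.monotonic()
        accel = read_trunk_accel()
        if abs(accel - prev) > ACCEL_THRESHOLD:  # perturbation detected
            trigger_ems("popliteus_left", PULSE_DURATION_S)
            trigger_ems("popliteus_right", PULSE_DURATION_S)
            elapsed = time.monotonic() - t0
            assert elapsed < LATENCY_BUDGET_S, "missed the 200 ms window"
            time.sleep(1.0)  # refractory period to avoid re-triggering
        prev = accel
```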
Generation of realistic facial animation of a CG avatar speaking a moraic language
Proceedings of the Augmented Humans International Conference 2023. Pub Date: 2023-03-12. DOI: 10.1145/3582700.3582705
Ryoto Kato, Yusuke Kikuchi, Vibol Yem, Y. Ikei
Abstract: We propose a new method for generating realistic facial animation in real time using face mesh data corresponding to the fifty-six C+V (consonant and vowel) type morae that form the basis of Japanese speech. The method produces facial expressions by weighted addition of fifty-three face meshes, based on mapping the streamed voice to registered morae in real time. Both photogrammetric models and existing off-the-shelf head models can be used as face meshes. Natural speaking faces can be synthesized, from modeling to live animation, in less than two hours. A user study showed that the facial expressions during Japanese speech were more natural than those of popular real-time facial animation methods, namely the English-based Oculus Lipsync and volume-intensity-based animation.
Citations: 0
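The facial-animation entry above synthesizes expressions by weighted addition of fifty-three registered face meshes. Assuming all meshes share vertex topology, that reduces to a standard blendshape sum, sketched below; the speech-to-mora weight mapping is the paper's contribution and is replaced here with hard-coded toy weights.

```python
import numpy as np

def blend_meshes(base: np.ndarray, mora_meshes: np.ndarray,
                 weights: np.ndarray) -> np.ndarray:
    """Weighted addition of mora face meshes as vertex offsets.

    base:        (V, 3) neutral face vertices.
    mora_meshes: (53, V, 3) one registered mesh per mora shape,
                 sharing vertex topology with the base mesh.
    weights:     (53,) blend weights from the speech-to-mora mapping.
    """
    offsets = mora_meshes - base               # per-mora displacement fields
    return base + np.tensordot(weights, offsets, axes=1)

# Hypothetical toy usage: 4 vertices, 53 mora shapes.
V = 4
base = np.zeros((V, 3))
mora_meshes = np.random.default_rng(0).normal(size=(53, V, 3))
weights = np.zeros(53)
weights[10] = 0.7  # e.g., the mora currently detected in the audio stream
weights[11] = 0.3  # blended with the next mora for smooth transitions
print(blend_meshes(base, mora_meshes, weights))
```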