Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology: Latest Publications

Walk This Beam: Impact of Different Balance Assistance Strategies and Height Exposure on Performance and Physiological Arousal in VR
Dennis Dietz, Carl Oechsner, Changkun Ou, Francesco Chiossi, F. Sarto, Sven Mayer, A. Butz
DOI: 10.1145/3562939.3567818 | Published: 2022-11-29
Abstract: Dynamic balance is an essential skill for the human upright gait; therefore, regular balance training can improve postural control and reduce the risk of injury. Even slight variations in walking conditions, such as height or ground conditions, can significantly impact walking performance. Virtual reality is a helpful tool for simulating such challenging situations. However, there is no agreement on design strategies for balance training in virtual reality under stressful environmental conditions such as height exposure. We investigate how two different training strategies, imitation learning and gamified learning, can improve dynamic balance control performance across different stress conditions. Moreover, we evaluate the stress response as indexed by peripheral physiological measures of stress, perceived workload, and user experience. Both approaches were tested against a baseline of no instructions and against each other. We show that a learning-by-imitation approach immediately helps dynamic balance control, decreases stress, improves attention focus, and diminishes perceived workload, whereas a gamified approach can leave users overwhelmed by the additional task. Finally, we discuss how our approaches could be adapted for balance training and applied to injury rehabilitation and prevention.
Citations: 6
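The paper measures dynamic balance control performance but the abstract does not define the metric; purely as an illustration of how such a score could be derived from head-tracking data, the following Python sketch (all names and thresholds hypothetical, not the authors' method) computes lateral deviation from a beam laid along the x-axis.

```python
import numpy as np

def lateral_beam_deviation(head_positions, beam_half_width=0.05):
    """Hypothetical balance metric: RMS lateral deviation (m) of the tracked
    head from a beam running along the x-axis, plus the fraction of frames
    spent outside the beam's width. Not the metric used in the paper."""
    pos = np.asarray(head_positions)   # shape (n_frames, 3): x, y, z (y up)
    lateral = pos[:, 2]                # z = sideways offset from the beam line
    rms_dev = float(np.sqrt(np.mean(lateral ** 2)))
    outside = float(np.mean(np.abs(lateral) > beam_half_width))
    return rms_dev, outside

# Example with synthetic tracking data (900 frames of Gaussian sway)
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.03, size=(900, 3))
print(lateral_beam_deviation(frames))
```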
AirHaptics: Vibrotactile Presentation Method using an Airflow from Audio Speakers of Smart Devices
Madoka Ito, Ryota Sakuma, H. Ishizuka, T. Hiraki
DOI: 10.1145/3562939.3565670 | Published: 2022-11-29
Abstract: We perceive vibrotactile stimuli from smart devices such as smartphones when we use various applications. However, the vibrators in these devices can only present frequencies near a specific resonant frequency with enough intensity to be perceived, making it challenging to offer varied vibrotactile stimuli. In this study, we propose a method for realizing vibrotactile presentation across a wide range of frequencies using airflow vibration generated by a built-in audio speaker of a smart device. We implemented a system based on the proposed method using a smartphone and experimentally measured the airflow pressure. We also propose a texture-presentation application using the airflow.
Citations: 0
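The core idea is to drive the device's audio speaker at a chosen tactile frequency so its airflow can be felt. A minimal desktop sketch of that idea, assuming the numpy and sounddevice packages and a 30 Hz target (the paper's actual frequencies and smartphone audio pipeline are not reproduced here):

```python
import numpy as np
import sounddevice as sd  # assumed available; plays audio through the default speaker

def play_tactile_tone(freq_hz=30.0, duration_s=2.0, amplitude=0.8, fs=48000):
    """Generate a low-frequency sine wave and play it through the speaker so
    the resulting airflow/vibration can be felt near the speaker port."""
    t = np.arange(int(duration_s * fs)) / fs
    signal = amplitude * np.sin(2 * np.pi * freq_hz * t)
    # Short fade-in/out to avoid audible clicks at the start and end
    ramp = int(0.02 * fs)
    envelope = np.ones_like(signal)
    envelope[:ramp] = np.linspace(0, 1, ramp)
    envelope[-ramp:] = np.linspace(1, 0, ramp)
    sd.play(signal * envelope, fs)
    sd.wait()

play_tactile_tone(freq_hz=30.0)
```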
Improving Pedestrian Safety around Autonomous Delivery Robots in Real Environment with Augmented Reality
Madoka Inoue, Kensuke Koda, Kelvin Cheng, Toshimasa Yamanaka, Soh Masuko
DOI: 10.1145/3562939.3565644 | Published: 2022-11-29
Abstract: In recent years, the use of autonomous vehicles and autonomous delivery robots (ADRs) has increased. This paper explores how pedestrian safety around moving ADRs can be improved. To reduce pedestrian anxiety, we proposed displaying various kinds of real-time information from the ADR in Augmented Reality (AR). A preliminary experiment was conducted in an outdoor environment where an ADR was running within a 5G network. We found that AR has a positive effect in alleviating user anxiety around the ADR.
Citations: 1
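The paper displays real-time ADR information in AR but does not specify its transport or message format; the sketch below is only one hypothetical way the robot side could publish its state as JSON over UDP for an AR client to render (all field names and addresses are illustrative).

```python
import json
import socket
import time

def send_adr_state(sock, addr, speed_mps, heading_deg, next_action):
    """Hypothetical ADR status message for an AR visualization client."""
    msg = {
        "timestamp": time.time(),
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
        "next_action": next_action,   # e.g. "turning_left", "stopping"
    }
    sock.sendto(json.dumps(msg).encode("utf-8"), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_adr_state(sock, ("127.0.0.1", 9000), speed_mps=1.2, heading_deg=87.5,
               next_action="stopping")
```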
An AI-empowered Cloud Solution towards End-to-End 2D-to-3D Image Conversion for Autostereoscopic 3D Display
Jun Wei Lim, Jin Qi Yeo, Xinxing Xia, F. Guan
DOI: 10.1145/3562939.3565674 | Published: 2022-11-29
Abstract: Autostereoscopic displays allow users to view 3D content on electronic displays without wearing any glasses. However, the content for glasses-free 3D displays needs to be in a 3D format so that novel views can be synthesized. Unfortunately, images and videos today are still normally captured in 2D and cannot be directly used on glasses-free 3D displays. In this paper, we introduce an AI-empowered cloud solution towards end-to-end 2D-to-3D image conversion for autostereoscopic 3D displays, or "CONVAS (3D)" for short. Taking a single 2D image as input, CONVAS (3D) automatically converts it into an image suitable for a target autostereoscopic 3D display. It is implemented on a web-based server so that users can submit conversion tasks and retrieve the results without geographical constraints.
Citations: 0
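CONVAS (3D) exposes the conversion as a web service, but its stack is not described in the abstract; the sketch below is a minimal Flask-style upload endpoint with the AI conversion itself left as a hypothetical convert_to_autostereo() stub (endpoint name and port are assumptions).

```python
import io
from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)

def convert_to_autostereo(image: Image.Image) -> Image.Image:
    """Placeholder for the conversion step (depth estimation, novel-view
    synthesis, interleaving for the target display). Here it simply returns
    the input unchanged."""
    return image

@app.route("/convert", methods=["POST"])
def convert():
    # Client uploads a single 2D image; the server returns the converted result.
    upload = Image.open(request.files["image"].stream).convert("RGB")
    result = convert_to_autostereo(upload)
    buf = io.BytesIO()
    result.save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```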
Investigation of User Performance in Virtual Reality-based Annotation-assisted Remote Robot Control
Thanh Long Vu, Dac Dang Khoa Nguyen, Sheila Sutjipto, Dinh Tung Le, G. Paul
DOI: 10.1145/3562939.3565687 | Published: 2022-11-29
Abstract: This poster investigates the use of point cloud processing algorithms to provide annotations for robotic manipulation tasks completed remotely via Virtual Reality (VR). A VR-based system has been developed that receives and visualizes processed data from real-time RGB-D camera feeds. A real-world robot model has also been developed to provide realistic reactions and control feedback. The targets and the robot model are reconstructed in a VR environment and presented to users in different modalities. The modalities and available information are varied between experimental settings, and the associated task performance is recorded and analyzed. Results accumulated from 192 experiments completed by 8 participants showed that point cloud data is sufficient for completing the task. Additional information, whether an image stream or preliminary processing results presented as annotations, was found to have no significant impact on completion time. However, combining the image stream with colored point cloud visualization greatly enhanced users' accuracy, reducing the number of missed target centers by 40%.
Citations: 0
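The annotations come from point cloud processing, but the specific pipeline is not given in the abstract; one common way to obtain target-center annotations (not necessarily the authors' approach) is to cluster the cloud and mark each cluster centroid, e.g. with Open3D's DBSCAN as sketched here.

```python
import numpy as np
import open3d as o3d  # assumed available

def annotate_target_centers(points, eps=0.05, min_points=20):
    """Cluster an (n, 3) point cloud (meters) and return one centroid per
    cluster as a candidate annotation marker for the VR scene."""
    pts = np.asarray(points, dtype=np.float64)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    labels = np.array(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    centers = []
    for label in set(labels):
        if label == -1:              # -1 marks noise points
            continue
        centers.append(pts[labels == label].mean(axis=0))
    return np.array(centers)
```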
Avatar Voice Morphing to Match Subjective and Objective Self Voice Perception
Hiiro Okano, Keisuke Mizuno, Haruna Miyakawa, K. Zempo
DOI: 10.1145/3562939.3565671 | Published: 2022-11-29
Abstract: We investigated the effect that morphing an avatar's voice away from the user's own voice has on the impression it makes. We also investigated whether the preferred morphing differed between those who liked and those who disliked their own voice. In the experiment, acoustic parameters such as the fundamental frequency, spectral envelope, and aperiodic component were morphed based on acoustic signals recorded by the participants themselves, and the participants' impressions of an avatar speaking with that voice were collected. The results showed that those who liked their voice were most impressed by their original voice, while those who disliked it were more impressed by the morphed voice. This suggests that people who dislike their voice tend to seek their ideal in the avatar's voice.
Citations: 2
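The parameters named in the abstract (fundamental frequency, spectral envelope, aperiodic component) match a WORLD-style vocoder decomposition; the paper does not state its implementation, so the sketch below simply assumes the pyworld and soundfile packages, shifts the F0 by a ratio, and resynthesizes (file names are placeholders).

```python
import numpy as np
import pyworld as pw    # WORLD vocoder bindings, assumed available
import soundfile as sf

def morph_f0(in_wav, out_wav, f0_ratio=1.15):
    """Decompose a recording into F0, spectral envelope, and aperiodicity,
    scale the F0, and resynthesize. Envelope/aperiodicity morphing would be
    added analogously."""
    x, fs = sf.read(in_wav)
    if x.ndim > 1:                       # mix down stereo to mono
        x = x.mean(axis=1)
    x = np.ascontiguousarray(x, dtype=np.float64)
    f0, sp, ap = pw.wav2world(x, fs)             # analysis
    y = pw.synthesize(f0 * f0_ratio, sp, ap, fs) # resynthesis with shifted pitch
    sf.write(out_wav, y, fs)

morph_f0("my_voice.wav", "avatar_voice.wav", f0_ratio=1.15)
```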
Design and Evaluation of Electrotactile Rendering Effects for Finger-Based Interactions in Virtual Reality
Sebastian Vizcay, Panagiotis Kourtesis, F. Argelaguet, C. Pacchierotti, M. Marchal
DOI: 10.1145/3562939.3565634 | Published: 2022-11-29
Abstract: The use of electrotactile feedback in Virtual Reality (VR) has shown promising results for providing tactile information and sensations. While progress has been made in providing custom electrotactile feedback for specific interaction tasks, it remains unclear which modulations and rendering algorithms are preferred in rich interaction scenarios. In this paper, we propose a unified tactile rendering architecture and explore the most promising modulations for rendering finger interactions in VR. Based on a literature review, we designed six electrotactile stimulation patterns/effects (EFXs) striving to render different tactile sensations. In a user study (N=18), we assessed the six EFXs in three diverse finger interactions: 1) tapping on a virtual object; 2) pressing down a virtual button; 3) sliding along a virtual surface. Results showed that the preference for certain EFXs depends on the task at hand. No significant preference was detected for tapping (short, quick contact); EFXs that render dynamic intensities or dynamic spatio-temporal patterns were preferred for pressing (continuous dynamic force); and EFXs that render moving sensations were preferred for sliding (surface exploration). The results show the importance of coherence between the modulation and the interaction being performed, and the study demonstrated the versatility of electrotactile feedback and its efficiency in rendering different haptic information and sensations.
Citations: 3
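The six EFXs themselves are not specified in the abstract; as a generic illustration of the kind of modulation discussed, the sketch below maps finger penetration depth to a normalized stimulation intensity for a "pressing" interaction (all parameter values are hypothetical, not the paper's designs).

```python
def pressing_intensity(penetration_m, max_penetration_m=0.02,
                       i_min=0.2, i_max=1.0):
    """Map how far the virtual finger has pressed into a button (meters) to a
    normalized electrotactile intensity: deeper press -> stronger stimulus,
    clamped to [i_min, i_max]."""
    depth = max(0.0, min(penetration_m, max_penetration_m))
    return i_min + (i_max - i_min) * (depth / max_penetration_m)

# Example: 25%, 50%, and 100% of full button travel
for d in (0.005, 0.010, 0.020):
    print(d, round(pressing_intensity(d), 3))
```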
PlayMeBack - Cognitive Load Measurement using Different Physiological Cues in a VR Game
M. Ahmadi, Huidong Bai, A. Chatburn, Burkhard Wuensche, M. Billinghurst
DOI: 10.1145/3562939.3565648 | Published: 2022-11-29
Abstract: We present a Virtual Reality (VR) game, PlayMeBack, to investigate cognitive load measurement in interactive VR environments using pupil dilation, Galvanic Skin Response (GSR), Electroencephalogram (EEG), and Heart Rate (HR). The user is shown different patterns of tiles lighting up and is asked to replay each pattern by pressing the tiles in the same sequence in which they lit up. Task difficulty depends on the length of the observed pattern (3-6 keys). The task is designed to explore the effect of cognitive load on physiological cues and whether pupil dilation, EEG, GSR, and HR can be used as measures of cognitive load.
Citations: 2
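The core task is a memory-span game: show a sequence of lit tiles, then check the replayed sequence. A minimal sketch of that game loop, independent of the VR rendering and physiological sensing (tile count and function names are assumptions):

```python
import random

def new_pattern(length, n_tiles=9):
    """Random sequence of tile indices; the paper varies length from 3 to 6."""
    return [random.randrange(n_tiles) for _ in range(length)]

def check_replay(pattern, replay):
    """Correct only if every tile is pressed in the order it lit up."""
    return pattern == replay

pattern = new_pattern(length=4)
print("show:", pattern)
print("correct replay accepted:", check_replay(pattern, list(pattern)))
wrong = list(pattern)
wrong[-1] = (wrong[-1] + 1) % 9     # guarantee a mistake on the last tile
print("wrong replay rejected:", not check_replay(pattern, wrong))
```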
Understanding Perspectives for Single- and Multi-Limb Movement Guidance in Virtual 3D Environments
H. Elsayed, Kenneth Kartono, Dominik Schön, Martin Schmitz, Max Mühlhäuser, Martin Weigel
DOI: 10.1145/3562939.3565635 | Published: 2022-11-29
Abstract: Movement guidance in virtual reality has many applications, ranging from physical therapy and assistive systems to sport learning. These movements range from simple single-limb to complex multi-limb movements. While VR supports many perspectives (e.g., first person and third person), it remains unclear how accurately these perspectives communicate different movements. In a user study (N=18), we investigated the influence of perspective, feedback, and movement properties on the accuracy of movement guidance. Participants had an average angle error of 6.2° for single-arm movements, 7.4° for synchronous two-arm movements, and 10.3° for synchronous two-arm and leg movements. Furthermore, the results show that the two variants of third-person perspectives outperform a first-person perspective for movement guidance (19.9% and 24.3% reductions in angle error). Qualitative feedback confirms the quantitative data and shows that users have a clear preference for third-person perspectives. Through our findings, we provide guidance for designers and developers of future VR movement guidance systems.
Citations: 1
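Guidance accuracy is reported as angle errors; a standard way to compute such an error between a guided and a performed limb direction (not necessarily the authors' exact definition) is the angle between the two 3D vectors:

```python
import numpy as np

def angle_error_deg(guided_dir, performed_dir):
    """Angle in degrees between two 3D limb-direction vectors (e.g. shoulder
    to wrist). Vectors are normalized before taking the arccos."""
    a = np.asarray(guided_dir, dtype=float)
    b = np.asarray(performed_dir, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(angle_error_deg([0, 1, 0], [0.1, 1, 0]))  # roughly 5.7 degrees
```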
Leveraging VR Techniques for Efficient Exploration and Interaction in Large and Complex AR Space with Clipped and Small FOV AR Display
Yerin Shin, G. Kim
DOI: 10.1145/3562939.3565613 | Published: 2022-11-29
Abstract: In this paper, we propose taking advantage of a digitally twinned environment to interact more efficiently in a large and complex AR space despite the limited size and clipped FOV of the AR display. Using the digital twin of the target environment, "magical" VR interaction techniques can be applied, visualized and overlaid through the small window, while still maintaining the spatial association with the augmented real world. First, we use amplified movement within the corresponding twinned VR space to help the user search, plan, navigate, and explore efficiently by providing an effectively larger view, and thereby better spatial understanding, of the same AR space with less physical movement. Second, we apply the amplified movement and, in addition, a stretchable arm to interact with relatively large objects (or widely spaced objects) that cannot be seen in their entirety at one time through the small-FOV glasses. Experiments with the proposed methods showed advantages in interaction performance as the scene became more complex and the task more difficult. The work illustrates the concept of, and potential for, XR-based interaction where the user can leverage the advantages of both VR and AR modes of operation.
Citations: 0
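The two techniques applied in the twinned space, amplified movement and a stretchable arm, are both gain mappings from physical to virtual motion; the sketch below shows a constant translation gain and a Go-Go-style nonlinear arm extension (the gains and thresholds are illustrative, not the paper's values).

```python
import numpy as np

def amplified_position(physical_pos, origin, gain=3.0):
    """Scale the user's physical displacement from a calibration origin so a
    small real walk covers a larger span of the twinned VR space."""
    physical_pos, origin = np.asarray(physical_pos), np.asarray(origin)
    return origin + gain * (physical_pos - origin)

def gogo_arm(hand_offset, threshold=0.4, k=8.0):
    """Go-Go-style stretchable arm: within 'threshold' meters of the body the
    mapping is 1:1; beyond it, virtual reach grows quadratically."""
    offset = np.asarray(hand_offset, dtype=float)
    d = np.linalg.norm(offset)
    if d <= threshold or d == 0.0:
        return offset
    virtual_d = d + k * (d - threshold) ** 2
    return offset * (virtual_d / d)

print(amplified_position([1.0, 0.0, 0.5], origin=[0.0, 0.0, 0.0]))  # [3. 0. 1.5]
print(gogo_arm([0.0, 0.0, 0.6]))                                    # [0. 0. 0.92]
```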