Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology — Latest Publications

Automated Blendshape Personalization for Faithful Face Animations Using Commodity Smartphones
Timo Menzel, M. Botsch, Marc Erich Latoschik
DOI: 10.1145/3562939.3565622 | Published: 2022-11-29
Abstract: Digital reconstruction of humans has various interesting use-cases. Animated virtual humans, avatars and agents alike, are the central entities in virtual embodied human-computer and human-human encounters in social XR. Here, a faithful reconstruction of facial expressions becomes paramount due to their prominent role in non-verbal behavior and social interaction. Current XR platforms, like Unity 3D or the Unreal Engine, integrate recent smartphone technologies to animate faces of virtual humans by facial motion capturing. Using the same technology, this article presents an optimization-based approach to generate personalized blendshapes as animation targets for facial expressions. The proposed method combines a position-based optimization with a seamless partial deformation transfer, necessary for a faithful reconstruction. Our method is fully automated and considerably outperforms existing solutions based on example-based facial rigging or deformation transfer, and overall results in a much lower reconstruction error. It also neatly integrates with recent smartphone-based reconstruction pipelines for mesh generation and automated rigging, further paving the way to a widespread application of human-like and personalized avatars and agents in various use-cases.
Citations: 1
Assessment of Instructor’s Capacity in One-to-Many AR Remote Instruction Giving
Mai Otsuki, Tzu-Yang Wang, H. Kuzuoka
DOI: 10.1145/3562939.3565631 | Published: 2022-11-29
Abstract: In this study, we focused on one-to-many remote collaboration, which requires more mental resources from the remote instructor than the one-to-one case since it is "multitasking". The main contribution of our study is that we assessed the instructor’s capacity in one-to-many AR remote instruction giving both subjectively and objectively. We compared the remote instructor’s workload while interacting with different numbers of local workers, assuming tasks at an industrial site. The results showed that the instructors perceived a stronger workload and the communication quality became lower when interacting with multiple local workers. Based on the results, we discussed how to support the remote instructor in one-to-many AR remote collaboration.
Citations: 1
Sign Language in Immersive VR: Design, Development, and Evaluation of a Testbed Prototype
Elena Dzardanova, Vlasios Kasapakis, S. Vosinakis, Konstantina Psarrou
DOI: 10.1145/3562939.3565676 | Published: 2022-11-29
Abstract: Immersive Virtual Reality (IVR) systems support several modalities such as body, finger, eye, and facial-expression tracking; thus they can support sign-language-based communication. The combined utilization of tracking technologies requires careful evaluation to ensure high-fidelity transference of body posture, gestures, and facial expressions in real-time. This paper presents the design, development and evaluation of an IVR system utilizing state-of-the-art tracking options. The system is evaluated by certified sign language teachers to detect usability issues and examine an appropriate methodology for large-scale follow-up evaluation by users fluent in sign language.
Citations: 1
Investigating the Perceived Realism of the Other User’s Look-Alike Avatars
Aisha Frampton-Clerk, Oyewole Oyekoya
DOI: 10.1145/3562939.3565636 | Published: 2022-11-29
Abstract: There are outstanding questions regarding the fidelity of realistic look-alike avatars that show that there is still substantial development to be done, especially as the virtual world plays a more vital role in our education, work and recreation. The use of look-alike avatars could completely change how we interact virtually. This paper investigates which features of other people’s look-alike avatars influence our perceived realism. Four levels of avatar representations were assessed in this pilot study: a static avatar, a static avatar with lip sync corresponding to an audio recording, full face animation with audio, and a full body animation. Results show that full-face and body animations are very important in increasing the perceived realism of avatars. More importantly, participants found the lip sync animation more unsettling (uncanny valley effect) than any of the other animations. The results have implications for the perception of other people’s look-alike avatars in collaborative virtual environments.
Citations: 2
Using Virtual Reality Food Environments to Study Individual Food Consumer Behavior in an Urban Food Environment
Talia Attar, Oyewole Oyekoya, M. Horlyck-Romanovsky
DOI: 10.1145/3562939.3565685 | Published: 2022-11-29
Abstract: The objective of this research was to explore whether virtual reality can be used to study individual food consumer decision-making and behavior through a public health lens by developing a simulation of an urban food environment that included a street-level scene and three prototypical stores. Twelve participants completed the simulation and a survey. Preliminary results showed that 72.7% of participants bought food from the green grocer, 18.2% from the fast food store, and 9.1% from the supermarket. The mean presence score was 38.9 out of 49 and the mean usability score was 85.9 out of 100. This experiment demonstrates that virtual reality should be further considered as a tool for studying food consumer behavior within a food environment.
Citations: 0
NeARportation: A Remote Real-time Neural Rendering Framework
Yuichi Hiroi, Yuta Itoh, J. Rekimoto
DOI: 10.1145/3562939.3565616 | Published: 2022-10-22
Abstract: While presenting a photorealistic appearance plays a major role in immersion in an Augmented Virtuality environment, displaying that of real objects remains a challenge. Recent developments in photogrammetry have facilitated the incorporation of real objects into virtual space. However, reproducing complex appearances, such as subsurface scattering and transparency, still requires a dedicated environment for measurement and involves a trade-off between rendering quality and frame rate. Our NeARportation framework combines server–client bidirectional communication and neural rendering to resolve these trade-offs. Neural rendering on the server receives the client’s head posture and generates a novel-view image with realistic appearance reproduction that is streamed onto the client’s display. By applying our framework to a stereoscopic display, we confirm that it can display a high-fidelity appearance on full-HD stereo videos at 35-40 frames per second (fps) according to the user’s head motion.
Citations: 1
Size Does Matter: An Experimental Study of Anxiety in Virtual Reality
Junyi Shen, I. Kitahara, Shinichi Koyama, Qiaoge Li
DOI: 10.1145/3562939.3565683 | Published: 2022-10-13
Abstract: The emotional response of users induced by VR scenarios has become a topic of interest; however, whether changing the size of objects in VR scenes induces different levels of anxiety remains a question to be studied. In this study, we conducted an experiment to initially reveal how the size of a large object in a VR environment affects changes in participants’ (N = 38) anxiety level and heart rate. To holistically quantify the size of large objects in the VR visual field, we used the omnidirectional field of view occupancy (OFVO) criterion for the first time to represent the dimension of the object in the participant’s entire field of view. The results showed that the participants’ heart rate and anxiety while viewing the large objects were positively and significantly correlated with OFVO. This study reveals that an increase in object size in VR environments is accompanied by a higher degree of user anxiety.
Citations: 0
3D Reconstruction of Sculptures from Single Images via Unsupervised Domain Adaptation on Implicit Models
Ziyi Chang, G. Koulieris, Hubert P. H. Shum
DOI: 10.1145/3562939.3565632 | Published: 2022-10-09
Abstract: Acquiring the virtual equivalent of exhibits, such as sculptures, in virtual reality (VR) museums, can be labour-intensive and sometimes infeasible. Deep learning based 3D reconstruction approaches allow us to recover 3D shapes from 2D observations, among which single-view-based approaches can reduce the need for human intervention and specialised equipment in acquiring 3D sculptures for VR museums. However, there exist two challenges when attempting to use the well-researched human reconstruction methods: limited data availability and domain shift. Considering sculptures are usually related to humans, we propose our unsupervised 3D domain adaptation method for adapting a single-view 3D implicit reconstruction model from the source (real-world humans) to the target (sculptures) domain. We have compared the generated shapes with other methods and conducted ablation studies as well as a user study to demonstrate the effectiveness of our adaptation method. We also deploy our results in a VR application.
Citations: 2
PORTAL: Portal Widget for Remote Target Acquisition and Control in Immersive Virtual Environments
Dongyun Han, Donghoon Kim, Isaac Cho
DOI: 10.1145/3562939.3565639 | Published: 2022-10-01
Abstract: This paper introduces PORTAL (POrtal widget for Remote Target Acquisition and controL), which allows the user to interact with out-of-reach objects in a virtual environment. We describe the PORTAL interaction technique for placing a portal widget and interacting with target objects through the portal. We conduct two formal user studies to evaluate PORTAL’s selection and manipulation functionalities. The results show that PORTAL supports participants in interacting with remote objects successfully and precisely. Following that, we discuss its potential and limitations, and future work.
Citations: 2
Dynamic X-Ray Vision in Mixed Reality
Hung-Jui Guo, J. Bakdash, L. Marusich, B. Prabhakaran
DOI: 10.1145/3562939.3565675 | Published: 2022-09-15
Abstract: X-ray vision, a technique that allows users to see through walls and other obstacles, is a popular technique for Augmented Reality (AR) and Mixed Reality (MR). In this paper, we demonstrate a dynamic X-ray vision window that is rendered in real-time based on the user’s current position and changes with movement in the physical environment. Moreover, the location and transparency of the window are also dynamically rendered based on the user’s eye gaze. We build this X-ray vision window for a current state-of-the-art MR Head-Mounted Device (HMD) – HoloLens 2 [5] by integrating several different features: scene understanding, eye tracking, and clipping primitive.
Citations: 1