{"title":"UI Binding Transfer for Bone-driven Facial Rigs","authors":"Jing Hou, Zhihe Zhao, Dongdong Weng","doi":"10.1109/VRW58643.2023.00172","DOIUrl":"https://doi.org/10.1109/VRW58643.2023.00172","url":null,"abstract":"We propose an automatic method to transfer the UI binding from a rigged model to a new target mesh. We use feed forward neural net-works to find the mapping functions between bones and controllers of source model. The learned mapping networks then become the initial weights of an auto-encoder. Then the auto-encoder is retrained using target controllers-bones pairs obtained by the mesh transfer and bones decoupling method. Our system only requires neutral expression of the target person but allows artists to customize other basic expressions, and is evaluated by the semantic reproducibility of basic expressions and the semantic similarity.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132682291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Study of Cybersickness Prediction in Real Time Using Eye Tracking Data","authors":"S. Shimada, Y. Ikei, N. Nishiuchi, Vibol Yem","doi":"10.1109/VRW58643.2023.00278","DOIUrl":"https://doi.org/10.1109/VRW58643.2023.00278","url":null,"abstract":"Cybersickness seriously degrades users' experiences of virtual real-ity (VR). The level of cybersickness is commonly gauged through a simulator sickness questionnaire (SSQ) administered after the expe-rience. However, for observing the user's health and evaluating the VR content/device, measuring the level of cybersickness in real time is essential. In this study, we examined the relationship between eye tracking data and sickness level, then predicted the sickness level using machine learning methods. Some characteristics of eye related indices significantly differed between the sickness and non-sickness groups. The machine learning methods predicted cybersickness in real time with an accuracy of approximately 70%.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132775298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Communication-Focused Framework for Understanding Immersive Collaboration Experiences","authors":"Jerald Thomas, Sang Won Lee, Alexander Giovannelli, Logan Lane, D. Bowman","doi":"10.1109/VRW58643.2023.00070","DOIUrl":"https://doi.org/10.1109/VRW58643.2023.00070","url":null,"abstract":"The ability to collaborate with other people across barriers created by time and/or space is one of the greatest features of modern communication. Immersive technologies are positioned to enhance this ability to collaborate even further. However, we do not have a firm understanding of how specific immersive technologies, or components thereof, alter the ability for two or more people to communicate, and hence collaborate. In this work-in-progress position paper, we propose a new framework for immersive collaboration experiences and provide an example of how it could be used to understand a hybrid collaboration among two co-located users and one remote user. We are seeking feedback from the community before conducting a formal evaluation of the framework. We also present some future work that this framework could facilitate.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133405159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual-to-Physical Surface Alignment and Refinement Techniques for Handwriting, Sketching, and Selection in XR","authors":"Florian Kern, Jonathan Tschanter, Marc Erich Latoschik","doi":"10.1109/VRW58643.2023.00109","DOIUrl":"https://doi.org/10.1109/VRW58643.2023.00109","url":null,"abstract":"The alignment of virtual to physical surfaces is essential to improve symbolic input and selection in XR. Previous techniques optimized for efficiency can lead to inaccuracies. We investigate regression-based refinement techniques and introduce a surface accuracy eval-uation. The results revealed that refinement techniques can highly improve surface accuracy and show that accuracy depends on the gesture shape and surface dimension. Our reference implementation and dataset are publicly available.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133634337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-speed and Low-Latency Ocular Parallax Rendering Improves Binocular Fusion in Stereoscopic Vision","authors":"Yuri Mikawa, M. Fujiwara, Yasutoshi Makino, H. Shinoda","doi":"10.1109/vrw58643.2023.00210","DOIUrl":"https://doi.org/10.1109/vrw58643.2023.00210","url":null,"abstract":"Most of the current virtual/augmented reality (VR/AR) displays use binocular parallax to present 3D images. However, they do not consider ocular parallax, which refers to the slight movement of the viewpoint with eye rotation, such as saccade. A commercial head-mounted display (HMD) has a large latency in realizing ocular parallax rendering. In our study, a high-speed (1,000 fps) and low-latency (average 4.072 ms) ocular-parallax-rendering device was assembled, and its effect was examined wherein a reasonable approximation algorithm for viewpoint was applied. A user study experiment employing a random dot stereogram (RDS) showed improvements in binocular fusion, the causes of which are comprehensively discussed.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133965981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and User Experience Evaluation of 3D Product Information in XR Shopping Application","authors":"Kaitong Qin, Yankun Zhen, Tianshu Dong, Liuqing Chen, Lingyun Sun, Yumou Zhang, Tingting Zhou","doi":"10.1109/VRW58643.2023.00200","DOIUrl":"https://doi.org/10.1109/VRW58643.2023.00200","url":null,"abstract":"Current 3D product information pages are believed to enrich the online shopping experience since it provides a more immersive experience. However, existing solutions still retain the 2D UI elements in the product information presentation design, preventing users from fully immersing themselves in the virtual environment and further degrading the shopping experience. In order to evaluate the user experience of 3D product information in XR shopping applications, we first construct a design space based on previous design cases of product information presentation in virtual environments and produce nine new solutions by combining elements in the design space.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131895596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast flame recognition algorithm base on segmentation network","authors":"Chunyu Niu, Hui Guo, Yong Wang","doi":"10.1109/VRW58643.2023.00099","DOIUrl":"https://doi.org/10.1109/VRW58643.2023.00099","url":null,"abstract":"To solve the low recognition rate of the network to flame and keep the accuracy, we propose an Instance segmentation model for recognizing and locating flames more accurate This network is improved based on the deep learning model Mask R-CNN, it introduces four key components:(1) After analyzing the effects of space and channel attention, we used an efficient convolution channel attention. (2) By comparing the convolution kernel size, an optimized dilated convolution is added to the network, (3) To eliminate redundancy, reducing the depth of the backbone while guaranteeing the accuracy of the network. (4) Finally, Adding a flame extraction algorithm behind the head. Compared with Mask R-CNN, the model size is reduced by 16.3MB, and the recognition accuracy of flame is improved by 1.7%, The comparison shows that the network can also greatly improve the recognition effect of small flames.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134174782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE VR 2023 Workshops Workshop: 1st IEEE VR Internal Workshop on Industrial Metaverse (I-Meta)","authors":"Hongming Cai, Shuangjiu Xiao, Bingqing Shen","doi":"10.1109/vrw58643.2023.00097","DOIUrl":"https://doi.org/10.1109/vrw58643.2023.00097","url":null,"abstract":"Industrial metaverse is the combination and application of virtual reality with other technologies in the industrial field, describing and connecting people, machines, materials, processes, and environments within the virtual space. Through the technologies such as digital twins, it seamlessly integrates and enables the interaction of the physical industrial space and the virtual industrial space, reciprocally improving industrial activities. By comprehensively simulating or emulating multiple scenarios, routes, and stages within the virtual space, it can cover the entire chain of industry and is shaping a new field with a promising and valuable future. Aiming at providing an open and exciting platform for promoting a new industrial digital ecology, the industrial metaverse (I-Meta) workshop focuses on constructing new industrial metaverses, from multiple fundamental dimensions to cutting-edge enabling technologies, and supporting typical industrial scenarios. 
This half-day workshop will invite all researchers and practitioners to participate and discuss new theories, architectures, technologies, patterns, or application scenarios of industrial metaverse, to share new scientific findings or practical achievements, and to describe the future vision of industrial metaverse for fostering new ideas.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115441547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"eXtended Reality Vest: A New Approach to Demonstration-Based Learning","authors":"Allison Bayro, Bryan Havens, Hee-Ran Jeong","doi":"10.1109/VRW58643.2023.00087","DOIUrl":"https://doi.org/10.1109/VRW58643.2023.00087","url":null,"abstract":"This study examines the design and usability of a new extended reality (XR) training device, XR Vest, which enhances demonstration-based training (DBT) by combining two viewpoints (first- and third-person views). Traditional DBT methods can lack engagement and pose hazards, leading to poor retention. XR technology is used to improve engagement, safety, and immersion. The XR Vest, worn by a trainer, allows for a first-person view of immersive XR environments on an integrated screen while also displaying the trainer's movements in third-person. 28 participants completed training sessions using the XR Vest and a computer monitor. NASA Task Load Index questionnaires and System Usability Scale questionnaires were used to evaluate the effectiveness of the XR Vest. Results showed the XR Vest decreased cognitive load and improved usability. Further research should investigate the benefits of this design in other areas using XR technology for DBT.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114298300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pedestrian Behavior Interacting with Autonomous Vehicles: Role of AV Operation and Signal Indication and Roadway Infrastructure","authors":"Fengjiao Zou, Jennifer Ogle, Weimin Jin, P. Gerard, Daniel Petty, Andrew C. Robb","doi":"10.1109/VRW58643.2023.00253","DOIUrl":"https://doi.org/10.1109/VRW58643.2023.00253","url":null,"abstract":"Interacting with pedestrians is challenging for Autonomous vehicles (AVs). This study evaluates how AV operations /associated signaling and roadway infrastructure affect pedestrian behavior in virtual reality. AVs were designed with different operations and signal indications, including negotiating with no signal, negotiating with a yellow signal, and yellow/blue negotiating/no-yield indications. Results show that AV signal significantly impacts pedestrians' accepted gap, walking time, and waiting time. Pedestrians chose the largest open gap between cars with AV showing no signal, and had the slowest crossing speed with AV showing a yellow signal indication. Roadway infrastructure affects pedestrian walking time and waiting time.","PeriodicalId":412598,"journal":{"name":"2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116727148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}