A Low-cost Efficient Approach to Synchronize Real-world and Virtual-world Objects in VR via In-built Cameras
Chao Liu, Rongkai Shi, Nan Xiang, Jieming Ma, Hai-Ning Liang
DOI: 10.1145/3574131.3574439

Abstract: Virtual reality (VR) technology has become a growing force in entertainment, education, science, and manufacturing due to its capability of providing users with immersive experiences and natural interaction. Although common input devices such as controllers, gamepads, and trackpads have been integrated into mainstream VR systems for user-content interaction, they cannot provide users with realistic haptic feedback. Some prior work tracks and maps physical objects into the virtual space so that users can interact with these objects directly, which improves users' sense of reality in the virtual environment. However, most of this work relies on additional hardware sensors, which inevitably increases cost. In this research, we propose a lightweight approach to synchronizing the positions and motions of physical and digital objects without any extra hardware. We use real-time video captured by the in-built cameras of a VR headset and employ feature-point-based algorithms to generate projections of the physical objects in the virtual world. Because our approach uses only components already available in a VR headset, users can interact with target objects directly with their hands, without the specially designed trackers, markers, or other hardware devices used in previous work, and receive more realistic operational feedback when interacting with the corresponding virtual objects.
A Study on Contextual Task Performance of Simulated Homonymous Hemianopia Patients with Computational Glasses-based Compensation
Chao Ge, Zhenyang Zhu, Keisuke Ichinose, I. Fujishiro, M. Toyoura, K. Go, K. Kashiwagi, Xiaoyang Mao
DOI: 10.1145/3574131.3574441

Abstract: People with homonymous hemianopia (HH) lose the same half of the visual field in both eyes and thus cannot obtain visual information from the lost field. Making use of the remaining visual field, state-of-the-art studies have proposed the Overlaid Overview Window (OOW) and the Edge Indicator (EI), both built on augmented reality (AR) glasses, as compensation techniques. However, the experiments in these studies investigated user performance only on tasks involving events in either the lost field or the remaining field alone. Moreover, both studies recruited normally sighted individuals for mock experiments, and their way of simulating HH, which required participants to fix their viewing angle, did not reflect the experience of real HH patients. In this study, we conduct a contextual-information experiment to investigate user performance on a task that requires information from both the visible and invisible sides of HH, with the compensation of OOW and a Flicker-based EI (FEI). We likewise recruit volunteers with normal vision for the mock experiment, but our participants are allowed to move their gaze freely because we simulate the invisible field of HH on AR glasses with eye tracking. The results showed that OOW is better for tasks that involve moving something from the remaining field of view (FoV) to the lost FoV, while FEI is better for moving something from the lost FoV to the remaining FoV.
{"title":"Teleoperation of a Fast Omnidirectional Unmanned Ground Vehicle in the Cyber-Physical World via a VR Interface","authors":"Yiming Luo, Jialin Wang, Yushan Pan, Shan Luo, Pourang Irani, Hai-Ning Liang","doi":"10.1145/3574131.3574432","DOIUrl":"https://doi.org/10.1145/3574131.3574432","url":null,"abstract":"This paper addresses the relations between the artifacts, tools, and technologies that we make to fulfill user-centered teleoperations in the cyber-physical environment. We explored the use of a virtual reality (VR) interface based on customized concepts of Worlds-in-Miniature (WiM) to teleoperate unmanned ground vehicles (UGVs). Our designed system supports teleoperators in their interaction with and control of a miniature UGV directly on the miniature map. Both moving and rotating can be done via body motions. Our results showed that the miniature maps and UGV represent a promising framework for VR interfaces.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"236 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121161504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3DCMM: 3D Comprehensive Morphable Models for Accurate Head Completion","authors":"J. Zhang, Y. Luximon, Lei Zhu, Ping Li","doi":"10.1145/3574131.3574435","DOIUrl":"https://doi.org/10.1145/3574131.3574435","url":null,"abstract":"3D head completion aims at recovering accurate 3D full-head geometry from 2D face images or 3D face scans. Previous 3D shape reconstruction studies primarily focused on the facial region, but ignored the scalp region. Moreover, as critical foundations in 3D head completion, powerful 3D head morphable models, however, are scarce. In this paper, we construct 3D comprehensive morphable models (3DCMM) of human faces and scalps, and develop a novel 3DCMM-based stepwise 3D full-head creation pipeline: reconstructing face regions firstly, and then completing scalp regions. Firstly, large-scale 3D heads from 2,528 identities were parameterized to construct powerful 3DCMM as our foundations. Then, a 3DCMM-based supervised converting method was presented to predict an accurate scalp region from a facial region and produce full-head geometry. Extensive experiments and comparisons demonstrated that our 3DCMM possesses better quality and descriptive power. Benefiting from this, our model-based 3D head completion method has higher accuracy than model-based fitting method.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115438343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contour-constrained Specular Highlight Detection from Real-world Images","authors":"Chenlong Wang, Zhongqi Wu, Jianwei Guo, Xiaopeng Zhang","doi":"10.1145/3574131.3574461","DOIUrl":"https://doi.org/10.1145/3574131.3574461","url":null,"abstract":"Specular highlight detection is a fundamental research topic in computer graphics and computer vision. In this paper, we present a new full-scale deep supervision model to detect specular highlights from single real-world images. The core of our approach is a novel self-attention module to improve the detection accuracy of the network. We also introduce a refinement strategy with a new loss function for highlight detection task by generating contour maps from the highlight detection masks. Experiments on a public dataset demonstrate that our approach outperforms state-of-the-art methods for highlight detection.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129584821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asset Cloud - China Codes: Design of the Digital Identity Codes of State-owned Assets of China","authors":"Yijun Li, Zhiqiang Xing","doi":"10.1145/3574131.3574436","DOIUrl":"https://doi.org/10.1145/3574131.3574436","url":null,"abstract":"Data are assets, media are cognition, arts are innovation and algorithms are design. Breaking away from norms, the Asset Cloud - China Codes represents a static visual image application utilizing typical patterns, which gives rise to a “dynamical visual image” system based on the variables of state-owned data assets that constantly evolves and is uniformly integrated under the familial language comprised of visual symbols. It is the first case of algorithm driving the design of a visual image recognition system in the era of the Internet of Things, as well as a forward-looking design for government management and AI-powered data management in the future.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128112389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphic Design and Evaluation of an Augmented Reality for Advergame
B. Hu, Wenxiao Wang, Kai Chan, Zhian Chen, Chaolan Tang, Ping Li
DOI: 10.1145/3574131.3574462

Abstract: This letter introduces an Augmented Reality Advergame (ARA) designed to eliminate the pain points of current advergames: cumbersome interface design, excessive entertainment elements that distract users, and low participation among middle-aged and elderly users. To this end, our ARA improves the color scheme, optimizes graphic proportions, and reduces the typographical complexity of the interface design; furthermore, we introduce augmented reality into the interaction. Eye-tracking studies demonstrate that the ARA directs user attention to key information better than current advergames, and perception studies confirm that ARA user engagement is 25% higher than that of the control advergame.
{"title":"Real-Time Generation of Leg Animation for Walking-in-Place Techniques","authors":"Jingbo Zhao, Zhetao Wang, Yiqin Peng, Yaojun Wang","doi":"10.1145/3574131.3574446","DOIUrl":"https://doi.org/10.1145/3574131.3574446","url":null,"abstract":"Generating forward-backward self-representation leg animation in virtual environments for walking-in-place (WIP) techniques is an underexplored research topic. A challenging aspect of the problem is to find an appropriate mapping from tracked vertical foot motion to natural cyclical movements of real walking. In this work, we present a kinematic approach based on animation rigging to generating real-time leg animation. Our method works by tracking vertical in-place foot movements of a user with a Kinect v2 sensor and mapping tracked foot height to inverse kinematics (IK) targets. These IK targets were aligned with an avatar's feet to guide the virtual feet to perform cyclic walking motions. We conducted a user study to evaluate our approach. Results showed that the proposed method produced compelling forward-backward leg animation during walking. We show that the proposed technique can be easily integrated into existing WIP techniques.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132799181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry
I. Fujishiro, Zhigeng Pan, N. Magnenat-Thalmann, S. N. Spencer, Masaki Oshita, Xubo Yang, H. Yang
DOI: 10.1145/2407516

Abstract: The city of Singapore proudly hosts the 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI 2008), on 8-9 December 2008. This year, the conference will take place just before SIGGRAPH Asia 2008.

An exciting VRCAI 2008 awaits participants from both academia and industry in Singapore, a hotbed of innovation, where state-of-the-art technologies and applications in the Virtual Reality Continuum (VRC) will be explored and presented. Spanning next-generation infocommunication environments such as Virtual Reality (VR), Augmented Virtuality (AV), Augmented Reality (AR), and Mixed Reality (MR), the VRC is key to the way we define and interact with, and within, our virtual worlds. Advances in research and novel applications in this field have revolutionized much of our leisure activities, making them more appealing and fun. Just as significantly, these advances provide the foundation for more effective interactivity in work- and learning-related activities.

To advance research in the VRC field, the VRCAI conference provides a forum for scientists, researchers, developers, users, and industry players in the international VRC community to share experiences, exchange ideas, and spur one another on in this fast-growing field. VRCAI 2008 focuses on four main themes: Fundamentals, Systems, Interactions, and Industry and Applications. Interactions is a newly introduced focus area of growing importance: as VRC progresses from research into industry and more user applications are developed, how users interact with such systems will determine the success of VRC technology adoption.

Since its beginnings as the 1st International Workshop on Virtual Reality and Visualization in Scientific Computing, held in Hangzhou, China, in 1995, VRCAI has grown and matured into a reputable biennial conference. Selected papers from past VRCAI conferences have been published in special issues of Computers & Graphics, the International Journal of Virtual Reality, the International Journal of Image and Graphics, and Computer Animation and Virtual Worlds. Since 2004, the conference has been indexed by EI.