Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, published December 27, 2022.

Grasping 3D Objects With Virtual Hand in VR Environment
Yinghan Shi, Lizhi Zhao, Xuequan Lu, Thuong N. Hoang, Meili Wang
DOI: https://doi.org/10.1145/3574131.3574428

Abstract: In existing work on virtual hand grasping, the virtual hand is typically hidden once it grasps an object in the Virtual Reality (VR) environment, which significantly degrades the user experience. In this paper, we build a real-time, flexible, robust, and natural virtual hand grasping system in which the virtual hand is an avatar of the user's real hand: by squeezing the controller handle, the user controls the virtual hand to grasp different rigid objects naturally and realistically in the VR environment. Our method comprises three modules: a Grasping Detection module that detects whether a graspable object is present, a Hand-Object Connection module that attaches the graspable object to the virtual hand, and a Finger Bending module that bends the fingers to grasp the object. We conduct experiments on using the virtual hand to grasp rigid objects and manipulate physical tools, and show that the virtual hand fits the objects very well. We also compare our method against the above hand-hiding technique in two purpose-built VR scenarios, demonstrating the superiority of our method.
{"title":"Evaluating the Sense of Embodiment through Out-of-Body Experience and Tactile Feedback","authors":"Dixuan Cui, Christos Mousas","doi":"10.1145/3574131.3574456","DOIUrl":"https://doi.org/10.1145/3574131.3574456","url":null,"abstract":"Out-of-body experience (OBE) is generated by sensory disintegration. In virtual reality (VR), we can provide OBE to people by switching the first-person perspective (1PP) to the third-person perspective (3PP). Generally, 1PP is the choice for high body ownership and presence. Moreover, tactile feedback that is experienced from the 1PP can provide a higher immersive experience. However, whether the combination of 3PP and tactile feedback could affect the sense of embodiment in immersive environments is underexplored. Thus, we conducted a 2 × 2 (OBE: 1PP vs. 3PP × Tactile Feedback [TF]: with vs. without tactile feedback) VR study to discover the effect of OBE in the presence of TF. In our study, we examined OBE and TF through the five dimensions of the sense of embodiment: body ownership, agency, tactile sensations, location of the body, and response to external stimuli. We developed an application to replicate the rubber hand illusion (RHI) study with partial body tracking. We found significant results for both OBE and TF in different dimensions of embodiment. Specifically, we revealed that 3PP decreased the body’s sense of body ownership, agency, and location. Moreover, enabling tactile feedback induced tactile sensations and responses to external stimuli. In the remainder of this paper, we discuss our findings and limitations and provide directions for future studies on OBE in VR.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123011133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-View Fusion for Sign Language Recognition through Knowledge Transfer Learning","authors":"Liqing Gao, Lei Zhu, Senhua Xue, Liang Wan, Ping Li, Wei Feng","doi":"10.1145/3574131.3574434","DOIUrl":"https://doi.org/10.1145/3574131.3574434","url":null,"abstract":"Word-level sign language recognition (WSLR), which aims to translate a sign video into one word, serves as a fundamental task in visual sign language research. Existing WSLR methods focus on recognizing frontal view hand images, which may hurt performance due to hand occlusion. However, non-frontal view hand images contain complementary and beneficial information that can be used to enhance the frontal view hand images. Based on this observation, the paper presents an end-to-end Multi-View Knowledge Transfer (MVKT) network, which, to our knowledge, is the first SLR work to learn visual features from multiple views simultaneously. The model consists of three components: 1) the 3D-ResNet backbone, to extract view-common and view-specific representations; 2) the Knowledge Transfer module, to interchange complementary information between views; and 3) the View Fusion module, to aggregate discriminative representations for obtaining global clues. In addition, we construct a Multi-View Sign Language (MVSL) dataset, which contains 10,500 sign language videos synchronously collected from multiple views with clear annotations and high quality. Extensive experiments on the MVSL dataset shows that the MVKT model trained with multiple views can achieve significant improvement when tested with either multiple or single views, which makes it feasible and effective in real-world applications.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128428325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Surface Capture for Human Performance by Fusion of Silhouette and Multi-view Stereo
Zheng Zhang, You Li, Xiangrong Zeng, Sheng Tan, Changhuan Jiang
DOI: https://doi.org/10.1145/3574131.3574450

Abstract: We present a multi-camera 3D dynamic surface capture solution that supports high-fidelity generation of 3D dynamic content for various performance scenes. Our system uses a set of RGB cameras to synchronously acquire scene image sequences and runs a processing pipeline to produce 4D videos. We propose a multi-view point cloud reconstruction method that integrates volumetric guidance and constraints into a coarse-to-fine depth estimation framework. It produces accurate point cloud models and handles scenes with textureless objects and multiple subjects well. We also present new methods and modifications for several other key computation modules of the processing pipeline, including foreground segmentation, multi-camera calibration, mesh surface reconstruction and registration, texturing, and 4D video compression. Experiments on captured scenes show that our system produces highly accurate and realistic 4D models of human performances. We have developed a 4D video player and toolkit plugins, and demonstrate how the 4D content can be integrated into VR and AR applications.
Arbitrary Style Transfer with Semantic Content Enhancement
Guoshuai Li, Bin Cheng, Luoyu Cheng, Chongbin Xu, Xiaomin Sun, Pu Ren, Yong Yang, Qian Chen
DOI: https://doi.org/10.1145/3574131.3574454

Abstract: Arbitrary style transfer, which changes the style of a source image according to a reference image, is an important topic, useful for artistic creation and intelligent imaging applications. Its main challenge is balancing the semantic feature transformation against the original semantic content. In this paper, we introduce a semantic content enhancement module that mitigates the effect of color distribution and semantic feature transformation during style transfer while preserving the original semantic structure as much as possible. We also introduce a channel attention module that enhances the style features by fusing them with the style attention network. With both enhancements, our network achieves excellent results that balance the original semantic structure and the stylized output. In addition, we migrate the algorithm to 3D space, where it also performs stably on 3D scene-based style transfer. Experiments show that our method handles various style transfer tasks.
{"title":"PM4VR: A Scriptable Parametric Modeling Interface for Conceptual Architecture Design in VR","authors":"Wanwan Li","doi":"10.1145/3574131.3574442","DOIUrl":"https://doi.org/10.1145/3574131.3574442","url":null,"abstract":"In this paper, we propose PM4VR, a novel scriptable parametric modeling interface for the Unity3D game engine which can be applied to VR-driven parametric modeling designs. By simplifying prevailing advanced programming languages such as C# and Java, we propose another programming language, named Java♭, to simplify the grammar and lower the programmer’s learning curve. By implementing a series of advanced parametric modeling techniques, we integrate our Java♭ compiler virtual machine with those functionalities which can facilitate interactive parametric modeling design process on the Unity3D game engine within immersive SteamVR environments. More specifically, in this paper, we introduce the Java♭ programming language, explain the implementation details of Java♭ compiler virtual machine, and discuss the experimental results of the interactive parametric modeling on conceptual architecture designs using PM4VR. Besides, a Supplementary Material with Java♭ programming examples is included.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122546238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel scheme to developing printing protocol with the required surface roughness","authors":"Zening Men, Li-dong Zhao, Feng Yang, Mengcheng Jiang, Lifang Wu, Zun Li","doi":"10.1145/3574131.3574460","DOIUrl":"https://doi.org/10.1145/3574131.3574460","url":null,"abstract":"3D printing has the characteristics of layer-by-layer manufacturing, which causes high surface roughness. This paper proposes a scheme to develop a printing protocol combining layer-wise and continuous printing based on the required surface roughness. It involves four modules: model slicing, printing pattern estimation, slices adjusting and printing protocol generation. Firstly, the candidate slices are obtained by model slicing based on the required surface roughness. Secondly, the printing pattern of each slice is estimated based on the max-min distance of the corresponding slice and Maximum Filled Distance (MFD) of the printing resin material. Then, the candidate slices are adjusted based on the maximum and minimum printable thickness of the printer. Finally, the printing protocol involving slice number, slice thickness, printing pattern and printing time is generated, and the surface roughness of the printed objects using the generated printing protocol can be estimated. Two printing protocols of the model cup with different required surface roughness are automatically generated. And two objects are printed based on the corresponding printing protocols. The roughness of the printed objects is measured using the roughness tester. The average roughness of the printed objects is smaller than the required roughness because the roughness of continuous printing is small. And the error between the measured and the predicted roughness is smaller than 2 µm.","PeriodicalId":111802,"journal":{"name":"Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127912303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crowd Evacuation Conflicts Simulation Based Cellular Automaton Integrating Game Theory
Junxiao Xue, Hao-Chun Yang, Minghchuang Zhang, Zekun Wang, Lei Shi
DOI: https://doi.org/10.1145/3574131.3574445

Abstract: Developing and rehearsing crowd evacuation plans for crowd-gathering situations can improve evacuation efficiency and reduce safety accidents. During evacuation, however, pedestrians compete for evacuation routes, creating resource conflicts with one another. Inspired by cellular automata and game theory, this paper proposes a crowd evacuation model that integrates the two to resolve conflicts among pedestrians during evacuation. We construct a basic crowd evacuation model using a cellular automaton, formulate a game rule for pedestrian conflicts based on the prisoner's dilemma, and integrate the pedestrians' strategy updates into the automaton. Experiments verify the model's validity by comparing the update and non-update strategies, and crowd evacuation is visualized and simulated in a constructed stadium scenario. The results show that the evacuation model incorporating the pedestrian conflict game rules is more realistic and improves evacuation efficiency when pedestrians adopt the cooperative strategy.
A Comparative Study of Two Marker-Based Mobile Augmented Reality Applications for Solving 3D Anamorphic Illusion Puzzles
Deborah Rose Buhion, Michaela Nicole Dizon, Thea Ellen Go, Kenneth Neil Oafallas, Patrick Jaspher Joya, Alexandra Cyrielle Mangune, Sean Paulo Nerie, N. P. Del Gallego
DOI: https://doi.org/10.1145/3574131.3574443

Abstract: Anamorphic illusions are a class of optical illusions in which objects are perspectively distorted so that they become recognizable only when viewed from a certain point of view or direction. We developed two marker-based mobile augmented reality applications that demonstrate 3D anamorphic illusions. We frame this as a puzzle-solving mechanic in which users must align the anamorphic pieces by moving the device camera until they form a recognizable virtual model. The first application uses printable 2D markers (2D marker-based AR), while the second uses tabletop items such as cereal boxes, tin cans, and action figures as markers (3D marker-based AR). The applications differ in scene setup, user interactions, and the anamorphic illusions shown. We sliced publicly available 3D models and randomly distributed the slices in virtual space relative to a camera viewpoint from which the model becomes recognizable. Our framework ensures that each playthrough provides a new anamorphic illusion. Early user testing shows that the 2D marker-based AR application is more effective at showcasing anamorphic illusions.
Digital Twin System for Propulsion Design of UAVs
Zexi Wang, Pan Chen, Yajie Qin, Jingsi Xie, Ying Wang, Bin Luo, Chengxiang Zhu, Juncong Lin, Y. You
DOI: https://doi.org/10.1145/3574131.3574464

Abstract: With the widespread application of unmanned aerial vehicles (UAVs) in various domains, the need to develop specialized UAVs for different operations is increasing, which presents challenges for UAV design. Meanwhile, the digital twin has demonstrated great potential in smart manufacturing and other areas. This paper introduces a digital twin system for the propulsion design of UAVs, developed with Matlab and Unity. The system includes trajectory planning and visual analysis components that help users evaluate UAV propulsion designs effectively, letting them explore how different propulsion configurations affect the trajectory and vice versa. The performance and effectiveness of the system are verified through tests of frame rate (FPS), computational analysis, and network transmission performance.