2021 IEEE 7th International Conference on Virtual Reality (ICVR): Latest Publications

Research on Real-Time Rendering of Reflection Caustics in Water Scenes
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483863
Huiling Guo, Sai Wang, Yong Tang, Ying Li, Jing Zhao
Abstract: To compensate for the lack of water-surface reflection caustics in the real-time rendering of water scenes, a method for simulating water-surface reflection caustics is proposed. First, the water-surface height field is constructed by introducing the Inverse Fast Fourier Transform, and the reflection calculation area of the surface light is reduced with a space-division method. Second, taking into account the angle of the reflected light to the object surface, the positional relationship between the sunlight and the hull, and other factors, the value of the reflected light received at each sampling point is calculated. To address the lack of projection occlusion, render-to-texture is introduced to improve the traditional projective-texturing method and draw the occluded texture details. Finally, extensive experimental results are compared with real photographs and related literature. The results demonstrate that the proposed method can effectively simulate the reflection caustic effect of the water surface, increase detail and realism, and meet real-time performance requirements.
Citations: 0
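The abstract's first step, constructing the water-surface height field with an inverse FFT, is commonly implemented with a Tessendorf-style spectral synthesis. The NumPy sketch below is a minimal illustration of that general technique, not the paper's implementation; the Phillips-spectrum parameters, grid size, and wind vector are assumptions chosen for the example.

```python
import numpy as np

def ocean_height_field(N=128, patch_size=100.0, wind=(8.0, 0.0), A=1e-4, g=9.81, t=0.0, seed=0):
    """Tessendorf-style water-surface height field: draw a random spectrum shaped
    by the Phillips spectrum, then inverse-FFT it back to the spatial domain.
    All parameter values here are illustrative assumptions."""
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=patch_size / N)    # wave numbers (rad/m)
    kx, ky = np.meshgrid(k, k)
    k_len = np.hypot(kx, ky)
    k_safe = np.where(k_len == 0.0, 1e-8, k_len)             # avoid division by zero
    wind = np.asarray(wind, dtype=float)
    V = np.linalg.norm(wind)
    L = V ** 2 / g                                            # largest wave driven by the wind speed
    cos_factor = (kx * wind[0] + ky * wind[1]) / (k_safe * V)
    # Phillips spectrum: how wave energy is distributed over wave vectors.
    phillips = A * np.exp(-1.0 / (k_safe * L) ** 2) / k_safe ** 4 * cos_factor ** 2
    phillips[k_len == 0.0] = 0.0
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    h0 = xi * np.sqrt(phillips / 2.0)                         # random initial spectrum
    # Time evolution with the deep-water dispersion relation; adding the
    # conjugate mirrored term keeps the spatial field real-valued.
    omega = np.sqrt(g * k_safe)
    h0_neg_conj = np.conj(np.roll(h0[::-1, ::-1], 1, axis=(0, 1)))
    h_t = h0 * np.exp(1j * omega * t) + h0_neg_conj * np.exp(-1j * omega * t)
    # Inverse FFT gives the height at every grid point (the absolute amplitude
    # scale is a normalization convention and is left untouched in this sketch).
    return np.real(np.fft.ifft2(h_t))

heights = ocean_height_field(t=1.5)   # 128 x 128 grid of surface heights
```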
VR Technology and Application in Martial Arts
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483700
Zeng Yuqing, Mingliang Cao, H. Zhang, Zhong Yong
Abstract: Virtual Reality (VR) is one of the important technologies of the 21st century because of its advantages of comfortable immersion and telepresence, which provide people with a more natural way of interacting and an immersive space for activities. The purpose of this study is to collect and summarize applications of VR in martial arts from multiple perspectives, aiming to classify virtual martial arts systems according to the feedback mode provided by the Virtual Environment (VE), the function mode of martial arts, and the key needs of users. The review is organized around three perspectives: feedback mode, function mode, and user mode. Our review is useful for both researchers and educators developing virtual martial arts systems to enhance the protection and inheritance of martial arts.
Citations: 2
Motion Estimation with L0 Norm Regularization
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483834
Jun Chen, Zemin Cai, Xiaohua Xie, Jianhuang Lai
Abstract: In this paper, we propose a novel variational optical flow model with L0 norm regularization, which uses a sparse flow-gradient counting scheme and can globally control how many non-zero flow gradients are preserved to recover important motion structures in a sparsity-controlled manner. It is particularly effective for enhancing major flow edges while eliminating a manageable degree of low-amplitude motion structures to control smoothing and reduce oversegmentation artifacts. Unlike other edge-preserving smoothing regularizers, it does not depend on local motion features but locates important flow edges globally. It does not cause edge blurriness, because it avoids local filtering and averaging operations, and it preserves salient motion structures; even small-scale motion structures with high contrast are preserved remarkably well. Benefiting from the advantages of L0 norm regularization, the proposed optical flow method shows outstanding performance in sharpening major flow edges and flattening insignificant motion details to control smoothing and reduce oversegmentation artifacts while preserving fine-scale motion structures with high contrast. It also achieves good performance on the Middlebury dataset.
Citations: 1
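The core numerical idea behind an L0 gradient regularizer — counting non-zero gradients and keeping only those above an amplitude threshold — is usually handled with half-quadratic splitting, alternating a hard-thresholding step on the gradients with a closed-form Fourier-domain update (as in Xu et al.'s L0 gradient minimization). The sketch below applies that generic scheme to a single 2D flow component; it illustrates the regularizer only, not the paper's full variational flow model, and the parameter values are assumptions.

```python
import numpy as np

def _otf(kernel_points, shape):
    """Fourier multiplier of a small circular-convolution kernel.
    kernel_points: dict {(dy, dx): weight}, offsets taken modulo the image size."""
    k = np.zeros(shape)
    for (dy, dx), w in kernel_points.items():
        k[dy % shape[0], dx % shape[1]] = w
    return np.fft.fft2(k)

def l0_gradient_smooth(field, lam=0.05, beta0=2.0, beta_max=1e5, kappa=2.0):
    """L0-gradient smoothing of a 2D field via half-quadratic splitting,
    used here as a stand-in illustration of an L0 flow-gradient regularizer."""
    S = field.astype(np.float64)
    shape = S.shape
    # Forward differences D_x S(p) = S(x+1, y) - S(x, y), as circular convolutions.
    otf_x = _otf({(0, 0): -1.0, (0, -1): 1.0}, shape)
    otf_y = _otf({(0, 0): -1.0, (-1, 0): 1.0}, shape)
    denom_grad = np.abs(otf_x) ** 2 + np.abs(otf_y) ** 2
    F_I = np.fft.fft2(S)
    beta = beta0
    while beta < beta_max:
        # (1) Sparse-gradient step: keep a gradient only if it is strong enough.
        h = np.roll(S, -1, axis=1) - S
        v = np.roll(S, -1, axis=0) - S
        weak = (h ** 2 + v ** 2) <= lam / beta
        h[weak] = 0.0
        v[weak] = 0.0
        # (2) Quadratic step: closed-form update of S in the Fourier domain.
        numer = F_I + beta * (np.conj(otf_x) * np.fft.fft2(h) +
                              np.conj(otf_y) * np.fft.fft2(v))
        S = np.real(np.fft.ifft2(numer / (1.0 + beta * denom_grad)))
        beta *= kappa
    return S

# Example: smooth one component of a noisy flow field with a sharp motion edge.
u = np.random.default_rng(0).normal(0.0, 0.1, (64, 64))
u[:, 32:] += 1.0
u_smooth = l0_gradient_smooth(u)
```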
Research on Swimming Training Based on Numerical Simulation and VR Technology
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483864
Zhiya Chen, Tianzeng Li, Jin Yang
Abstract: Computer virtual reality (VR) is a new technology widely used in many fields. In this study, a three-dimensional numerical simulation of swimming is carried out, and the resistance, propulsion, and flow field during swimming are visualized through virtual reality technology, so that a swimming coach can intuitively judge whether a swimming movement is correct and help improve skill. Numerical simulation analysis is the key technology of this approach to swimming training; combined with virtual reality, training can break the limitations imposed by the water and improve the training effect. The velocity and vortex structure of the flow field obtained in the study explain the generation mechanism of resistance and propulsion and show good results for improving swimming training.
Citations: 0
Evaluating an Augmented Reality-Based Partially Assisted Approach to Remote Assistance in Heterogeneous Robotic Applications
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483849
D. Calandra, Alberto Cannavò, F. Lamberti
Abstract: Among the countless applications of Augmented Reality (AR) in industry, remote assistance represents one of the most prominent and widely studied use cases. Recently, the way in which assistance can be delivered has started to evolve, unleashing the full potential of the technology. New methodologies have been proposed that foster operators' autonomy and reduce under-utilization of skilled human resources. This paper studies the effectiveness of a recently proposed approach to AR-based remote assistance, referred to as partially assisted, which differs from traditional step-by-step guidance in the way the AR hints are conveyed by the expert to the operator. The suitability of this approach has already been demonstrated for a number of simple industrial tasks, but a comprehensive study had yet to be performed to validate its effectiveness in complex use cases. This paper addresses this gap by considering as a case study the mastering of a robotic manipulator, a procedure involving a number of heterogeneous operations. The performance of the partially assisted approach is compared with step-by-step guidance based on both objective and subjective metrics. Results show that the former approach can be particularly effective in reducing the time investment for the expert, allowing the operator to autonomously complete the assigned task in a time comparable to traditional assistance, with a negligible need for further support.
Citations: 2
Real-Time Instance Segmentation Tracking Algorithm in Mixed Reality
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483810
Dengsha Yu, Zifei Yan, Baolin Ming
Abstract: In a mixed reality environment, completing interactions that manipulate virtual items with physical wooden sticks requires a real-time, accurate object tracking algorithm. We therefore design a fast, pixel-wise object tracking model that quickly and accurately segments the wooden sticks in each frame. The model consists of two parts: the first is an object detection model, responsible for identifying and detecting the bounding box of the object in the first frame; the second is an instance segmentation model, which uses the bounding box obtained in the previous frame and the current frame's image features (extracted by convolutional neural networks) to calculate the object boundary points in the current frame. In addition, we use dynamic convolutions in the CNNs to increase the representation capacity of the feature extraction part without increasing the depth or width of the network. Experiments show that, for the task of tracking stick objects in a mixed reality environment, our method achieves competitive performance in real-time running speed and segmentation quality.
Citations: 0
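Dynamic convolution, which the abstract uses to raise representation capacity without widening or deepening the network, typically keeps K candidate kernels and mixes them per sample with attention weights computed from globally pooled features. The PyTorch sketch below illustrates that general layer (in the spirit of "Dynamic Convolution: Attention over Convolution Kernels"); the layer sizes, number of kernels, and initialization are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Dynamic convolution: K candidate kernels mixed per sample by attention
    weights computed from globally pooled input features."""

    def __init__(self, in_ch, out_ch, kernel_size=3, K=4, reduction=4):
        super().__init__()
        self.K, self.out_ch, self.kernel_size = K, out_ch, kernel_size
        self.weight = nn.Parameter(
            torch.randn(K, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(K, out_ch))
        hidden = max(in_ch // reduction, 4)
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, K))

    def forward(self, x):
        B, C, H, W = x.shape
        attn = F.softmax(self.attention(x), dim=1)            # (B, K) mixing weights
        # Aggregate the K kernels into one kernel per sample.
        weight = torch.einsum('bk,koihw->boihw', attn, self.weight)
        bias = torch.einsum('bk,ko->bo', attn, self.bias)
        # Grouped-conv trick: fold the batch into the channel dimension so each
        # sample is convolved with its own aggregated kernel.
        x = x.reshape(1, B * C, H, W)
        weight = weight.reshape(B * self.out_ch, C, self.kernel_size, self.kernel_size)
        out = F.conv2d(x, weight, bias.reshape(-1),
                       padding=self.kernel_size // 2, groups=B)
        return out.reshape(B, self.out_ch, out.shape[-2], out.shape[-1])

layer = DynamicConv2d(16, 32)
y = layer(torch.randn(2, 16, 64, 64))   # -> torch.Size([2, 32, 64, 64])
```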
An Improved YOLOv3 Object Detection Network for Mobile Augmented Reality
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483829
Quanyu Wang, Zhi Wang, Bei Li, Dejian Wei
Abstract: With the spread of mobile devices such as smartphones, mobile augmented reality (MAR), a technology that realizes augmented reality on mobile devices, is becoming one of the most popular directions in augmented reality research. In MAR, capturing and positioning target objects — that is, tracking and registration — is a crucial problem. On mobile devices, tracking-registration technologies that use cameras as tracking sensors are divided into hardware-sensor-based and computer-vision-based approaches. Compared with the former, the latter requires less hardware and offers higher accuracy. However, traditional computer-vision-based tracking registration is susceptible to factors such as the background environment, distance, and angle. To overcome this weakness, our research draws on the progress of deep learning in object detection and lightens the YOLOv3 network: we simplify the network structure, improve multi-scale feature-fusion detection, optimize the dimensions of the candidate boxes through clustering, and optimize the loss function, so that the object detection network can run on mobile devices with guaranteed accuracy and reduce the influence of the background environment and other factors on visual tracking registration. We implement a mobile augmented reality system on iOS that achieves state-of-the-art performance.
Citations: 2
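One of the listed improvements, optimizing candidate-box dimensions through clustering, is conventionally done by running k-means over the ground-truth box widths and heights with 1 − IoU as the distance, the procedure introduced for YOLOv2/YOLOv3 anchors. The NumPy sketch below shows that standard procedure on assumed, synthetic box data; it is not the paper's code.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors compared by width/height only
    (both centered at the origin), as used for anchor clustering."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """k-means over (w, h) pairs with 1 - IoU as the distance: the usual way
    YOLO-style anchor dimensions are chosen from a dataset's ground-truth boxes."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)   # nearest anchor = highest IoU
        new = np.array([np.median(boxes[assign == j], axis=0) if np.any(assign == j)
                        else anchors[j] for j in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]          # sort anchors by area

# boxes: N x 2 array of ground-truth (width, height) pairs in pixels (synthetic here).
boxes = np.abs(np.random.default_rng(1).normal(80, 40, size=(500, 2))) + 8
print(kmeans_anchors(boxes, k=9))
```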
Virtual Reality Training Environment for Electric Systems
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483825
Zhenjun Jiang, Yang Yang, Qingshu Yuan, Pengfei Leng, Yanyan Liu, Zhigeng Pan
Abstract: Virtual Reality (VR) is widely used in training for electric systems, including the teaching of physics knowledge. In this paper, a virtual training environment application is developed to provide assembly and maintenance training for workers in the transformer industry. The application is built with the Unity3D game engine and has three modes: a learning mode, a training mode, and an exam mode. It improves the skills of professionals by visualizing the composition of transformer equipment and the operation of different processes. The environment allows workers to interact with virtual devices, gaining experience with assembly tasks while reducing the risk of production accidents. Safe and reliable transformer operation is related not only to users' power quality but is also crucial to the safety of the entire power system, so transformer assembly and maintenance must follow strict procedures and operating specifications.
Citations: 2
An Improved GRU Network for Human Motion Prediction
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483851
Weijie Yu, R. Liu, D. Zhou, Qiang Zhang, Xiaopeng Wei
Abstract: Human motion prediction is a research field with broad application prospects. With the development of deep learning, researchers have applied advanced deep-learning algorithms in this field. This paper aims to combine a GRU with a 1D-CNN without increasing the number of network parameters. We use the GRU to learn the continuity of human movement and then use one-dimensional convolution to reduce the dimensionality and generate the predicted actions. Finally, we apply a motion weight matrix, obtained with simple operations from the motion data itself, to improve the model's robustness. Test results on the Human3.6M database show that our method achieves good short-term prediction and performs better than other GRU-based methods in long-term prediction. Other networks that adopt our ideas can also improve their prediction performance.
Citations: 1
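As a rough illustration of the pipeline the abstract outlines — a GRU that models temporal continuity followed by a one-dimensional convolution that maps hidden features back to joint coordinates — the PyTorch sketch below builds a small autoregressive predictor. The pose dimension, hidden size, decoding scheme, and the omission of the motion weight matrix are all assumptions made for the example; this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class GRUConvPredictor(nn.Module):
    """Illustrative GRU + 1D-convolution motion predictor: the GRU encodes the
    observed pose sequence and rolls forward autoregressively; a Conv1d over the
    predicted hidden sequence reduces it to joint coordinates."""

    def __init__(self, pose_dim=66, hidden=256, pred_len=10):
        super().__init__()
        self.pred_len = pred_len
        self.gru = nn.GRU(pose_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, pose_dim)          # feedback into the GRU input
        self.head = nn.Conv1d(hidden, pose_dim, kernel_size=3, padding=1)

    def forward(self, history):                          # history: (B, T, pose_dim)
        _, h = self.gru(history)                         # encode the observed motion
        dec_in = history[:, -1:, :]                      # seed with the last observed frame
        hiddens = []
        for _ in range(self.pred_len):
            out, h = self.gru(dec_in, h)                 # one decoding step: (B, 1, hidden)
            hiddens.append(out)
            dec_in = self.proj(out)                      # feed back a pose-space estimate
        hiddens = torch.cat(hiddens, dim=1)              # (B, pred_len, hidden)
        # Conv1d expects (B, channels, time); it maps hidden features to joints.
        return self.head(hiddens.transpose(1, 2)).transpose(1, 2)

model = GRUConvPredictor()
future = model(torch.randn(4, 50, 66))                   # -> (4, 10, 66) predicted frames
```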
Autonomous Landing Point Retrieval Algorithm for UAVs Based on 3D Environment Perception
2021 IEEE 7th International Conference on Virtual Reality (ICVR), 2021-05-20. DOI: 10.1109/ICVR51878.2021.9483840
Zhanpeng Gan, Huarong Xu, Yuanrong He, W. Cao, Guanhua Chen
Abstract: With the development of science and technology, UAVs have been applied to many aspects of life. Computer vision is commonly used as an external sensing technology because of its low cost, high reliability, and good accuracy. Autonomous landing is an important part of autonomous flight. Conventional vision-guided landing is based on detecting a cooperative ground target and outputting the UAV's landing point position. This approach requires the UAV to accurately identify the predefined cooperative target, so when the target is lost or a forced landing is needed, the UAV cannot land normally and, in serious cases, may be damaged. To solve the problem of autonomous landing in unknown environments, this paper proposes an autonomous landing point retrieval algorithm based on UAV 3D environment perception. First, a preliminary road search is conducted at a certain height to obtain information on both sides of the road. Second, the external 3D point-cloud environment is reconstructed in real time from a binocular camera, the pre-landing plane is found by plane fitting, and the 3D information is mapped onto the 2D plane to extract a plane mask; finally, a random forest is used to determine the landing point. Experimental analysis shows that the proposed algorithm can effectively guide the UAV to land in an unknown environment.
Citations: 2
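The "find the pre-landing plane by plane fitting" step is the kind of operation commonly handled with a RANSAC plane fit on the reconstructed point cloud. The NumPy sketch below shows that generic step on synthetic data; the distance threshold, iteration count, and the choice of RANSAC itself (rather than whatever fitting method the authors used) are assumptions.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.05, seed=0):
    """RANSAC plane fit to a 3D point cloud: repeatedly fit a plane through
    three random points and keep the plane supported by the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# points: N x 3 cloud from a stereo reconstruction; here a synthetic ground plane plus clutter.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 800), rng.uniform(-5, 5, 800),
                          rng.normal(0.0, 0.02, 800)])
clutter = rng.uniform(-5, 5, (200, 3))
(plane_n, plane_d), mask = ransac_plane(np.vstack([ground, clutter]))
```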