2023 9th International Conference on Virtual Reality (ICVR) — Latest Publications

Virtual Training System for Vascular Interventional Surgery
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169546
Pan Li, Boxuan Xu, Delei Fang, Junxia Zhang, Xinghua Lin, Yan Zhang, Xinxin Zhang
Abstract: Vascular interventional surgery, one of the most effective treatments for severe cardiovascular and cerebrovascular diseases, is performed mainly by advancing a metal guide wire to the lesion area. The intervention is challenging for new physicians who are not yet skilled in clinical operation, so a virtual training system can effectively support their training. This paper develops a training system that simulates the intervention by creating a virtual environment and pushing the guide wire to the focal area through curved blood vessels. The system comprises three parts: construction of the virtual operating environment, modeling and simulation of the guide wire and blood vessels, and virtual interactive control. Through design and integration, simulation training of guide-wire insertion into blood vessels is realized.
Citations: 0
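The abstract does not detail the guide-wire model. As a rough, illustrative stand-in (not the authors' method), the sketch below advances a discretized wire along a sampled vessel centerline with a simple follow-the-leader scheme: each push moves the tip forward and the trailing nodes occupy the preceding centerline samples. The quarter-circle vessel, node count, and function names are all assumptions:

```python
import numpy as np

def wire_nodes(centerline, tip_idx, n_nodes):
    """Positions of a discretized guide wire whose tip sits at
    centerline[tip_idx]; trailing nodes occupy the preceding samples
    (clamped at the insertion point, index 0)."""
    idx = np.clip(np.arange(tip_idx, tip_idx - n_nodes, -1), 0, None)
    return centerline[idx]

# toy curved vessel: a quarter-circle centerline of radius 20 mm
theta = np.deg2rad(np.arange(91))
centerline = np.stack([20 * np.cos(theta), 20 * np.sin(theta),
                       np.zeros_like(theta)], axis=1)

tip = 5
tip = min(tip + 10, len(centerline) - 1)   # one "push" advances the tip 10 samples
nodes = wire_nodes(centerline, tip, 4)     # tip-first node positions
```

A full training system would replace the rigid centerline-following with a deformable wire model and collision response, but the same push/advance interface applies.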
Multiple Human Tracking Using Deep Learning with Shadow Clues
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169452
Wei Gai, Chunxiao Xu, Xiyu Bao, Cheng Lin, Hongqiu Luan, Yu Wang, Guanqi Mu, Chenglei Yang
Abstract: Occlusion is an inevitable problem in virtual reality systems where a single camera is used to track multiple players during interaction. In this work, we observe that a user's shadow is an important hint for tracking when the user is occluded. We therefore propose a novel tracking approach that effectively leverages the shadow information of target users, leading to more robust tracking in complex environments. Our key idea is to train a shadow detection model based on Mask R-CNN to extract shadows from image frames. To handle different levels of occlusion, we define a series of tracking statuses to represent occlusion. To better evaluate the proposed method, we also contribute a dataset containing numerous image frames with various forms of human shadows. Experiments demonstrate that our method not only handles user tracking effectively, even under full and long-term occlusion, but also exhibits superior real-time efficiency.
Citations: 0
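The "series of tracking statuses" is not specified in the abstract. A minimal sketch of what such a status update could look like — the status names and thresholds below are purely hypothetical, not the paper's definitions:

```python
def update_status(visible_ratio, shadow_detected):
    """Map the fraction of the person's body that is visible, plus whether
    a shadow was detected, to a tracking status. Thresholds and status
    names are hypothetical illustrations."""
    if visible_ratio > 0.7:
        return "VISIBLE"        # track the person detection directly
    if visible_ratio > 0.2:
        return "PARTIAL"        # fuse person and shadow cues
    return "SHADOW_ONLY" if shadow_detected else "LOST"
```

Under full occlusion (`SHADOW_ONLY`), the tracker would fall back entirely on the Mask R-CNN shadow mask to infer the occluded user's position.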
The Design and Study of Facial Color Diagnosis System Based on Virtual Reality
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169227
Pengfei Bao, Bo Yuan, Juan Zhang, Youliang Huang
Abstract: With the development of virtual simulation, virtual reality technology has matured in aided design and has been applied to acupuncture-point display and medical education, greatly reducing modeling difficulty and the difficulties traditional Chinese medicine faces. This paper presents a technical solution for a facial color diagnosis system from a new perspective: models for facial color diagnosis in traditional Chinese medicine are constructed, and the interactive process and query framework are designed using the virtual interaction platform Quest3D as the scene driver. On the basis of this construction plan for a traditional Chinese medicine facial color diagnosis platform, the paper discusses the design of the other functional modules and looks ahead to applying the technology and the facial color diagnosis system in a virtual museum of traditional Chinese medicine.
Citations: 0
Camera Field Calibration Method Using Collinear Point Target Joint Constraints
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169611
Zhiyuan Dang, Zhiyi Zhang, Zhenhua Wang
Abstract: A method based on a specially designed coplanar point target and multiple projection invariants is proposed for efficient and rapid on-site calibration of visual measurement systems. The main features of this method include the separation of the distortion model from the camera model, the accurate determination of the distortion center using the fundamental matrix, and the definition of the distortion measure function based on constraints such as coplanar vanishing point, weighted line distance, and cross ratio invariance. The optimal distortion coefficients are then obtained through optimization using the Levenberg-Marquardt algorithm, followed by linear calibration of the camera intrinsic and extrinsic parameters. Experimental results demonstrate that this algorithm can rapidly and accurately obtain camera parameters, and it is easy to implement on-site in industrial settings.
Citations: 0
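The paper's distortion measure combines vanishing-point, weighted line-distance, and cross-ratio constraints with Levenberg-Marquardt optimization. The simplified sketch below keeps only the core straightness idea: points that are collinear in the scene should be collinear after undistortion, so a single division-model coefficient can be recovered by searching for the value that best restores line straightness. The model form, synthetic data, and brute-force search are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def undistort(pts, k1, center):
    """Division distortion model: p_u = c + (p_d - c) / (1 + k1 * |p_d - c|^2)."""
    d = pts - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d / (1.0 + k1 * r2)

def line_residual(pts):
    """Collinearity measure: smallest singular value of the centered points
    (zero when the points lie exactly on a line)."""
    return np.linalg.svd(pts - pts.mean(axis=0))[1][-1]

center = np.array([320.0, 240.0])
k_true = 1e-6

# synthetic straight edge in the undistorted image (offset from the center,
# since a line through the distortion center stays straight under radial distortion)
t = np.linspace(-200.0, 200.0, 11)
line = np.stack([320.0 + t, 140.0 + 0.5 * t], axis=1)

# forward-distort by fixed-point iteration so that undistort(pd, k_true) == line
pd = line.copy()
for _ in range(50):
    r2 = ((pd - center) ** 2).sum(axis=1, keepdims=True)
    pd = center + (line - center) * (1.0 + k_true * r2)

# brute-force search for the coefficient that best restores straightness
ks = np.linspace(0.0, 2e-6, 201)
k_hat = ks[int(np.argmin([line_residual(undistort(pd, k, center)) for k in ks]))]
```

In the actual method, Levenberg-Marquardt would replace the grid search and the residual would stack all three invariance constraints.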
A Task Estimation Method Based on Image Recognition and Its Application to EMG Prosthetic Hand Control
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169610
Shunji Hashiguchi, T. Shibanoki
Abstract: In this paper, we propose a myoelectric prosthetic hand that enables stable control across various activities of daily living. The proposed prosthetic hand includes a state transition model of daily-life activities. Each state is estimated by a neural network from the user's activity, captured by a camera attached to the prosthetic hand. By suppressing unnecessary motions based on the estimation result, accurate control can be achieved, assisting the recognition of movements from electromyogram signals. In the experiments, the participant was asked to operate the proposed prosthetic hand with the proposed model, and it was demonstrated that the prosthetic hand could be freely controlled in various daily-life situations.
Citations: 0
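The described idea — limiting unnecessary motions according to the camera-estimated task state — can be sketched as a lookup that masks the EMG classifier's outputs. The task states, motion names, and scores below are entirely hypothetical illustrations, not the paper's actual model:

```python
# Hypothetical mapping from camera-estimated task states to permitted hand motions.
ALLOWED = {
    "grasp_bottle": {"power_grip", "open"},
    "pinch_card":   {"pinch", "open"},
    "idle":         {"open"},
}

def select_motion(task_state, emg_scores):
    """Pick the highest-scoring EMG motion class among those the current
    task state allows; motions outside the set are suppressed."""
    allowed = ALLOWED.get(task_state, {"open"})
    candidates = {m: s for m, s in emg_scores.items() if m in allowed}
    return max(candidates, key=candidates.get)
```

Masking in this way means a noisy EMG score for an implausible motion (e.g., a power grip while handling a card) can no longer win the classification.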
Design of A Virtual Reality Environment with Error-Less Learning for Rehabilitation of Aging Adults with Stroke in IADLs
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169847
Andrew Quinlan, Richard O. Oyeleke
Abstract: Instrumental activities of daily living (IADLs) are self-care tasks that are learned, complex, and executed in sequential steps. The ability to perform IADLs independently can improve quality of life. However, neurological disorders such as stroke limit aging adults' ability to perform IADLs independently, which affects their quality of life. Physical therapists employ error-less learning techniques to help aging adults relearn IADL skills after stroke in naturalistic settings; however, this exposes them to the risk of accidents (e.g., falls, fires) when IADL execution instructions are not adhered to. To that end, this work presents the design of an immersive virtual reality environment for retraining aging adults in IADL skills, mitigating the risk of accidents from incorrect execution of IADLs.
Citations: 0
Towards Cultural Heritage Digital Twin: Concept, Characteristics, Framework and Applications
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169702
Li Xin, Gu Hongyu, Seo Eun Kyeong, Wu Qitao, Yin Guojun, Deng Bangkun
Abstract: The introduction of the digital twin has advanced lifecycle management of cultural heritage and driven innovation in its digital protection, utilization, interpretation, and dissemination. By reviewing the relevant literature at home and abroad, this paper summarizes the characteristics of the digital twin; compares data lifecycle management, enabling technologies, and application frameworks in architectural cultural heritage and the cultural heritage metaverse; analyzes the connotation of the cultural heritage digital twin and the digital-twin process; and proposes a framework for the cultural heritage digital twin. Taking two representative heritage areas of the Huai'an section of the Grand Canal as examples for application design, the necessary conditions and challenges are discussed.
Citations: 0
Virtual Reality Based Manual Spraying Modeling and Simulation
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169479
Hanzhong Xu, Dianliang Wu, Wenjuan Yu, Yue Zhao, Qihang Yu, Kai Zou
Abstract: In ship painting, the distribution of coating thickness is difficult to calculate because of the complexity of coatings formed by manual spraying. Therefore, a virtual reality (VR) based modeling and simulation method for manual spraying is proposed. The manual spraying model (MSM) is established from the relationship between gun parameters, position, direction, and coating thickness, and is verified by a designed spraying test. The results show that the error between simulation and test is less than 8%, demonstrating the accuracy of the MSM. Based on the MSM, a VR-based ship spraying simulation scene was developed using C# and Unity3D; its error was less than 5% when compared with the coating thickness of the actual test, further illustrating the reliability of the MSM.
Citations: 0
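The abstract does not give the MSM's form. As a stand-in, spray-deposition simulations often use an axisymmetric falloff model (e.g., a beta-distribution profile over the spray spot) and accumulate thickness over dwell time as the gun moves. Everything below — the spot-growth rate, exponent, and pass geometry — is an assumed illustration, not the authors' MSM:

```python
import numpy as np

def deposition_rate(r, d, spread=0.15, q=2.0):
    """Beta-distribution deposition profile (an assumed model): unit peak
    rate on the spray-cone axis, falling to zero at the spot edge.
    r: radial offset from the cone axis on the surface (m)
    d: gun-to-surface distance (m); spot radius grows linearly, R = spread*d."""
    R = spread * d
    profile = (1.0 - (r / R) ** 2) ** (q - 1.0)   # q=2 keeps the power integer-valued
    return np.where(r < R, profile, 0.0)          # zero outside the spot

# thickness at a fixed surface point as the gun sweeps past in one straight pass
xs = np.linspace(-0.5, 0.5, 101)       # gun positions along the pass (m)
dt = 0.01                              # time spent per position (s)
r = np.abs(xs)                         # offset of the point from the cone axis
thickness = (deposition_rate(r, d=0.3) * dt).sum()
```

Summing such passes over a mesh of surface points yields the coating-thickness distribution that the VR scene can then visualize and compare against test measurements.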
Brain-Metaverse Interaction for Anxiety Regulation
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169785
Nanlin Jin, Ye Wu, J. ParK, Zihui Qin, Hai-Ning Liang
Abstract: The metaverse has become a powerful tool for conducting research in many domains, including education, social science, and healthcare. It mixes virtual and physical environments and can produce various stimuli for users to experience while immersed in the virtual-real environment. At present, however, these stimuli are preset and immobile, not responding to the user's changing requirements. In addition, there is a lack of studies on how brain signals might indicate the demand or preference for specific VR content, and on whether and how VR can interact with users' brains directly, hands-free, and without verbal instructions. Given the metaverse's natural association with learning and brain activities, receiving signals directly from the user's brain offers a firm basis for exploring mental health issues. This research proposes a new framework, Brain-Metaverse Interaction (BMI), which enables direct interaction between users' brain signals and the adaptation of VR content in an iterative and evolving manner. Our experiment based on this framework shows promising results, despite the typical limitations of hardware devices and data acquisition, such as signal noise in the EEG data and the sensitivity and latency of the EEG device.
Citations: 0
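The abstract does not say which EEG features drive the VR adaptation. A common, minimal choice is frequency-band power (e.g., alpha vs. beta) computed from an FFT periodogram. The synthetic signal and the relaxation rule below are illustrative assumptions, not the BMI framework's actual pipeline:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of signal x in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 256
t = np.arange(fs * 4) / fs                 # 4 s of synthetic "EEG"
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))

alpha = band_power(x, fs, 8, 13)           # relaxation-linked alpha band
beta = band_power(x, fs, 13, 30)           # arousal-linked beta band
relaxed = alpha > beta                     # crude flag that could drive VR adaptation
```

In a closed-loop system such a flag (or a continuous ratio) would be recomputed each window and fed back to adjust the VR stimuli iteratively.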
DSC-GraspNet: A Lightweight Convolutional Neural Network for Robotic Grasp Detection
2023 9th International Conference on Virtual Reality (ICVR) | Pub Date: 2023-05-12 | DOI: 10.1109/ICVR57957.2023.10169448
Zhiyang Zhou, Xiaoqiang Zhang, Lingyan Ran, Yamin Han, Hongyu Chu
Abstract: Grasp detection is an essential task for robots to achieve autonomous operation; it can also make virtual reality-based teleoperation more intelligent and reliable. Existing learning-based grasp detection methods usually fail to strike a balance between high accuracy and low time consumption, and their large number of model parameters tends to make them expensive to deploy. To solve this problem, a lightweight generative grasp detection network, DSC-GraspNet, is proposed. First, depthwise-separable convolutional blocks with Coordinate Attention (CA) are stacked to obtain a lightweight backbone network for feature extraction. Then, multi-level features extracted by the backbone are fused by the Cross Stage Partial (CSP) block in the up-sampling network. Finally, pixel-level grasp candidates are generated by grasp-generating heads. Experimental results show that accuracies of 98.3% under image-wise splitting and 97.7% under object-wise splitting are achieved on the Cornell public dataset, and an accuracy of 94.7% is achieved on the Jacquard dataset using the depth map as input. Our method also achieves a grasp success rate of 86.4% in a simulated grasping test. In addition, our network can run inference on an RGB-D image within 14 ms and can be applied to closed-loop grasping scenarios.
Citations: 0
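The network's name points to depthwise-separable convolutions, which replace a dense KxK convolution with a per-channel depthwise filter plus a 1x1 pointwise channel mix. A minimal NumPy version of the operation (illustrative only; the paper's block additionally includes Coordinate Attention):

```python
import numpy as np

def depthwise_separable_conv(x, dw_k, pw_k):
    """Depthwise-separable convolution: a per-channel KxK depthwise filter
    followed by a 1x1 pointwise convolution that mixes channels.
    x: (C, H, W); dw_k: (C, K, K); pw_k: (C_out, C); 'same' zero padding."""
    C, H, W = x.shape
    K = dw_k.shape[1]
    p = K // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    dw = np.zeros_like(x)
    for c in range(C):                 # each channel is filtered independently
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = (xp[c, i:i + K, j:j + K] * dw_k[c]).sum()
    return np.tensordot(pw_k, dw, axes=([1], [0]))   # 1x1 conv -> (C_out, H, W)

x = np.arange(32, dtype=float).reshape(2, 4, 4)   # toy 2-channel feature map
dw_k = np.zeros((2, 3, 3)); dw_k[:, 1, 1] = 1.0   # identity depthwise kernels
pw_k = np.eye(2)                                  # identity channel mixing
out = depthwise_separable_conv(x, dw_k, pw_k)     # equals x for these kernels
```

The parameter saving behind "lightweight": for C=64 inputs, C_out=128 outputs, K=3, a dense convolution needs 128*64*9 = 73,728 weights, while the separable version needs 64*9 + 128*64 = 8,768.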