2023 9th International Conference on Virtual Reality (ICVR): Latest Publications

Contemporary Value Research and Digital Protection Practice of Traditional Wooden Boats in Hongze Lake
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169511
Deng Bangkun, Liu Zhaoxiang, Seorin Kyeong, Wu Qitao, Yin Guojun, Li Xin
{"title":"Contemporary Value Research and Digital Protection Practice of Traditional Wooden Boats in Hongze Lake","authors":"Deng Bangkun, Liu Zhaoxiang, Seorin Kyeong, Wu Qitao, Yin Guojun, Li Xin","doi":"10.1109/ICVR57957.2023.10169511","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169511","url":null,"abstract":"The water area of Hongze Lake has obvious characteristics and rich aquatic resources. Since ancient times, local traditional wooden boats have been closely related to the daily life of residents and fishing, hunting and labor. In order to protect and inherit the traditional wooden boats in Hongze Lake, and to further explore its contemporary value, the existing documents are sorted out from the perspective of historical culture, interviews are conducted with the inheritors of intangible cultural heritage, and the local folk culture is interpreted to provide reference for the development of ships and handicrafts in Hongze Lake area; Record the wooden ship manufacturing technology from the perspective of process technology, sort out relevant materials, dimensions, structures and manufacturing technology, and provide some basic principles and experience guidance for the modern shipbuilding industry; From the perspective of practical functions, this paper analyzes the scientificity and rationality of the function and modeling coordination of various ship types, analyzes their design value, and helps to understand the creation ideas of local artisans. On the basis of this research, digital modeling and recording will be carried out to achieve the early filing and display, lay the foundation for the later dissemination and innovation, and promote the protection and development of traditional wooden boats in Hongze Lake.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122658830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dense Point Cloud Reconstruction Based on a Single Image
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169465
Hanxing Li, Meili Wang
{"title":"Dense Point Cloud Reconstruction Based on a Single Image","authors":"Hanxing Li, Meili Wang","doi":"10.1109/ICVR57957.2023.10169465","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169465","url":null,"abstract":"How to recover high-resolution 3D point clouds from a single image is of great importance in the field of computer vision. However, due to the limited information contained within a single image, reconstructing a dense point cloud of objects from a single image is a highly challenging task. In this paper, we construct a multi-stage dense point cloud reconstruction network that incorporates a coordinate attention mechanism and a point cloud folding operation. The proposed network model comprises two parts: an image-based sparse point cloud generation network and a folding-based dense point cloud generation network. Firstly, we generate a sparse point cloud by extracting the features of the target object from a single image using an image-based sparse point cloud generation network. Then we use the folding-based dense point cloud generation network to density the generated sparse point cloud. Finally, the two stages are combined by deep learning fine-tuning techniques to form an end-to-end dense point cloud reconstruction network that generates a dense point cloud from a single image. By evaluating the synthetic datasets, the proposed method effectively reconstructs the dense point cloud model of the corresponding object and outperforms existing methods in terms of metrics. Meanwhile, our method also performs well on real-world datasets.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127640925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
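The folding-based densification stage mentioned in the abstract above is not detailed in this listing. As a hedged illustration of the general idea (in the spirit of FoldingNet-style decoders, not the authors' exact architecture), the sketch below upsamples a sparse point cloud by pairing each seed point with a small 2D grid and regressing per-point offsets through a shared MLP. Module names, feature sizes, and the grid resolution are illustrative assumptions.

```python
# Hedged sketch (not the paper's implementation): FoldingNet-style densification,
# where each sparse seed point is expanded into a patch of k points by folding a
# shared 2D grid through an MLP. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FoldingUpsampler(nn.Module):
    def __init__(self, grid_size: int = 4, feat_dim: int = 64):
        super().__init__()
        self.k = grid_size * grid_size          # points generated per seed point
        # Fixed 2D grid in [-0.5, 0.5]^2, shared by every seed point.
        lin = torch.linspace(-0.5, 0.5, grid_size)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u, v], dim=-1).reshape(-1, 2))
        # Per-point feature lifting and offset-regression MLPs.
        self.lift = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, feat_dim))
        self.fold = nn.Sequential(nn.Linear(feat_dim + 2, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, 3))

    def forward(self, sparse_pts: torch.Tensor) -> torch.Tensor:
        # sparse_pts: (B, N, 3) -> dense points: (B, N * k, 3)
        B, N, _ = sparse_pts.shape
        feat = self.lift(sparse_pts)                            # (B, N, F)
        feat = feat.unsqueeze(2).expand(B, N, self.k, -1)       # (B, N, k, F)
        grid = self.grid.view(1, 1, self.k, 2).expand(B, N, -1, -1)
        offsets = self.fold(torch.cat([feat, grid], dim=-1))    # (B, N, k, 3)
        dense = sparse_pts.unsqueeze(2) + offsets               # fold around seeds
        return dense.reshape(B, N * self.k, 3)

if __name__ == "__main__":
    sparse = torch.rand(2, 256, 3)              # a toy sparse cloud
    dense = FoldingUpsampler()(sparse)
    print(dense.shape)                          # torch.Size([2, 4096, 3])
```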
Bionic Robots as a New Alternative to Guided Dogs
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169301
Yilin Cao, Nanlin Jin, Yihong Wang, Chengtao Ji, Yushan Pan
{"title":"Bionic Robots as a New Alternative to Guided Dogs","authors":"Yilin Cao, Nanlin Jin, Yihong Wang, Chengtao Ji, Yushan Pan","doi":"10.1109/ICVR57957.2023.10169301","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169301","url":null,"abstract":"As the robot industry grows, research into biomimetic robots continues to increase. Robot dogs are one of the more researched types. Unlike robotic arms and vehicles, robotic dogs emphasize interaction with people, and therefore their applications are more focused on daily life. Machine guide dogs are one application that makes good use of this feature. This paper describes the use of the robot dog DOGZILLA S1 for route patrol as well as obstacle recognition. Based on this, the robot dog will provide feedback to people, which can be used as pre-research for designing it into a complete guide dog.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"728 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114240016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
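The patrol and feedback logic is not described in this listing. The sketch below is a hedged, vendor-neutral illustration of the kind of sense-decide-feedback loop such a guide-dog prototype might use; the RobotDog class and all of its methods are hypothetical placeholders, not the DOGZILLA S1 SDK.

```python
# Hedged sketch: a generic patrol loop with obstacle feedback for a robot guide dog.
# The RobotDog class below is a hypothetical stand-in, NOT the DOGZILLA S1 API.
import time

class RobotDog:
    """Hypothetical hardware facade; real drivers would replace these stubs."""
    def distance_ahead_m(self) -> float:      # e.g., from an ultrasonic/depth sensor
        return 2.0
    def walk_forward(self) -> None:
        pass
    def stop(self) -> None:
        pass
    def vibrate_handle(self, pattern: str) -> None:   # haptic cue to the user
        print(f"feedback: {pattern}")

def patrol(dog: RobotDog, stop_dist: float = 0.5, warn_dist: float = 1.0,
           steps: int = 10) -> None:
    """Walk a route, stopping and warning the user when an obstacle is near."""
    for _ in range(steps):
        d = dog.distance_ahead_m()
        if d < stop_dist:
            dog.stop()
            dog.vibrate_handle("long")        # strong cue: obstacle, do not proceed
        elif d < warn_dist:
            dog.walk_forward()
            dog.vibrate_handle("short")       # gentle cue: obstacle approaching
        else:
            dog.walk_forward()
        time.sleep(0.1)

if __name__ == "__main__":
    patrol(RobotDog())
```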
A Scene Understanding and Positioning System from RGB Images for Tele-meeting Application in Augmented Reality
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169590
Bo-Hui Wang, Febrina Wijaya, Robin Fischer, Ya-Hui Tang, Shiann-Jang Wang, Wei-En Hsu, L. Fu
{"title":"A Scene Understanding and Positioning System from RGB Images for Tele-meeting Application in Augmented Reality","authors":"Bo-Hui Wang, Febrina Wijaya, Robin Fischer, Ya-Hui Tang, Shiann-Jang Wang, Wei-En Hsu, L. Fu","doi":"10.1109/ICVR57957.2023.10169590","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169590","url":null,"abstract":"As augmented reality (AR) technology advances, there is a growing demand to apply it to various applications. With the outbreak of the COVID-19 epidemic in 2020, online meetings have become increasingly common to prevent human contact, creating an opportunity to implement AR technology and its related applications. In this paper, we propose a new AR solution for tele-meeting applications that combines neural networks and simultaneous localization and mapping (SLAM) to achieve scene understanding and user localization using only RGB images. Our system offers the flexibility to develop the target application solely based on our custom devices/software, rather than relying on existing AR software development kits (SDKs) with their limitations. Existing SDKs, such as ARCore, can only be used on officially certified devices by Google, and developing custom AR kits to resolve compatibility issues among multiple technologies and devices can be challenging and time-consuming. This work presents a new system to address the challenges of scene understanding and user positioning for realizing tele-meeting applications, which can also be used as an AR development kit on any device with a camera. We conducted several experiments on the system modules to verify its performance, and the results show that our system provides an efficient and stable user experience, making it a promising technology and application.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129884277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
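Implementation details are not given in this listing. As a hedged illustration of how per-frame scene understanding can be combined with SLAM output, the sketch below back-projects a semantically labeled pixel into world coordinates using the camera intrinsics and a camera-to-world pose estimated by SLAM, which is one common way to anchor AR content; all names and values are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: anchoring AR content by fusing a 2D detection with a SLAM pose.
# Back-projects a labeled pixel (u, v) with estimated depth d into world space.
import numpy as np

def pixel_to_world(u: float, v: float, depth: float,
                   K: np.ndarray, T_wc: np.ndarray) -> np.ndarray:
    """K: 3x3 camera intrinsics; T_wc: 4x4 camera-to-world pose from SLAM."""
    # Ray in camera coordinates scaled by the estimated depth.
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Transform into world coordinates using the SLAM pose.
    p_world = T_wc @ np.append(p_cam, 1.0)
    return p_world[:3]

if __name__ == "__main__":
    K = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])
    T_wc = np.eye(4)                       # identity pose for the toy example
    # Suppose a "table" pixel was detected at (400, 300) with depth 1.5 m.
    anchor = pixel_to_world(400, 300, 1.5, K, T_wc)
    print(anchor)                          # world-space anchor for virtual content
```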
Exploring the Design Space for Hands-Free Robot Dog Interaction via Augmented Reality
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169556
Ziming Li, Zihui Qin, Yiming Luo, Yushan Pan, Hai-Ning Liang
{"title":"Exploring the Design Space for Hands-Free Robot Dog Interaction via Augmented Reality","authors":"Ziming Li, Zihui Qin, Yiming Luo, Yushan Pan, Hai-Ning Liang","doi":"10.1109/ICVR57957.2023.10169556","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169556","url":null,"abstract":"With the advancement of robotics-related technologies, there is a growing potential for robot dogs to replace real dogs in helping humans at work, in dangerous situations, or as companions. However, at present, robot dogs are mainly used in the engineering sector, for the companionship of older people and children, and to assist disabled people. However, their interaction is very limited. This prevents many user groups from benefiting from having a robot dog. Advances in Augmented Reality (AR) technology offer more possible methods for controlling robot dogs. This work explores the elements that could be designed for user-robot dog interaction via an AR interface. Furthermore, to consider the situation where users are occupied with another task or unable to use their hands when they are with a robot dog (e.g., jogging or walking with their hands holding items), we focus on hands-free interaction. To accomplish this exploration, we deconstruct the control process and develop a design space for hands-free interaction via AR. This space can help designers identify and use all potential options when designing AR interfaces for effective human-robot dog interaction. We also present a demonstration to illustrate how to use our design space.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130694971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
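The paper's actual design space is not reproduced in this listing. The sketch below only illustrates how such a space can be represented and enumerated programmatically, pairing hands-free input modalities with robot-dog command dimensions; the specific dimensions and values are illustrative assumptions, not the authors' taxonomy.

```python
# Hedged sketch: representing a hands-free interaction design space as dimensions
# and enumerating candidate combinations. Dimension values are illustrative only.
from itertools import product

DESIGN_SPACE = {
    "input_modality": ["gaze", "voice", "head_gesture"],       # hands-free inputs
    "command_type":   ["navigation", "posture", "follow_me"],  # what the dog does
    "ar_feedback":    ["world_anchored_cue", "hud_status", "audio_only"],
}

def enumerate_designs(space: dict[str, list[str]]):
    """Yield every combination of design choices as a dict."""
    keys = list(space)
    for values in product(*space.values()):
        yield dict(zip(keys, values))

if __name__ == "__main__":
    designs = list(enumerate_designs(DESIGN_SPACE))
    print(len(designs))          # 27 candidate interaction designs
    print(designs[0])
```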
Exploring Selection and Search Usability Across Desktop, Tablet, and Head-Mounted Display WebXR Platforms
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169549
Anthony Scavarelli, Robert J. Teather, A. Arya
{"title":"Exploring Selection and Search Usability Across Desktop, Tablet, and Head-Mounted Display WebXR Platforms","authors":"Anthony Scavarelli, Robert J. Teather, A. Arya","doi":"10.1109/ICVR57957.2023.10169549","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169549","url":null,"abstract":"We present a comparative evaluation of a VR learning application on desktop, head-mounted displays, and tablet platforms. We first evaluated fundamental interaction, including selection and search, and general usability across these platforms using Circles, our custom-built WebXR application. We developed two virtual environments for the study: (1) a selection and search testbed, and (2) a virtual learning environment developed for use within a post-secondary gender diversity workshop. Performance and general usability results were consistent with past studies, suggesting that WebXR offers adequate performance to support learning applications. However, designing a compelling user experience in VR remains challenging, although web-based VR offers accessibility benefits due to its multi-platform design. Finally, as this study was conducted remotely during the COVID-19 pandemic, we also reflect on how our system and study accommodate remote participation, similar to a traditionally lab-based experience.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133424216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
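The listing does not include the evaluation details. As a hedged illustration of the kind of selection metrics such usability studies commonly report, the sketch below computes mean completion time, error rate, and a Fitts-style throughput from hypothetical per-trial logs; the log fields and the way the metrics are applied are assumptions, not the authors' analysis.

```python
# Hedged sketch: typical selection-task metrics (mean time, error rate,
# Fitts-style throughput) from hypothetical trial logs. Not the paper's analysis.
import math
from statistics import mean

trials = [  # hypothetical logs: target distance D, target width W, time (s), hit?
    {"D": 0.60, "W": 0.05, "time": 1.10, "hit": True},
    {"D": 0.30, "W": 0.10, "time": 0.70, "hit": True},
    {"D": 0.90, "W": 0.05, "time": 1.60, "hit": False},
]

def index_of_difficulty(d: float, w: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(d / w + 1.0)

mean_time = mean(t["time"] for t in trials)
error_rate = 1.0 - mean(t["hit"] for t in trials)
throughput = mean(index_of_difficulty(t["D"], t["W"]) / t["time"] for t in trials)

print(f"mean time {mean_time:.2f} s, errors {error_rate:.0%}, "
      f"throughput {throughput:.2f} bits/s")
```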
MultiBrush: 3D Brush Painting Using Multiple Viewpoints in Virtual Reality
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169798
Mikiya Kusunoki, Ryo Furuhama, Ryusuke Toshima, Hazuki Mori, Haoran Xie, Tzu-Yang Wang, T. Yuizono, Toshiki Sato, K. Miyata
{"title":"MultiBrush: 3D Brush Painting Using Multiple Viewpoints in Virtual Reality","authors":"Mikiya Kusunoki, Ryo Furuhama, Ryusuke Toshima, Hazuki Mori, Haoran Xie, Tzu-Yang Wang, T. Yuizono, Toshiki Sato, K. Miyata","doi":"10.1109/ICVR57957.2023.10169798","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169798","url":null,"abstract":"Recently, 3D modeling using the brush tool in virtual reality (VR) has been successfully explored. However, it is still difficult and time-consuming for the user to paint the designed target while seeing around the object and adjusting the object scales multiple times by grasping the entire object. To reduce the burden on users of 3D painting brushes in the VR space, we propose MultiBrush, a design approach that provides additional windows in multiple viewpoints of the target workspace. The proposed approach adopts the viewpoint opposite the object to be designed, and the bird’s-eye viewpoint captures the entire object to the normal viewpoint and displays images by superimposing the images from other viewpoints. To verify the proposed system, we conducted evaluation experiments for system effectiveness and questionnaires to obtain the subjective feedback.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133440647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
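How the extra viewpoints are placed is not specified in this listing. As a purely illustrative, hedged sketch of the geometric idea, the code below positions two auxiliary cameras, one mirrored to the opposite side of the object and one overhead bird's-eye view, given the user's head position; the function, coordinate convention, and heights are assumptions, not the MultiBrush implementation.

```python
# Hedged sketch: placing two auxiliary viewpoints (opposite side and bird's-eye)
# around a painted object, given the user's head position. Illustrative only;
# not the MultiBrush implementation.
import numpy as np

def auxiliary_viewpoints(obj_center: np.ndarray, head_pos: np.ndarray,
                         height: float = 2.0) -> dict[str, np.ndarray]:
    """Return camera positions that look back at obj_center from extra angles."""
    to_head = head_pos - obj_center
    opposite = obj_center - to_head              # mirror the user across the object
    birds_eye = obj_center + np.array([0.0, height, 0.0])  # directly above (y-up)
    return {"opposite": opposite, "birds_eye": birds_eye}

if __name__ == "__main__":
    views = auxiliary_viewpoints(np.array([0.0, 1.0, 0.0]),   # object center
                                 np.array([0.0, 1.6, 1.5]))   # user's head
    for name, pos in views.items():
        print(name, pos)   # each auxiliary camera would render the object and be
                           # composited into the user's view as a small window
```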
VR-Assisted Healing: VR Content Creation Cuts through the Psychological Healing Process
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169380
Aokun Yu, Cheng Zhang, JinNi Huang, YuTian Yi
{"title":"VR-Assisted Healing: VR CONTENT Creation Cuts through the Psychological Healing Process","authors":"Aokun Yu, Cheng Zhang, JinNi Huang, YuTian Yi","doi":"10.1109/ICVR57957.2023.10169380","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169380","url":null,"abstract":"VR has attracted attention in the field of psychotherapy for its unique immersive embodiment effect. The effectiveness of virtual healing technologies for broad psychological influence has been supported by a number of experiments. However, the development of VR in the field of psychotherapy is constrained by the barrier between psychology and VR content creation. This paper starts with the process of psychological counseling. concentrating on the function of VR as a support for psychotherapy and exploring the entry point of VR content development in treatment.The diversity and professionalism of VR psychological scenes determine the creation of VR healing psychological scenes, which have a greater potential for development in both daily mental relief and professional psychotherapy. On this basis, a systematic resource base for VR psychological scene production and AI algorithms will be the key to promoting VR psychological healing creation.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130076825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detecting Zero-Shot Human-Object Interaction with Visual-Text Modeling
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169554
Haozhong Wang, Hua Yu, Qiang Zhang
{"title":"Detecting Zero-Shot Human-Object Interaction with Visual-Text Modeling","authors":"Haozhong Wang, Hua Yu, Qiang Zhang","doi":"10.1109/ICVR57957.2023.10169554","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169554","url":null,"abstract":"Most existing Human-Object Interaction (HOI) detection methods focus on supervised learning, but labeling all interactions is costly because of the enormous possible combinations of objects and verbs. Zero-shot HOI detection emerges as a promising approach to address this problem but encounters challenges when facing unseen interactions. To this end, we propose a novel two-stage Visual-Text modeling HOI detection (VT-HOI) method which can effectively recognize both seen and unseen interactions. In the first stage, the features of the humans and the objects are extracted by DETR and concatenated as the query sequences. In the second stage, local and global memory features from the Visual Encoder are fused into the corresponding query sequences by our proposed Semantic Representation Decoder with the cross-attention mechanism. Then we perform cosine similarity computation between visual features and text features, which are extracted or label-generated by Visual Representation Head (VRH) and Text Feature Memory (TFM) module respectively. Finally, the similarity matrix is fused with the results of the classification head for training or inference. The comprehensive experiments conducted on HICO-DET datasets demonstrate that the proposed VT-HOI significantly outperforms the state-of-the-art methods.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130719453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
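The full VT-HOI architecture is not reproduced in this listing. The sketch below only illustrates the final step the abstract describes: scoring interactions by cosine similarity between visual features and text features of candidate verb-object labels, which is the standard mechanism that lets unseen combinations be recognized. Tensor shapes, the temperature value, and the example labels are assumptions, not the authors' configuration.

```python
# Hedged sketch: zero-shot interaction scoring by cosine similarity between a
# visual feature and text embeddings of candidate "verb object" labels.
# Illustrative only; not the VT-HOI implementation.
import torch
import torch.nn.functional as F

def score_interactions(visual_feat: torch.Tensor,
                       text_feats: torch.Tensor,
                       temperature: float = 0.07) -> torch.Tensor:
    """visual_feat: (B, D); text_feats: (C, D) for C candidate interaction labels."""
    v = F.normalize(visual_feat, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    sim = v @ t.T                      # cosine similarity matrix, (B, C)
    return (sim / temperature).softmax(dim=-1)

if __name__ == "__main__":
    labels = ["ride bicycle", "hold cup", "feed horse"]   # seen and unseen combos
    visual = torch.randn(2, 512)          # e.g., fused human-object query features
    text = torch.randn(len(labels), 512)  # e.g., from a pretrained text encoder
    probs = score_interactions(visual, text)
    print(probs.argmax(dim=-1))           # best-matching interaction per query
```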
Bridge Crane Co-simulation Based on Solidworks/Adams/Matlab
2023 9th International Conference on Virtual Reality (ICVR) Pub Date : 2023-05-12 DOI: 10.1109/ICVR57957.2023.10169531
Xiuxian Yang, Yuchi Cao, Tie-shan Li, Qihe Shan
{"title":"Bridge Crane Co-simulation Based on Solidworks/Adams/Matlab","authors":"Xiuxian Yang, Yuchi Cao, Tie-shan Li, Qihe Shan","doi":"10.1109/ICVR57957.2023.10169531","DOIUrl":"https://doi.org/10.1109/ICVR57957.2023.10169531","url":null,"abstract":"The bridge crane body is modeled by using Solidworks software, the hoisting rope is simultaneously modeled based on Solidworks and Adams, the bridge crane dynamics is then analyzed by resorting to Adams, and the bridge crane controller is further designed by Matlab. The co-simulation of Solidworks/Adams/Matlab is finally implemented, and the co-simulation results are compared with Simulink simulation results, which verifies that Solidworks/Adams/Matlab co-simulation is feasible. This research method of co-simulation has greatly shortened the research and development cycle and reduced the research and development cost, which plays an essential role in the field of crane research and development.","PeriodicalId":439483,"journal":{"name":"2023 9th International Conference on Virtual Reality (ICVR)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131107457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
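The plant and controller models are not given in this listing. As a hedged, simplified analogue of the kind of simulation the co-simulation results would be compared against, the sketch below integrates a planar trolley-and-pendulum crane model under a PD trolley-position controller with explicit Euler steps; the dynamics are a textbook small-angle approximation and the gains are arbitrary, not the authors' Adams or Simulink models.

```python
# Hedged sketch: a simplified planar crane (trolley + pendulum payload) driven by
# a PD position controller, integrated with explicit Euler. Textbook small-angle
# approximation for illustration only; not the paper's Adams/Simulink models.
import math

M, L, g = 10.0, 2.0, 9.81          # trolley mass (kg), rope length (m), gravity
kp, kd = 20.0, 30.0                # PD gains on trolley position error
x_ref = 1.0                        # desired trolley position (m)

x = v = theta = omega = 0.0        # trolley pos/vel, payload swing angle/rate
dt, T = 0.001, 10.0

for step in range(int(T / dt)):
    u = kp * (x_ref - x) - kd * v              # PD control force on the trolley
    a = u / M                                  # trolley acceleration (decoupled)
    alpha = -(g * theta + a) / L               # small-angle payload swing dynamics
    x += v * dt
    v += a * dt
    theta += omega * dt
    omega += alpha * dt
    if step % 2000 == 0:
        print(f"t={step*dt:4.1f}s  x={x:5.2f} m  swing={math.degrees(theta):6.2f} deg")
```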