2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) — Latest Publications

Designing Viewpoint Transition Techniques in Multiscale Virtual Environments
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00083
Jong-In Lee, P. Asente, W. Stuerzlinger
Abstract: Viewpoint transitions have been shown to improve users' spatial orientation and help them build a cognitive map when they are navigating an unfamiliar virtual environment. Previous work has investigated transitions in single-scale virtual environments, focusing on trajectories and continuity. We extend this work with an in-depth investigation of transition techniques in multiscale virtual environments (MVEs). We identify challenges in navigating MVEs with nested structures and assess how different transition techniques affect spatial understanding and usability. Through two user studies, we investigated transition trajectories, interactive control of transition movement, and speed modulation in a nested MVE. We show that some types of viewpoint transitions enhance users' spatial awareness and confidence in their spatial orientation and reduce the need to revisit a target point of interest multiple times.
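The paper's implementation is not included in the abstract. As a rough illustration of speed modulation in a multiscale transition, a viewpoint can interpolate position with an ease-in/ease-out profile while interpolating scale logarithmically, so the perceived zoom rate stays constant across nested scale levels (function names and the blend profile here are assumptions, not the authors' technique):

```python
import math

def smoothstep(t):
    """Ease-in/ease-out profile: zero velocity at both endpoints."""
    return t * t * (3.0 - 2.0 * t)

def transition(pos_a, pos_b, scale_a, scale_b, t):
    """Interpolate a viewpoint between two nested scale levels.

    Position is blended with the eased parameter; scale is interpolated
    in log space so a 1x->100x zoom feels uniform rather than exploding
    near the end.
    """
    s = smoothstep(t)
    pos = tuple(a + (b - a) * s for a, b in zip(pos_a, pos_b))
    log_scale = math.log(scale_a) + (math.log(scale_b) - math.log(scale_a)) * s
    return pos, math.exp(log_scale)
```

At the halfway point of a 1x-to-100x transition this yields a 10x scale, the geometric mean, rather than the 50.5x a linear blend would give.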
Citations: 0
Style-aware Augmented Virtuality Embeddings (SAVE)
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00032
John L. Hoster, Dennis Ritter, Kristian Hildebrand
Abstract: We present an augmented virtuality (AV) pipeline that enables the user to interact with real-world objects through stylised representations which match the VR scene and thereby preserve immersion. It consists of three stages: First, the object of interest is reconstructed from images and corresponding camera poses recorded with the VR headset, or alternatively a retrieval model finds a fitting mesh from the ShapeNet dataset. Second, a style transfer technique adapts the mesh to the VR game scene in order to preserve consistent immersion. Third, the stylised mesh is superimposed on the real object in real time to ensure interactivity even if the real object is moved. Our pipeline serves as proof of concept for style-aware AV embeddings.
Citations: 0
Virtual Reality in Supporting Charitable Giving: The Role of Vicarious Experience, Existential Guilt, and Need for Stimulation
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00079
Ou Li, Han Qiu
Abstract: Although a growing number of charities have used virtual reality (VR) technology for fundraising activities, with better results than ever before, little research has been undertaken on what factors make VR beneficial in supporting charitable giving. The primary goal of this study is to investigate the underlying mechanism of VR in supporting charitable giving, which extends the current literature on VR and donation behaviors. The findings of this study indicated that VR charitable appeals increase actual money donations when compared to the traditional two-dimensional (2D) format and that this effect is achieved through a serial mediating effect of vicarious experience and existential guilt. Findings also identify the need for stimulation as a boundary condition, indicating that those with a higher (vs. lower) need for stimulation were more (vs. less) affected by the mediating mechanism of VR charitable appeals on donations. This work contributes to our understanding of the relationship between VR technology and charitable giving, as well as to future research on VR and its prosocial applications.
Citations: 0
Delta Path Tracing for Real-Time Global Illumination in Mixed Reality
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00020
Yang Xu, Yu Jiang, Shibo Wang, Kang Li, Guohua Geng
Abstract: Visual coherence between real and virtual objects is important in mixed reality (MR), and illumination consistency is one of the key aspects to achieve coherence. Apart from matching the illumination of the virtual objects with the real environments, the change of illumination on the real scenes produced by the inserted virtual objects should also be considered but is difficult to compute in real-time due to the heavy computation demands of global illumination. In this work, we propose delta path tracing (DPT), which only computes the radiance blocked by the virtual objects from the light sources at the primary hit points of Monte Carlo path tracing, then combines the blocked radiance and multi-bounce indirect illumination with the image of the real scene. Multiple importance sampling (MIS) between BRDF and environment map is performed to handle all-frequency environment maps captured by a panorama camera. Compared to conventional differential rendering methods, our method can remarkably reduce the number of times required to access the environment map and avoid rendering scenes twice. Therefore, the performance can be significantly improved. We implement our method using hardware-accelerated ray tracing on modern GPUs, and the results demonstrate that our method can render global illumination at real-time frame rates and produce plausible visual coherence between real and virtual objects in MR environments.
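The abstract's MIS between BRDF and environment-map sampling is conventionally done with Veach's balance heuristic; the following is a generic sketch of that standard weighting, not the paper's own code:

```python
def balance_heuristic(pdf_a, pdf_b):
    """Veach's balance heuristic: weight for a sample drawn from strategy A
    when strategy B could also have generated the same direction."""
    if pdf_a + pdf_b == 0.0:
        return 0.0
    return pdf_a / (pdf_a + pdf_b)

def mis_contribution(radiance, pdf_brdf, pdf_env, sampled_from_brdf=True):
    """Weighted contribution of one sample under two-strategy MIS.

    The sample's own pdf divides the radiance (standard Monte Carlo
    estimator); the balance heuristic downweights directions the other
    strategy would also have sampled, keeping the combined estimator unbiased.
    """
    p = pdf_brdf if sampled_from_brdf else pdf_env
    other = pdf_env if sampled_from_brdf else pdf_brdf
    if p <= 0.0:
        return 0.0
    return radiance * balance_heuristic(p, other) / p
```

When both strategies are equally likely to produce a direction, each sample contributes half weight, so summing the two estimators recovers the full integrand.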
Citations: 0
Investigating Guardian Awareness Techniques to Promote Safety in Virtual Reality
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00078
Sixuan Wu, Jiannan Li, Maurício Sousa, Tovi Grossman
Abstract: Virtual Reality (VR) can completely immerse users in a virtual world and provide little awareness of bystanders in the surrounding physical environment. Current technologies use predefined guardian area visualizations to set safety boundaries for VR interactions. However, bystanders cannot perceive these boundaries and may collide with VR users if they accidentally enter guardian areas. In this paper, we investigate four awareness techniques on mobile phones and smartwatches to help bystanders avoid invading guardian areas. These techniques include augmented reality boundary overlays and visual, auditory, and haptic alerts indicating bystanders' distance from guardians. Our findings suggest that the proposed techniques effectively keep participants clear of the safety boundaries. More specifically, using augmented reality overlays, participants could avoid guardians with less time, and haptic alerts caused less distraction.
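The distance-based alerts described above hinge on knowing how far a bystander is from the guardian boundary. A minimal sketch of that computation on 2-D floor coordinates, with made-up alert thresholds (the paper's actual thresholds and alert design are not given in the abstract):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (2-D floor coordinates)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0.0 and dy == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def distance_to_guardian(p, polygon):
    """Minimum distance from a bystander to the guardian boundary polygon."""
    n = len(polygon)
    return min(point_segment_distance(p, polygon[i], polygon[(i + 1) % n])
               for i in range(n))

def alert_level(distance, warn=1.0, danger=0.3):
    """Map distance (metres) to an alert level; thresholds are illustrative."""
    if distance <= danger:
        return "haptic+visual"
    if distance <= warn:
        return "visual"
    return "none"
```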
Citations: 0
iARVis: Mobile AR Based Declarative Information Visualization Authoring, Exploring and Sharing
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00017
Junjie Chen, Chenhui Li, Sicheng Song, Changbo Wang
Abstract: We present iARVis, a proof-of-concept toolkit for creating, experiencing, and sharing mobile AR-based information visualization environments. Over the past years, AR has emerged as a promising medium for information and data visualization beyond the physical media and the desktop, enabling interactivity and eliminating spatial limits. However, the creation of such environments remains difficult and frequently necessitates low-level programming expertise and lengthy hand encodings. We present a declarative approach for defining the augmented reality (AR) environment, including how information is automatically positioned, laid out, and interacted with, to improve the efficiency and flexibility of constructing AR-based information visualization environments. We provide fundamental layout and visual components such as the grid, rich text, images, and charts for the development of complex visualization widgets, as well as automatic targeting methods based on image and object tracking for the development of the AR environment. To increase design efficiency, we also provide features such as hot-reload and several creation levels for both novice and advanced users. We also investigate how the augmented reality-based visualization environment could persist and be shared through the internet and provide ways for storing, sharing, and restoring the environment to give a continuous and seamless experience. To demonstrate the viability and extensibility, we evaluate iARVis using a variety of use cases along with performance evaluation and expert reviews.
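The abstract does not reproduce iARVis's grammar, so purely as an illustration of the declarative idea, a widget specification might pair a tracking target with a layout of components. Every key below is hypothetical:

```python
# Hypothetical declarative spec in the spirit of iARVis: an image-tracked
# widget containing a grid with a title and a chart. None of these keys
# come from the paper; they only illustrate the declarative style.
widget_spec = {
    "anchor": {"type": "image-tracking", "target": "poster.png"},
    "layout": {
        "type": "grid",
        "rows": 2,
        "items": [
            {"type": "text", "content": "Sales by Quarter", "style": "title"},
            {"type": "chart", "mark": "bar",
             "data": {"url": "sales.json"},
             "encoding": {"x": "quarter", "y": "revenue"}},
        ],
    },
    "sharing": {"persist": True, "url": "https://example.com/widget/42"},
}
```

A runtime would resolve the anchor via image tracking, lay the items out in the grid, and rebuild the widget on hot-reload whenever the spec changes.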
Citations: 0
RemoteTouch: Enhancing Immersive 3D Video Communication with Hand Touch
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-02-28 DOI: 10.1109/VR55154.2023.00016
Yizhong Zhang, Zhiqi Li, Sicheng Xu, Chong Li, Jiaolong Yang, Xin Tong, B. Guo
Abstract: Recent research advance has significantly improved the visual realism of immersive 3D video communication. In this work we present a method to further enhance this immersive experience by adding the hand touch capability ("remote hand clapping"). In our system, each meeting participant sits in front of a large screen with haptic feedback. The local participant can reach his hand out to the screen and perform hand clapping with the remote participant as if the two participants were only separated by a virtual glass. A key challenge in emulating the remote hand touch is the realistic rendering of the participant's hand and arm as the hand touches the screen. When the hand is very close to the screen, the RGBD data required for realistic rendering is no longer available. To tackle this challenge, we present a dual representation of the user's hand. Our dual representation not only preserves the high-quality rendering usually found in recent image-based rendering systems but also allows the hand to reach to the screen. This is possible because the dual representation includes both an image-based model and a 3D geometry-based model, with the latter driven by a hand skeleton tracked by a side view camera. In addition, the dual representation provides a distance-based fusion of the image-based and 3D geometry-based models as the hand moves closer to the screen. The result is that the image-based and 3D geometry-based models mutually enhance each other, leading to realistic and seamless rendering. Our experiments demonstrate that our method provides consistent hand contact experience between remote users and improves the immersive experience of 3D video communication.
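The distance-based fusion described above can be sketched as a blend weight that favors the geometry-based model near the screen and the image-based model farther away. The linear ramp and the threshold distances below are assumptions for illustration, not the paper's calibrated values:

```python
def fusion_weight(distance, near=0.05, far=0.30):
    """Weight of the geometry-based hand model: 1 at/inside `near` metres
    from the screen (where RGBD data is unavailable), 0 beyond `far`
    (where image-based rendering is reliable), linear in between."""
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.0
    return (far - distance) / (far - near)

def fuse_pixel(image_rgb, geometry_rgb, distance):
    """Per-pixel blend of the two hand representations at a given
    hand-to-screen distance."""
    w = fusion_weight(distance)
    return tuple(w * g + (1.0 - w) * i for i, g in zip(image_rgb, geometry_rgb))
```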
Citations: 0
Real-Time Recognition of In-Place Body Actions and Head Gestures using Only a Head-Mounted Display
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-02-25 DOI: 10.1109/VR55154.2023.00026
Jingbo Zhao, Mingjun Shao, Yaojun Wang, Ruolin Xu
Abstract: Body actions and head gestures are natural interfaces for interaction in virtual environments. Existing methods for in-place body action recognition often require hardware more than a head-mounted display (HMD), making body action interfaces difficult to be introduced to ordinary virtual reality (VR) users as they usually only possess an HMD. In addition, there lacks a unified solution to recognize in-place body actions and head gestures. This potentially hinders the exploration of the use of in-place body actions and head gestures for novel interaction experiences in virtual environments. We present a unified two-stream 1-D convolutional neural network (CNN) for recognition of body actions when a user performs walking-in-place (WIP) and for recognition of head gestures when a user stands still wearing only an HMD. Compared to previous approaches, our method does not require specialized hardware and/or additional tracking devices other than an HMD and can recognize a significantly larger number of body actions and head gestures than other existing methods. In total, ten in-place body actions and eight head gestures can be recognized with the proposed method, which makes this method a readily available body action interface (head gestures included) for interaction with virtual environments. We demonstrate one utility of the interface through a virtual locomotion task. Results show that the present body action interface is reliable in detecting body actions for the VR locomotion task but is physically demanding compared to a touch controller interface. The present body action interface is promising for new VR experiences and applications, especially for VR fitness applications where workouts are intended.
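The two-stream 1-D CNN idea can be sketched in NumPy: each stream convolves one group of HMD time-series channels (e.g., position vs. orientation — the actual stream split, filter counts, and pooling are not given in the abstract and are assumed here), and the pooled features are concatenated for a downstream classifier:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution of a (channels, time) signal with
    (n_filters, channels, width) kernels, followed by ReLU."""
    n_f, _, w = kernels.shape
    t_out = x.shape[1] - w + 1
    out = np.zeros((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + w])
    return np.maximum(out, 0.0)

def two_stream_features(pos, rot, k_pos, k_rot):
    """Run each stream through its own conv layer and global average
    pooling, then concatenate into one feature vector (late fusion)."""
    f_pos = conv1d(pos, k_pos).mean(axis=1)
    f_rot = conv1d(rot, k_rot).mean(axis=1)
    return np.concatenate([f_pos, f_rot])
```

In practice each stream would stack several such layers and feed a softmax over the 18 action/gesture classes; this sketch only shows the two-stream fusion structure.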
Citations: 0
An EEG-based Experiment on VR Sickness and Postural Instability While Walking in Virtual Environments
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-02-22 DOI: 10.1109/VR55154.2023.00025
C. A. T. Cortes, Chin-Teng Lin, Tien-Thong Nguyen Do, Hsiang-Ting Chen
Abstract: Previous studies showed that natural walking reduces the susceptibility to VR sickness. However, many users still experience VR sickness when wearing VR headsets that allow free walking in room-scale spaces. This paper studies VR sickness and postural instability while the user walks in an immersive virtual environment using an electroencephalogram (EEG) headset and a full-body motion capture system. The experiment induced VR sickness by gradually increasing the translation gain beyond the user's detection threshold. A between-group comparison between participants with and without VR sickness symptoms found some significant differences in postural stability but found none on gait patterns during the walking. In the EEG analysis, the group with VR sickness showed a reduction of alpha power, a phenomenon previously linked to a higher workload and efforts to maintain postural control. In contrast, the group without VR sickness exhibited brain activities linked to fine cognitive-motor control. The EEG result provides new insights into the postural instability theory: participants with VR sickness could maintain their postural stability at the cost of a higher cognitive workload. Our result also indicates that the analysis of lower-frequency power could complement behavioral data for continuous VR sickness detection in both stationary and mobile VR setups.
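The alpha-power reduction reported above is a standard band-power measurement. A minimal sketch of alpha band power (8-13 Hz) from a Hann-windowed periodogram, which is one common way to compute it (the paper's exact EEG processing pipeline is not specified in the abstract):

```python
import numpy as np

def bandpower(signal, fs, lo=8.0, hi=13.0):
    """Power of an EEG channel in a frequency band (default: alpha, 8-13 Hz),
    summed over periodogram bins inside the band."""
    n = len(signal)
    windowed = signal * np.hanning(n)           # taper to reduce leakage
    psd = np.abs(np.fft.rfft(windowed)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum()
```

A continuous sickness detector along the lines suggested by the abstract could track this value over sliding windows and flag sustained drops relative to a per-user baseline.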
Citations: 2
MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date : 2023-02-15 DOI: 10.1109/VR55154.2023.00059
Catarina G. Fidalgo, Maurício Sousa, Daniel Mendes, R. K. D. Anjos, Daniel Medeiros, K. Singh, Joaquim Jorge
Abstract: Remote collaborative work has become pervasive in many settings, ranging from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example to assist in the design of new devices such as customized prosthetics, vehicles or buildings. Discussing such shared 3D content face-to-face, however, has a variety of challenges such as ambiguities, occlusions, and different viewpoints that all decrease mutual awareness, which in turn leads to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. To measure what two users perceive in common when using pointing gestures in a shared 3D space, we introduce a novel metric called pointing agreement. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness.
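MAGIC's actual distortion is not specified in the abstract; the core idea of re-aiming a rendered gesture can be sketched as placing the remote avatar's hand on the ray from its shoulder to the intended target in the local reference frame, preserving the original arm extension (all of this is an illustrative assumption, not the authors' algorithm):

```python
import numpy as np

def redirected_hand(shoulder_local, target, arm_length):
    """Place the rendered hand on the shoulder-to-target ray, preserving
    the original arm extension, so the avatar visibly points at `target`
    from the local user's viewpoint."""
    shoulder = np.asarray(shoulder_local, dtype=float)
    direction = np.asarray(target, dtype=float) - shoulder
    direction /= np.linalg.norm(direction)
    return shoulder + arm_length * direction
```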
Citations: 2