IEEE Transactions on Visualization and Computer Graphics: Latest Articles

BoundaryScreen: Summoning the Home Screen in VR via Walking Outward.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549536
Yang Tian, Xingjia Hao, Jianchun Su, Wei Sun, Yangjian Pan, Yunhai Wang, Minghui Sun, Teng Han, Ningjiang Chen
{"title":"BoundaryScreen: Summoning the Home Screen in VR via Walking Outward.","authors":"Yang Tian, Xingjia Hao, Jianchun Su, Wei Sun, Yangjian Pan, Yunhai Wang, Minghui Sun, Teng Han, Ningjiang Chen","doi":"10.1109/TVCG.2025.3549536","DOIUrl":"10.1109/TVCG.2025.3549536","url":null,"abstract":"<p><p>A safety boundary wall in VR is a virtual barrier that defines a safe area, allowing users to navigate and interact without safety concerns. However, existing implementations neglect to utilize the safety boundary wall's large surface for displaying interactive information. In this work, we propose the BoundaryScreen technique based on the \"walking outward\" metaphor to add interactivity to the safety boundary wall. Specifically, we augment the safety boundary wall by placing the home screen on it. To summon the home screen, the user only needs to walk outward until it appears. Results showed that (i) participants significantly preferred BoundaryScreen in the outermost two-step-wide ring-shaped section of a circular safety area; and (ii) participants exhibited strong \"behavioral inertia\" for walking, i.e., after completing a routine activity involving constant walking, participants significantly preferred to use the walking-based BoundaryScreen technique to summon the home screen.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PwP: Permutating with Probability for Efficient Group Selection in VR.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549560
Jian Wu, Weicheng Zhang, Handong Chen, Wei Lin, Xuehuai Shi, Lili Wang
{"title":"PwP: Permutating with Probability for Efficient Group Selection in VR.","authors":"Jian Wu, Weicheng Zhang, Handong Chen, Wei Lin, Xuehuai Shi, Lili Wang","doi":"10.1109/TVCG.2025.3549560","DOIUrl":"10.1109/TVCG.2025.3549560","url":null,"abstract":"<p><p>Group selection in virtual reality is an important means of multi-object selection, which allows users to quickly group multiple objects and can significantly improve the operation efficiency of multiple types of objects. In this paper, we propose a group selection method based on multiple rounds of probability permutation, in which the efficiency of group selection is substantially improved by making the object layout of the next round easier to be batch-selected through interactive selection, object grouping probability computation, and position rearrangement in each round of the selection process. We conducted ablation experiments to determine the algorithm coefficients and validate the effectiveness of the algorithm. In addition, an empirical user study was conducted to evaluate the ability of our method to significantly improve the efficiency of the group selection task in an immersive virtual reality environment. The reduced operations also indirectly reduce the user task load and improve usability.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Trust in Virtual Agents: Exploring the Role of Stylization and Voice.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549566
Yang Gao, Yangbin Dai, Guangtao Zhang, Honglei Guo, Fariba Mostajeran, Binge Zheng, Tao Yu
{"title":"Trust in Virtual Agents: Exploring the Role of Stylization and Voice.","authors":"Yang Gao, Yangbin Dai, Guangtao Zhang, Honglei Guo, Fariba Mostajeran, Binge Zheng, Tao Yu","doi":"10.1109/TVCG.2025.3549566","DOIUrl":"10.1109/TVCG.2025.3549566","url":null,"abstract":"<p><p>With the continuous advancement of artificial intelligence technology, data-driven methods for reconstructing and animating virtual agents have achieved increasing levels of realism. However, there is limited research on how these novel data-driven methods, combined with voice cues, affect user perceptions. We use advanced data-driven methods to reconstruct stylized agents and combine them with synthesized voices to study their effects on users' trust and other perceptions (e.g. social presence and empathy). Through an experiment with 27 participants, our findings reveal that stylized virtual agents enhance user trust to a degree comparable to real style, while voice has a negligible effect on trust. Additionally, elder agents are more likely to be trusted. The style of the agents also plays a key role in participants' perceived realism, and audio-visual matching significantly enhances perceived empathy. These results provide new insights into designing trustworthy virtual agents and further support and validate the audio-visual integration theory.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editable Mesh Animations Modeling Based on Controlable Particles for Real-Time XR.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549573
Xiangyang Zhou, Yanrui Xu, Chao Yao, Xiaokun Wang, Xiaojuan Ban
{"title":"Editable Mesh Animations Modeling Based on Controlable Particles for Real-Time XR.","authors":"Xiangyang Zhou, Yanrui Xu, Chao Yao, Xiaokun Wang, Xiaojuan Ban","doi":"10.1109/TVCG.2025.3549573","DOIUrl":"10.1109/TVCG.2025.3549573","url":null,"abstract":"<p><p>The real-time generation of editable mesh animations in XR applications has been a focal point of research in the XR field. However, easily controlling the generated editable meshes remains a significant challenge. Existing methods often suffer from slow generation speeds and suboptimal results, failing to accurately simulate target objects' complex details and shapes, which does not meet user expectations. Additionally, the final generated meshes typically require manual user adjustments, and it is difficult to generate multiple target models simultaneously. To overcome these limitations, a universal control scheme for particles based on the sampling features of the target is proposed. It introduces a spatially adaptive control algorithm for particle coupling by adjusting the magnitude of control forces based on the spatial features of model sampling, thereby eliminating the need for parameter dependency and enabling the control of multiple types of models within the same scene. We further introduce boundary correction techniques to improve the precision in generating target shapes while reducing particle splashing. Moreover, a distance-adaptive particle fragmentation mechanism prevents unnecessary particle accumulation. Experimental results demonstrate that the method has better performance in controlling complex structures and generating multiple targets at the same time compared to existing methods. It enhances control accuracy for complex structures and targets under the condition of sparse model sampling. It also consistently delivers outstanding results while maintaining high stability and efficiency. 
Ultimately, we were able to create a set of smooth editable meshes and developed a solution for integrating this algorithm into VR and AR animation applications.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FovealNet: Advancing AI-Driven Gaze Tracking Solutions for Efficient Foveated Rendering in Virtual Reality.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549577
Wenxuan Liu, Budmonde Duinkharjav, Qi Sun, Sai Qian Zhang
{"title":"FovealNet: Advancing AI-Driven Gaze Tracking Solutions for Efficient Foveated Rendering in Virtual Reality.","authors":"Wenxuan Liu, Budmonde Duinkharjav, Qi Sun, Sai Qian Zhang","doi":"10.1109/TVCG.2025.3549577","DOIUrl":"10.1109/TVCG.2025.3549577","url":null,"abstract":"<p><p>Leveraging real-time eye tracking, foveated rendering optimizes hardware efficiency and enhances visual quality virtual reality (VR). This approach leverages eye-tracking techniques to determine where the user is looking, allowing the system to render high-resolution graphics only in the foveal region-the small area of the retina where visual acuity is highest, while the peripheral view is rendered at lower resolution. However, modern deep learning-based gaze-tracking solutions often exhibit a long-tail distribution of tracking errors, which can degrade user experience and reduce the benefits of foveated rendering by causing misalignment and decreased visual quality. This paper introduces FovealNet, an advanced AI-driven gaze tracking framework designed to optimize system performance by strategically enhancing gaze tracking accuracy. To further reduce the implementation cost of the gaze tracking algorithm, FovealNet employs an event-based cropping method that eliminates over 64.8% of irrelevant pixels from the input image. Additionally, it incorporates a simple yet effective token-pruning strategy that dynamically removes tokens on the fly without compromising tracking accuracy. Finally, to support different runtime rendering configurations, we propose a system performance-aware multi-resolution training strategy, allowing the gaze tracking DNN to adapt and optimize overall system performance more effectively. Evaluation results demonstrate that FovealNet achieves at least 1.42× speed up compared to previous methods and 13% increase in perceptual quality for foveated output. 
The code is available at https://github.com/wl3181/FovealNet.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
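The gaze-centered cropping idea in the abstract can be illustrated with a minimal sketch: keep only a fixed window around the estimated gaze point and discard the rest of the frame. The window size and image dimensions below are arbitrary choices for illustration, not FovealNet's event-based method (see the linked repository for the actual implementation).

```python
# Hypothetical sketch of gaze-centered cropping: clamp a win x win window
# around the gaze point to the image bounds and report the fraction of
# pixels discarded (the spirit of the >64.8% figure in the abstract).
def crop_around_gaze(width, height, gaze_x, gaze_y, win=128):
    half = win // 2
    x0 = max(0, min(gaze_x - half, width - win))
    y0 = max(0, min(gaze_y - half, height - win))
    kept = win * win
    discarded = 1.0 - kept / (width * height)
    return (x0, y0, x0 + win, y0 + win), discarded

box, frac = crop_around_gaze(640, 480, 320, 240)
print(box)             # (256, 176, 384, 304)
print(round(frac, 3))  # 0.947: a 128x128 crop drops ~95% of a 640x480 frame
```

Even this naive fixed-window crop shows why cropping pays off: the downstream gaze-tracking network only ever sees a small, roughly constant-size input.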
Citations: 0
Examining the Validity of An Endoscopist-patient Co-participative Virtual Reality Method (EPC-VR) in Pain Relief during Colonoscopy.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549874
Yulong Bian, Juan Liu, Yongjiu Lin, Weiying Liu, Yang Zhang, Tangjun Qu, Sheng Li, Zhaojie Pan, Wenming Liu, Wei Huang, Ying Shi
{"title":"Examining the Validity of An Endoscopist-patient Co-participative Virtual Reality Method (EPC-VR) in Pain Relief during Colonoscopy.","authors":"Yulong Bian, Juan Liu, Yongjiu Lin, Weiying Liu, Yang Zhang, Tangjun Qu, Sheng Li, Zhaojie Pan, Wenming Liu, Wei Huang, Ying Shi","doi":"10.1109/TVCG.2025.3549874","DOIUrl":"10.1109/TVCG.2025.3549874","url":null,"abstract":"<p><p>To relieve perceived pain in patients undergoing colonoscopy, we developed an endoscopist-patient co-participative VR tool (EPC-VR) based on A Neurocognitive Model of Attention to Pain. It allows the patient to play a VR game actively and supports the endoscopist in triggering a distraction mechanism to divert the patient's attention away from the medical procedure. We performed a comparative clinical study with 40 patients. Patients' perception of pain and affective responses were evaluated, and the results support the effectiveness of EPC-VR: active VR playing with endoscopists' participation can help relieve the perceived pain and scare of patients undergoing colonoscopy. Finally, 87.5% of patients opt to use the VR application in the next colonoscopy.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of Visual Saliency for Dynamic Point Clouds: Task-free vs. Task-dependent.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549863
Xuemei Zhou, Irene Viola, Silvia Rossi, Pablo Cesar
{"title":"Comparison of Visual Saliency for Dynamic Point Clouds: Task-free vs. Task-dependent.","authors":"Xuemei Zhou, Irene Viola, Silvia Rossi, Pablo Cesar","doi":"10.1109/TVCG.2025.3549863","DOIUrl":"10.1109/TVCG.2025.3549863","url":null,"abstract":"<p><p>This paper presents a Task-Free eye-tracking dataset for Dynamic Point Clouds (TF-DPC) aimed at investigating visual attention. The dataset is composed of eye gaze and head movements collected from 24 participants observing 19 scanned dynamic point clouds in a Virtual Reality (VR) environment with 6 degrees of freedom. We compare the visual saliency maps generated from this dataset with those from a prior task-dependent experiment (focused on quality assessment) to explore how high-level tasks influence human visual attention. To measure the similarity between these visual saliency maps, we apply the well-known Pearson correlation coefficient and an adapted version of the Earth Mover's Distance metric, which takes into account both spatial information and the degrees of saliency. Our experimental results provide both qualitative and quantitative insights, revealing significant differences in visual attention due to task influence. 
This work enhances our understanding of the visual attention for dynamic point cloud (specifically human figures) in VR from gaze and human movement trajectories, and highlights the impact of task-dependent factors, offering valuable guidance for advancing visual saliency models and improving VR perception.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
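The first of the two similarity measures named in the abstract, the Pearson correlation between two flattened saliency maps, can be computed directly; a minimal self-contained version is sketched below with toy 2×2 maps. The paper's adapted, spatially aware Earth Mover's Distance is not reproduced here, and the map values are made up for illustration.

```python
# Pearson correlation between two saliency maps, flattened to 1-D lists.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Two toy flattened 2x2 saliency maps; the second is a linear rescale of
# the first (0.8 * x + 0.1), so the correlation is exactly 1.0.
task_free = [0.1, 0.4, 0.7, 0.9]
task_dependent = [0.18, 0.42, 0.66, 0.82]
print(pearson(task_free, task_dependent))  # 1.0 (up to float rounding)
```

Note that Pearson correlation ignores where salient pixels sit in the map, which is exactly why the paper pairs it with a spatially aware distance metric.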
Citations: 0
The Impact of Avatar Retargeting on Pointing and Conversational Communication.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549171
Simbarashe Nyatsanga, Doug Roble, Michael Neff
{"title":"The Impact of Avatar Retargeting on Pointing and Conversational Communication.","authors":"Simbarashe Nyatsanga, Doug Roble, Michael Neff","doi":"10.1109/TVCG.2025.3549171","DOIUrl":"10.1109/TVCG.2025.3549171","url":null,"abstract":"<p><p>One of the pleasures of interacting using avatars in VR is being able to play a character very different to yourself. As the scale of characters change relative to a user, there is a need to retarget user motions onto the character, generally maintaining either the user's pose or the position of their wrists and ankles. This retargeting can impact both the functional and social information conveyed by the avatar. Focused on 3rd-person (observed) avatars, this paper presents three studies on these varied aspects of communication. It establishes a baseline for near-field avatar pointing, showing an accuracy of about 5cm. This can be maintained using positional hand constraints, but increases if the user's pose is directly transferred to the character. It is possible to maintain this accuracy with a Semantic Inverse Kinematics formulation that brings the avatar closer to the user's actual pose, but compensates by adjusting the finger pointing direction. Similar results are shown for conveying spatial information, namely object size. The choice of pose or position based retargeting leads to a small change in the perception of avatar personality, indicating an impact on social communication. This effect was not observed in a task where the users' cognitive load was otherwise high, so may be task dependent. 
It could also become more pronounced for more extreme proportion changes.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Consumer Insights through VR Metaphor Elicitation.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549905
Sai Priya Jyothula, Andrew E Johnson
{"title":"Enhancing Consumer Insights through VR Metaphor Elicitation.","authors":"Sai Priya Jyothula, Andrew E Johnson","doi":"10.1109/TVCG.2025.3549905","DOIUrl":"10.1109/TVCG.2025.3549905","url":null,"abstract":"<p><p>In consumer research, understanding consumer behavior and experiences is vital for making informed decisions about product design, innovation and marketing. Zaltman's Metaphor Elicitation Technique (ZMET) leverages metaphors and non-verbal communication to uncover and gain deeper insights into consumers' thoughts and emotions. This paper introduces a novel system that enables consumer researchers (interviewers) to perform a modified version of metaphor elicitation interviews in virtual reality (VR). Consumers (participants) use 3D objects in the virtual environment to express their thoughts and emotions, instead of pictures conventionally used in a ZMET interview. The system features an asymmetric setup where the participant is immersed in VR using a head-mounted display (HMD), while the interviewer views the participant's perspective on a monitor. We discuss the technical and design aspects of the VR system and present the results of a user study (N = 17) that we conducted to validate the effectiveness of performing the metaphor elicitation interviews in VR. This work also explores the experiences of both participants and interviewers during the interview sessions, aiming to identify potential improvements. 
The qualitative and quantitative analysis of the data demonstrated how immersion, presence and embodied interaction positively affect and aid in sense-making and deeper expression of the participants' thoughts, perspectives and emotions.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beyond Mute and Block: Adoption and Effectiveness of Safety Tools in Social VR, from Ubiquitous Harassment to Social Sculpting.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549860
Maheshya Weerasinghe, Shaun Macdonald, Cristina Fiani, Joseph O'Hagan, Mathieu Chollet, Mark McGill, Mohamed Khamis
{"title":"Beyond Mute and Block: Adoption and Effectiveness of Safety Tools in Social VR, from Ubiquitous Harassment to Social Sculpting.","authors":"Maheshya Weerasinghe, Shaun Macdonald, Cristina Fiani, Joseph O'Hagan, Mathieu Chollet, Mark McGill, Mohamed Khamis","doi":"10.1109/TVCG.2025.3549860","DOIUrl":"10.1109/TVCG.2025.3549860","url":null,"abstract":"<p><p>Harassment in Social Virtual Reality (SVR) is a growing concern. The current SVR landscape features inconsistent access to non-standardised safety features, with minimal empirical evidence on their real-world effectiveness, usage and impact. We examine the use and effectiveness of safety tools across 12 popular SVR platforms by surveying 100 users about their experiences of different types of harassment and their use of features like muting, blocking, personal spaces and safety gestures. While harassment remained common-including hate speech, virtual stalking, and physical harassment-many find safety features insufficient or inconsistently applied. Reactive tools like muting and blocking are widely used, largely driven by users' familiarity from other platforms. Safety tools are also used to proactively curate individual virtual experiences, protecting users from harassment, but inadvertently leading to fragmented social spaces. 
We advocate for standardising proactive, rather than reactive, anti-harassment tools across platforms, and present insights into future safety feature development.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0