Latest Articles: IEEE Transactions on Visualization and Computer Graphics

Techniques for Multiple Room Connection in Virtual Reality: Walking Within Small Physical Spaces.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549895
Ana Rita Rebelo, Pedro A Ferreira, Rui Nobrega
In Virtual Reality (VR), navigation in small physical spaces often relies on controller-based techniques such as teleportation and joystick movement, since there is limited room for natural walking. However, walking-based techniques can enhance immersion by enabling more natural movement. This paper presents three room-connection techniques - portals, corridors, and central hubs - that can be used in virtual environments (VEs) to create "impossible spaces". These spaces use overlapping areas to maximize the available physical space, making walking feasible even in constrained settings. We conducted a user study with 33 participants to assess the effectiveness of these techniques within a small physical area (2.5 × 2.5 m). The results show that all three techniques are viable for connecting rooms in VR, each with distinct characteristics, and each positively impacts presence, cybersickness, spatial awareness, orientation, and overall user experience. Specifically, portals offer a flexible and straightforward solution, corridors provide a seamless and natural transition between spaces, and central hubs simplify navigation. The primary contribution of this work is demonstrating how these room-connection techniques can dynamically adapt VEs to fit small, uncluttered physical spaces, such as those commonly available to VR users at home. Applications such as virtual museum tours, training simulations, and emergency preparedness exercises can benefit from these methods, providing users with a more natural and engaging experience even within the limited space typical of home settings.
Citations: 0
Fov-GS: Foveated 3D Gaussian Splatting for Dynamic Scenes.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549576
Runze Fan, Jian Wu, Xuehuai Shi, Lizhi Zhao, Qixiang Ma, Lili Wang
Rendering quality and performance greatly affect the user's immersion in VR experiences. 3D Gaussian Splatting (3DGS) methods can achieve photo-realistic rendering at over 100 fps in static scenes, but speed drops below 10 fps in monocular dynamic scenes. Foveated rendering offers a way to accelerate rendering without compromising perceived visual quality; however, 3DGS and foveated rendering are not directly compatible. In this paper, we propose Fov-GS, a foveated 3D Gaussian splatting method for rendering dynamic scenes in real time. We introduce a 3D Gaussian forest representation that models the scene as a forest. To construct the 3D Gaussian forest, we propose an initialization method based on dynamic-static separation, followed by an optimization method based on a deformation field and Gaussian decomposition that refines both the forest and the deformation field. To achieve real-time dynamic scene rendering, we present a 3D Gaussian forest rendering method based on human visual system (HVS) models. Experiments demonstrate that our method not only achieves higher rendering quality in the foveal and salient regions than state-of-the-art methods but also dramatically improves rendering performance, with up to 11.33× speedup. We also conducted a user study whose results show that the perceptual quality of our method is visually very similar to the ground truth.
Citations: 0
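The core idea of foveated rendering described in the Fov-GS abstract, allocating detail by angular distance from the gaze point, can be sketched roughly as follows. This is an illustrative simplification, not the paper's method; the radii, the pixels-per-degree conversion, and the three-level scheme are assumptions chosen for clarity.

```python
import math

def foveation_level(eccentricity_deg: float,
                    foveal_radius_deg: float = 5.0,
                    mid_radius_deg: float = 20.0) -> int:
    """Return a level-of-detail index from angular distance to the gaze
    point: 0 = full detail (fovea), 1 = reduced, 2 = coarse periphery.
    The radii are illustrative, not values from the paper."""
    if eccentricity_deg <= foveal_radius_deg:
        return 0
    if eccentricity_deg <= mid_radius_deg:
        return 1
    return 2

def eccentricity_deg(gaze_px, pixel_px, fov_deg=100.0, width_px=1000):
    """Approximate angular distance between the gaze point and a pixel,
    assuming a uniform pixels-per-degree ratio across the display
    (a simplifying assumption; real HMD optics are nonuniform)."""
    ppd = width_px / fov_deg  # pixels per degree
    dx = pixel_px[0] - gaze_px[0]
    dy = pixel_px[1] - gaze_px[1]
    return math.hypot(dx, dy) / ppd
```

In a real renderer, the level index would select shading rate or, in a 3DGS context, how aggressively Gaussians are pruned or coarsened per region.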
360° 3D Photos from a Single 360° Input Image.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549538
Manuel Rey-Area, Christian Richardt
360° images are a popular medium for bringing photography into virtual reality. While users can look in any direction by rotating their heads, 360° images ultimately look flat: they lack depth information and thus cannot create motion parallax when the head translates. To achieve a fully immersive VR experience from a single 360° image, we introduce a novel method to upgrade 360° images to free-viewpoint renderings with six degrees of freedom. Alternative approaches either reconstruct textured 3D geometry, which is fast to render but suffers from visible reconstruction artifacts, or use neural radiance fields, which produce high-quality novel views but render too slowly for VR applications. Our 360° 3D photos build on 3D Gaussian splatting as the underlying scene representation to achieve both high visual quality and real-time rendering speed. To fill in plausible content for previously unseen regions, we introduce a novel combination of latent diffusion inpainting and monocular depth estimation with Poisson-based blending. Our results demonstrate state-of-the-art visual and depth quality at rendering rates of 105 FPS per megapixel on a commodity GPU.
Citations: 0
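360° photos of the kind discussed above are typically stored as equirectangular images, where each pixel corresponds to a viewing direction on the sphere. As background for why such images can be lifted to 3D at all, a standard pixel-to-ray mapping can be sketched as follows (the y-up axis convention is an assumption; the paper may use a different one):

```python
import math

def equirect_to_ray(u: float, v: float, width: int, height: int):
    """Map an equirectangular pixel (u, v) to a unit view direction.
    Longitude spans [-pi, pi) left to right; latitude spans
    [pi/2, -pi/2] top to bottom. Uses a y-up convention (assumption)."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Combining such per-pixel rays with estimated monocular depth yields a 3D point per pixel, which is the usual starting point for fitting a splat- or mesh-based scene representation.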
SynthLens: Visual Analytics for Facilitating Multi-step Synthetic Route Design.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3552134
Qipeng Wang, Rui Sheng, Shaolun Ruan, Xiaofu Jin, Chuhan Shi, Min Zhu
Designing synthetic routes for novel molecules is pivotal in fields such as medicine and chemistry. In this process, researchers explore a set of synthetic reactions to transform starting molecules into intermediates, step by step, until the target molecule is obtained. Designing synthetic routes presents two main challenges. First, researchers must choose among numerous possible synthetic reactions at each step, weighing various criteria (e.g., yield, experimental duration, and the number of experimental steps) to construct the route. Second, they must consider how a choice at one step affects the overall route. To address these challenges, we propose SynthLens, a visual analytics system that facilitates the iterative construction of synthetic routes by exploring multiple candidate reactions at each step. Specifically, SynthLens introduces a tree-form visualization to compare and evaluate all explored routes across exploration steps, considering both the exploration step and multiple criteria. Our system empowers researchers to reason about the construction process comprehensively, guiding them toward promising exploration directions to complete the synthetic route. We validated the usability and effectiveness of SynthLens through a quantitative evaluation and expert interviews, highlighting its role in facilitating the design of synthetic routes. Finally, we discuss insights from SynthLens that can inspire other multi-criteria decision-making scenarios supported by visual analytics.
Citations: 0
An embodied body morphology task for investigating self-avatar proportions perception in Virtual Reality.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549123
Loen Boban, Ronan Boulic, Bruno Herbelin
The perception of one's own body is subject to systematic distortions and can be influenced by exposure to visual stimuli showing distorted bodies. Echoing such body-judgment inaccuracies, avatars in Virtual Reality (VR) can be successfully embodied even when their appearance differs strongly from the user's body. This experimental work investigates, in a healthy population, the perception of one's own body in immersive, embodied VR, as well as the impact of co-presence with virtual humans on such self-perception. Participants were successively presented with avatars of various upper- and lower-body proportions and asked to compare them with their perceived own body morphology. To investigate the influence of co-present virtual humans on this judgment, the task was performed in the presence of virtual agents with various body appearances. Results show an overall overestimation of one's leg length and no influence of the co-present agent's appearance. Importantly, the embodiment scores reflect this body-morphology judgment inaccuracy: participants reported lower levels of embodiment for avatars with very short legs than for avatars with very long legs. Our findings point to specifics of embodied body-judgment methods, likely resulting from the experience of embodying the avatar as opposed to visual appreciation alone.
Citations: 0
Sensitivity to Redirected Walking Considering Gaze, Posture, and Luminance.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-17 DOI: 10.1109/TVCG.2025.3549908
Niall L Williams, Logan C Stevens, Aniket Bera, Dinesh Manocha
We study the correlations between redirected walking (RDW) rotation gains and patterns in users' posture and gaze data during locomotion in virtual reality (VR). To do this, we conducted a psychophysical experiment to measure users' sensitivity to RDW rotation gains while collecting gaze and posture data. Using multilevel modeling, we studied how different factors of the VR system and user affect physiological signals. In particular, we examined the effects of redirection gain, trial duration, trial number (i.e., time spent in VR), and participant gender on postural sway, gaze velocity (a proxy for gaze stability), and saccade and blink rate. Our results show that, in general, physiological signals were significantly positively correlated with the strength of the redirection gain, the duration of trials, and the trial number, while gaze velocity was negatively correlated with trial duration. Additionally, we measured users' sensitivity to rotation gains in well-lit (photopic) and dimly-lit (mesopic) virtual lighting conditions and found no significant differences in RDW detection thresholds between the two.
Citations: 0
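For readers unfamiliar with the rotation gains studied above: RDW scales the virtual camera's rotation relative to the user's physical head rotation, steering them within the tracked space without their noticing. A minimal sketch of this mechanism, with illustrative numbers rather than values from the paper:

```python
def redirected_yaw(physical_yaw_delta_deg: float, gain: float) -> float:
    """Apply an RDW rotation gain: the virtual camera rotates by
    gain times the user's physical head rotation. gain > 1 amplifies,
    gain < 1 dampens; gains near 1 stay below detection thresholds."""
    return physical_yaw_delta_deg * gain

def simulate_heading(deltas_deg, gain: float, start_deg: float = 0.0) -> float:
    """Accumulate the virtual heading over a sequence of physical
    yaw rotations, wrapping to [0, 360)."""
    heading = start_deg
    for d in deltas_deg:
        heading = (heading + redirected_yaw(d, gain)) % 360.0
    return heading
```

With a gain of 1.1, four physical 90° turns (one full physical revolution) leave the virtual heading offset by 36°, which is how repeated small redirections keep the user inside the tracked area.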
Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-17 DOI: 10.1109/TVCG.2025.3552017
Haotian Li, Yun Wang, Q Vera Liao, Huamin Qu
This paper explores the potential for human-AI collaboration in data storytelling for data workers. Data storytelling communicates insights and knowledge from data analysis; it plays a vital role in data workers' daily jobs, supporting both team collaboration and public communication. However, making an appealing data story requires tremendous effort across tasks such as outlining and styling the story. A growing research trend explores how advanced artificial intelligence (AI) can assist data storytelling, but existing studies focus on individual tasks within the workflow and do not reveal a complete picture of how humans prefer to collaborate with AI. To address this gap, we conducted an interview study with 18 data workers to explore their preferences for AI collaboration in the planning, implementation, and communication stages of their workflow. We propose a framework of expected roles for AI collaborators, categorize people's expectations regarding the level of automation for different tasks, and examine the reasons behind them. Our research provides insights and suggestions for the design of future AI-powered data storytelling tools.
Citations: 0
Enhancing Empathy for Visual Impairments: A Multi-modal Approach in VR Serious Games.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-17 DOI: 10.1109/TVCG.2025.3549900
Yuexi Dong, Haonan Guo, Jingya Li
Visual impairments significantly affect individuals' ability to perceive their surroundings, impacting everyday tasks and spatial navigation. This study explores SEEK VR, a multi-modal virtual reality game designed to foster empathy and raise awareness of the challenges faced by visually impaired individuals. By integrating visual feedback, 3D spatial audio, and haptic feedback, the game provides an immersive experience that helps participants understand the physical and emotional struggles of visual impairment. The paper reviews related work on empathy-driven VR games, describes the design and implementation of SEEK VR in detail, and covers the technical aspects of its multi-modal interactions. A user study with 24 participants demonstrated significant increases in empathy, particularly in willingness to help visually impaired individuals in real-world scenarios. These findings highlight the potential of VR serious games to promote social awareness and empathy through immersive, multi-modal interactions.
Citations: 0
LAPIG: Language Guided Projector Image Generation with Surface Adaptation and Stylization.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-17 DOI: 10.1109/TVCG.2025.3549859
Yuchen Deng, Haibin Ling, Bingyao Huang
We propose LAPIG, a language-guided projector image generation method with surface adaptation and stylization. LAPIG consists of a projector-camera system and a textured target projection surface; it takes a user text prompt as input and aims to transform the surface style using the projector. LAPIG's key challenge is that, due to the projector's physical brightness limits and the surface texture, the viewer's perceived projection may suffer from color saturation and artifacts in both dark and bright regions, so that even with state-of-the-art projector compensation techniques the viewer may see clear texture-related artifacts. How to generate a projector image that follows the user's instruction while displaying minimal surface artifacts is therefore an open problem. To address it, we propose projection surface adaptation (PSA), which can generate compensable surface stylizations. We first train two networks to simulate the projector compensation and project-and-capture processes; this lets us find a satisfactory projector image without real project-and-capture cycles and use gradient descent for fast convergence. We then design content and saturation losses to guide projector image generation, such that the generated image shows no clearly perceivable artifacts when projected. Finally, the generated image is projected for visually pleasing surface style-morphing effects. The source code and more results are available on the project page: https://Yu-chen-Deng.github.io/LAPIG/.
Citations: 0
TraVIS: A User Trace Analyzer to Support User-Centered Design of Visual Analytics Solutions.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-17 DOI: 10.1109/TVCG.2025.3546863
Matteo Filosa, Alexandra Plexousaki, Matteo Di Stadio, Francesco Bovi, Dario Benvenuti, Tiziana Catarci, Marco Angelini
Visual Analytics (VA) has become a paramount discipline for supporting data analysis in many scientific domains, augmenting the human user with automatic capabilities while keeping them in the lead of the analysis. At the same time, designing an effective VA solution is not a simple task: it must be adapted to the problem at hand and the intended users of the system. In this scenario, the User-Centered Design (UCD) methodology provides a framework for incorporating user needs into the design of a VA solution. Its implementation, however, relies mainly on qualitative feedback, and designers lack tools for quantitatively reporting user feedback and using it to hypothesize and test successive changes to the VA solution. To overcome this limitation, we propose TraVIS, a visual analytics solution that supports loading a web-based VA system, collecting user traces, and analyzing them with respect to the system at hand. In this process, the designer can relate the collected traces to the tasks the VA solution supports and how those tasks can be achieved. Using TraVIS, the designer can identify ineffective interaction paths, analyze how user traces support task completion, hypothesize design corrections, and evaluate the effect of changes. We evaluated TraVIS through experimentation with 11 VA systems from the literature, a use case, and a user evaluation with five experts. Results show the benefits TraVIS provides in identifying design problems and efficiently supporting UCD. TraVIS is available at: https://github.com/XAIber-lab/TraVIS.
Citations: 0