IEEE Transactions on Visualization and Computer Graphics: Latest Articles

Make PBR Materials Tileable with Latent Diffusion Inpainting
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-02. DOI: 10.1109/TVCG.2025.3566315
Xiaoyu Zhan, Jianxin Yang, Jun Wang, Yuanqi Li, Jie Guo, Yanwen Guo
Abstract: Physically-based-rendering (PBR) materials are crucial in modern rendering pipelines, and many studies have focused on acquiring these materials from reality or images. However, existing methods may produce non-tileable results, since real-world inputs usually contain seams. Compared to non-tileable materials, tileable PBR materials support a much wider range of applications. To address this issue, we introduce MaTi, a novel pipeline that converts non-tileable PBR materials into tileable ones with minimal distortion. MaTi rearranges material patches to align boundaries at the center of the image, and then uses a diffusion model to inpaint the seams. We use scaled gamma correction to reduce the occurrence of collapse when processing special material maps. Color correction and triangular blending are adopted to preserve the original material information. Additionally, we design a division-and-blending strategy to efficiently handle high-resolution materials. Our experiments demonstrate that MaTi can seamlessly modify PBR materials while preserving the original information, outperforming existing synthesis methods.
Citations: 0
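To make the patch-rearrangement step described in the abstract concrete, here is a minimal sketch (assumed, not the authors' code): rolling a texture by half its extent moves the wrap-around seams to the image center, and a cross-shaped mask marks the region a latent-diffusion inpainter would then fill. The function name, band width, and the stand-in random texture are all illustrative assumptions.

```python
import numpy as np

def center_seams(tex: np.ndarray, band: int = 16):
    """Roll a (H, W, C) material map so wrap-around seams meet at the center,
    and build a cross-shaped inpainting mask of width 2*band pixels."""
    h, w = tex.shape[:2]
    rolled = np.roll(tex, shift=(h // 2, w // 2), axis=(0, 1))
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 2 - band:h // 2 + band, :] = True   # horizontal seam
    mask[:, w // 2 - band:w // 2 + band] = True   # vertical seam
    return rolled, mask

if __name__ == "__main__":
    albedo = np.random.rand(256, 256, 3)          # stand-in for a real albedo map
    rolled, seam_mask = center_seams(albedo)
    # `rolled` and `seam_mask` would be handed to a latent-diffusion inpainting
    # model (as MaTi does); rolling back afterwards yields a tileable result.
    print(rolled.shape, seam_mask.sum())
```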
Hierarchical Fuzzy-Cluster-Aware Grid Layout for Large-Scale Data
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-02. DOI: 10.1109/TVCG.2025.3566558
Yuxing Zhou, Changjian Chen, Zhiyang Shen, Jiangning Zhu, Jiashu Chen, Weikai Yang, Shixia Liu
Abstract: Fuzzy clusters, where ambiguous samples belong to multiple clusters, are common in real-world applications. Analyzing such ambiguous samples in large-scale datasets is crucial for practical applications, such as diagnosing machine learning models. A promising way to support such analysis is hierarchical cluster-aware grid visualization, which offers high space efficiency and clear cluster perception. However, existing cluster-aware grid layout methods cannot clarify ambiguity among fuzzy clusters, which limits their effectiveness in fuzzy cluster analysis. To tackle this issue, we introduce a hierarchical fuzzy-cluster-aware grid layout method that supports hierarchical exploration of large-scale datasets. Throughout the hierarchical exploration, it is crucial to facilitate fuzzy cluster analysis while maintaining visual continuity for users. To achieve this, we propose a two-step optimization strategy for enhancing cluster perception, clarifying ambiguity, and preserving stability during the exploration. The first step is to create cluster-aware partitions, where each partition corresponds to a cluster. This step focuses on enhancing cluster perception and maintaining the previous shapes and positions of clusters to preserve stability at the cluster level. The second step is to generate a grid layout for each partition. In addition to placing similar samples together, this step places ambiguous samples near the boundaries to clarify ambiguity and reveal the root causes of their occurrence, and maintains the relative positions of samples in the same cluster to preserve stability at the sample level. Several quantitative experiments and a use case demonstrate the effectiveness and usefulness of our method in analyzing large-scale datasets, especially in fuzzy cluster analysis.
Citations: 0
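As a small illustration of what "ambiguous samples" means in this setting, here is a minimal sketch (assumed, not the authors' method): score each sample's ambiguity as the normalized entropy of its fuzzy cluster memberships; high-entropy samples are the ones such a layout would place near partition boundaries. The function name and toy membership matrix are illustrative assumptions.

```python
import numpy as np

def ambiguity(memberships, eps=1e-12):
    """Normalized entropy of each sample's cluster-membership distribution, in [0, 1].
    Values near 1 mean the sample belongs almost equally to several clusters."""
    p = memberships / memberships.sum(axis=1, keepdims=True)
    h = -(p * np.log(p + eps)).sum(axis=1)
    return h / np.log(p.shape[1])

# toy usage: 5 samples, 3 fuzzy clusters
m = np.array([[0.90, 0.05, 0.05],
              [0.34, 0.33, 0.33],
              [0.60, 0.30, 0.10],
              [0.50, 0.50, 0.00],
              [0.05, 0.90, 0.05]])
print(ambiguity(m).round(2))   # the second row is the most ambiguous sample
```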
Concept Lens: Visual Comparison and Evaluation of Generative Model Manipulations
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-02. DOI: 10.1109/TVCG.2025.3564537
Sangwon Jeong, Mingwei Li, Matthew Berger, Shusen Liu
Abstract: Generative models are becoming a transformative technology for the creation and editing of images. However, it remains challenging to harness these models for precise image manipulation. These challenges often manifest as inconsistency in the editing process, where both the type and amount of semantic change depend on the image being manipulated. Moreover, there exist many methods for computing image manipulations, whose development is hindered by this inconsistency. This paper aims to address these challenges by improving how we evaluate, compare, and explore the space of manipulations offered by a generative model. We present Concept Lens, a visual interface designed to aid users in understanding the semantic concepts carried in image manipulations, and how these manipulations vary over generated images. Given the large space of possible images produced by a generative model, Concept Lens is designed to support the exploration of both generated images and their manipulations at multiple levels of detail. To this end, the layout of Concept Lens is informed by two hierarchies: a hierarchical organization of (1) original images, grouped by their similarities, and (2) image manipulations, where manipulations that induce similar changes are grouped together. This layout allows one to discover the types of images that consistently respond to a group of manipulations, and vice versa, manipulations that consistently respond to a group of codes. We show the benefits of this design across multiple use cases, specifically studying the quality of manipulations for a single method and offering a means of comparing different methods.
Citations: 0
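The second hierarchy described above (grouping manipulations that induce similar changes) can be illustrated with a minimal sketch (assumed, not the authors' pipeline): summarize each manipulation by the feature-space change it induces and cluster those change vectors hierarchically. The random stand-in vectors and the cluster count are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each manipulation is summarized by the average feature-space change it causes
# across a set of generated images (random stand-ins here).
rng = np.random.default_rng(0)
change_vectors = rng.normal(size=(20, 64))        # 20 manipulations, 64-d change summary

Z = linkage(change_vectors, method="ward")        # agglomerative hierarchy over manipulations
groups = fcluster(Z, t=4, criterion="maxclust")   # cut the dendrogram into 4 groups
print(groups)                                     # group label per manipulation
```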
CoARF++: Content-Aware Radiance Field Aligning Model Complexity With Scene Intricacy
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-05-01. DOI: 10.1109/TVCG.2025.3566071
Weihang Liu, Xue Xian Zheng, Yuke Li, Tareq Y Al-Naffouri, Jingyi Yu, Xin Lou
Abstract: This paper introduces the concept of Content-Aware Radiance Fields (CoARF), which adaptively aligns model complexity with scene intricacy. We examine the intricacy of radiance fields from three perspectives and adapt model complexity through scalable feature grids, dynamic neural networks, and model quantization. Specifically, we propose a hash collision detection mechanism that removes redundant feature grid entries by restricting valid hash collisions to a reasonable level, making the space complexity scalable. We introduce an uncertainty-aware decoding layer, where simple points exit early so they are not processed by deeper network layers, making the computational complexity scalable. Furthermore, we propose the Learned Bitwidth Quantization (LBQ) and Adversarial Content-Aware Quantization (A-CAQ) paradigms, which make the bitwidth of parameters differentiable and trainable, allowing for adjustable quantization schemes. Building on these techniques, the proposed CoARF++ framework enables a scalable pipeline for radiance fields that is tailored to each scene's complexity and quality requirements. Extensive experiments demonstrate a significant and adjustable reduction in model complexity across various NeRF variants, while maintaining the necessary reconstruction and rendering quality, making the approach advantageous for the practical deployment of radiance field models. Codes are available at https://github.com/WeihangLiu2024/Content_Aware_NeRF.
Citations: 0
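The early-exit idea in the abstract can be sketched as follows (a minimal, assumed architecture, not the authors' code): a shallow exit head predicts a value and an uncertainty for every point, and only points whose uncertainty exceeds a threshold continue through the deeper layers. The module name, layer sizes, and threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EarlyExitMLP(nn.Module):
    """Uncertainty-aware early-exit decoder sketch: easy samples leave after the
    first block when the exit head is confident, skipping the deeper layers."""
    def __init__(self, dim=64, threshold=0.1):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.exit_head = nn.Linear(dim, 2)            # predicts (value, log-variance)
        self.block2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.final_head = nn.Linear(dim, 1)
        self.threshold = threshold

    def forward(self, x):
        h = self.block1(x)
        value, log_var = self.exit_head(h).unbind(dim=-1)
        uncertain = log_var.exp() > self.threshold    # per-sample uncertainty test
        out = value.clone()
        if uncertain.any():                           # only hard samples go deeper
            deep = self.final_head(self.block2(h[uncertain])).squeeze(-1)
            out[uncertain] = deep
        return out

x = torch.randn(8, 64)
print(EarlyExitMLP()(x).shape)   # torch.Size([8])
```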
Dynamic View Synthesis from Small Camera Motion Videos
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-04-29. DOI: 10.1109/TVCG.2025.3565642
Huiqiang Sun, Xingyi Li, Juewen Peng, Liao Shen, Zhiguo Cao, Ke Xian, Guosheng Lin
Abstract: Novel view synthesis for dynamic 3D scenes poses a significant challenge. Many notable efforts use NeRF-based approaches to address this task and yield impressive results. However, these methods rely heavily on sufficient motion parallax in the input images or videos. When the camera motion is limited or the camera is even stationary (i.e., small camera motion), existing methods encounter two primary challenges: incorrect representation of scene geometry and inaccurate estimation of camera parameters. These challenges make prior methods struggle to produce satisfactory results or even fail entirely. To address the first challenge, we propose a novel Distribution-based Depth Regularization (DDR) that ensures the rendering weight distribution aligns with the true distribution. Specifically, unlike previous methods that use a depth loss to compute the error of the expectation, we compute the expectation of the error by using Gumbel-softmax to differentiably sample points from the discrete rendering weight distribution. Additionally, we introduce constraints that force the volume density of spatial points along the ray before the object boundary to be near zero, ensuring that our model learns the correct geometry of the scene. To demystify DDR, we further propose a visualization tool that enables observing the scene geometry representation at the rendering-weight level. For the second challenge, we incorporate camera parameter learning during training to enhance the robustness of our model to camera parameters. We conduct extensive experiments to demonstrate the effectiveness of our approach in representing scenes with small-camera-motion input, and our results compare favorably to state-of-the-art methods.
Citations: 0
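The "expectation of the error" versus "error of the expectation" distinction can be made concrete with a minimal sketch (assumed, not the authors' code): instead of penalizing the difference between the weight-averaged depth and ground truth, differentiably sample points along the ray from the rendering-weight distribution via Gumbel-softmax and average the per-sample depth errors. The function name, temperature, and sample count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ddr_depth_loss(weight_logits, point_depths, gt_depth, tau=0.5, n_samples=16):
    """Expected depth error under the discrete rendering-weight distribution,
    estimated with differentiable (straight-through) Gumbel-softmax samples."""
    losses = []
    for _ in range(n_samples):
        # one-hot-like differentiable sample over the points along each ray
        sample = F.gumbel_softmax(weight_logits, tau=tau, hard=True, dim=-1)
        sampled_depth = (sample * point_depths).sum(dim=-1)
        losses.append((sampled_depth - gt_depth).abs())
    return torch.stack(losses).mean()

# toy usage: 4 rays, 32 sample points per ray
logits = torch.randn(4, 32, requires_grad=True)
depths = torch.linspace(2.0, 6.0, 32).expand(4, 32)
loss = ddr_depth_loss(logits, depths, gt_depth=torch.full((4,), 3.5))
loss.backward()
print(loss.item())
```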
Diff-3DCap: Shape Captioning With Diffusion Models
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-04-28. DOI: 10.1109/TVCG.2025.3564664
Zhenyu Shu, Jiawei Wen, Shiyang Li, Shiqing Xin, Ligang Liu
Abstract: The task of 3D shape captioning occupies a significant place within the domain of computer graphics and has garnered considerable interest in recent years. Traditional approaches frequently depend on costly voxel representations or object detection techniques, yet often fail to deliver satisfactory outcomes. To address these challenges, we introduce Diff-3DCap, which employs a sequence of projected views to represent a 3D object and a continuous diffusion model to facilitate the captioning process. More precisely, our approach uses the continuous diffusion model to perturb the embedded captions with Gaussian noise during the forward phase and then predicts the reconstructed annotation during the reverse phase. The diffusion framework leverages a visual embedding obtained from a pre-trained visual-language model, which naturally serves as a guiding signal and eliminates the need for an additional classifier. Extensive experimental results indicate that Diff-3DCap achieves performance comparable to that of current state-of-the-art methods.
Citations: 0
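The forward (noising) phase described above follows the standard continuous-diffusion recipe; here is a minimal sketch (assumed, not the authors' code) of perturbing caption embeddings with Gaussian noise according to a cumulative noise schedule. The schedule values, embedding size, and function name are illustrative assumptions.

```python
import torch

def forward_noising(x0, t, alphas_cumprod):
    """Standard DDPM-style forward step applied to caption embeddings x0:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    a_bar = alphas_cumprod[t].view(-1, 1)                   # (B, 1)
    noise = torch.randn_like(x0)
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return xt, noise

# toy usage: 4 caption embeddings of dimension 128, 1000-step schedule
betas = torch.linspace(1e-4, 2e-2, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
x0 = torch.randn(4, 128)                                    # stand-in caption embeddings
t = torch.randint(0, 1000, (4,))
xt, eps = forward_noising(x0, t, alphas_cumprod)
print(xt.shape)                                             # torch.Size([4, 128])
```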
Multiplane-based Cross-view Interaction Mechanism for Robust Light Field Angular Super-Resolution
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-04-28. DOI: 10.1109/TVCG.2025.3564643
Rongshan Chen, Hao Sheng, Da Yang, Ruixuan Cong, Zhenglong Cui, Sizhe Wang, Wei Ke
Abstract: Dense sampling of the light field (LF) is essential for various applications, such as virtual reality. However, the collection process is prohibitively expensive due to technological limitations in imaging. Synthesizing novel views from sparse LF data, known as LF Angular Super-Resolution (LFASR), offers an effective solution to this problem. Accurate cross-view interaction is crucial for this task, given the complementary information between LF views. Previous methods, however, suffer from limited reconstruction quality due to inefficient view interaction. To address this, we propose a Multiplane-based Cross-view Interaction Mechanism (MCIM) for robust LFASR. Extensive comparisons with state-of-the-art methods demonstrate that our method achieves superior performance, both visually and quantitatively. Drawing inspiration from MultiPlane Images (MPI) in scene modeling, our mechanism incorporates a novel Multiplane Feature Fusion (MPFF) strategy. This strategy facilitates fast and accurate cross-view interaction, enhancing the network's robustness to scene geometry and its suitability for LF scenes with different baselines. Furthermore, to address information redundancy in multiplanes, we leverage the transparency property of MPI and devise a plane selection strategy. Finally, we propose CSTNet, a Cross-Shaped Transformer-based network for LFASR, which employs a cross-shaped self-attention mechanism to enable low-cost training and inference. Experimental results on various angular super-resolution tasks validate that our network achieves state-of-the-art performance on both synthetic and real-world LF scenes.
Citations: 0
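For readers unfamiliar with MPI, the transparency property mentioned above comes from standard front-to-back alpha compositing of the planes; the following minimal sketch (assumed, standard MPI compositing rather than the authors' MPFF code) shows it, and planes whose alpha is near zero everywhere contribute almost nothing, which is what a plane selection strategy can exploit.

```python
import numpy as np

def composite_mpi(colors, alphas):
    """Front-to-back multiplane-image compositing.
    colors: (D, H, W, 3), alphas: (D, H, W); plane 0 is closest to the camera."""
    out = np.zeros(colors.shape[1:])
    transmittance = np.ones(alphas.shape[1:])
    for c, a in zip(colors, alphas):
        out += transmittance[..., None] * a[..., None] * c   # add this plane's contribution
        transmittance *= (1.0 - a)                           # attenuate what lies behind it
    return out

planes_rgb = np.random.rand(8, 64, 64, 3)
planes_alpha = np.random.rand(8, 64, 64) * 0.3
print(composite_mpi(planes_rgb, planes_alpha).shape)         # (64, 64, 3)
```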
VIGMA: An Open-Access Framework for Visual Gait and Motion Analytics
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-04-28. DOI: 10.1109/TVCG.2025.3564866
Kazi Shahrukh Omar, Shuaijie Wang, Ridhuparan Kungumaraju, Tanvi Bhatt, Fabio Miranda
Abstract: Gait disorders are commonly observed in older adults, who frequently experience various issues related to walking. Additionally, researchers and clinicians extensively investigate mobility related to gait in typically and atypically developing children, athletes, and individuals with orthopedic and neurological disorders. Effective gait analysis enables the understanding of the causal mechanisms of patients' mobility and balance control, the development of tailored treatment plans to improve mobility, the reduction of fall risk, and the tracking of rehabilitation progress. However, analyzing gait data is a complex task due to the multivariate nature of the data, the large volume of information to be interpreted, and the technical skills required. Existing tools for gait analysis are often limited to specific patient groups (e.g., cerebral palsy), handle only a specific subset of tasks in the entire workflow, and are not openly accessible. To address these shortcomings, we conducted a requirements assessment with gait practitioners (e.g., researchers, clinicians) via surveys and identified key components of the workflow, including (1) data processing and (2) data analysis and visualization. Based on the findings, we designed VIGMA, an open-access visual analytics framework integrated with computational notebooks and a Python library, to meet the identified requirements. Notably, the framework supports analytical capabilities for assessing disease progression and for comparing multiple patient groups. We validated the framework through usage scenarios with experts specializing in gait and mobility rehabilitation. VIGMA is available at https://github.com/komar41/vigma.
Citations: 0
IEEE VR 2025: Introducing the Special Issue
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-04-25. DOI: 10.1109/TVCG.2025.3544902
Han-Wei Shen, Kiyoshi Kiyokawa, Maud Marchal
Vol. 31, No. 5, pp. x-x. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977647
Citations: 0
IEEE Transactions on Visualization and Computer Graphics: 2025 IEEE Conference on Virtual Reality and 3D User Interfaces
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-04-25. DOI: 10.1109/TVCG.2025.3544911
Vol. 31, No. 5, pp. xvii-xxix. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977056
Citations: 0