IEEE Transactions on Visualization and Computer Graphics: Latest Articles

Stylizing Sparse-View 3D Scenes with Hierarchical Neural Representation.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-07 DOI: 10.1109/TVCG.2025.3558468
Yifan Wang, Ang Gao, Yi Gong, Yuan Zeng
{"title":"Stylizing Sparse-View 3D Scenes with Hierarchical Neural Representation.","authors":"Yifan Wang, Ang Gao, Yi Gong, Yuan Zeng","doi":"10.1109/TVCG.2025.3558468","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3558468","url":null,"abstract":"<p><p>3D scene stylization refers to generating stylized images of the scene at arbitrary novel view angles following a given set of style images while ensuring consistency when rendered from different views. Recently, a surge of 3D style transfer methods has been proposed that leverage the scene reconstruction power of a pre-trained neural radiance field (NeRF). To successfully stylize a scene this way, one must first reconstruct a photo-realistic radiance field from collected images of the scene. However, when only sparse input views are available, pre-trained few-shot NeRFs often suffer from high-frequency artifacts, which are generated as a by-product of high-frequency details for improving reconstruction quality. Is it possible to generate more faithful stylized scenes from sparse inputs by directly optimizing encoding-based scene representation with target style? In this paper, we consider the stylization of sparseview scenes in terms of disentangling content semantics and style textures. We propose a coarse-to-fine sparse-view scene stylization framework, where a novel hierarchical encoding-based neural representation is designed to generate high-quality stylized scenes directly from implicit scene representations. We also propose a new optimization strategy with content strength annealing to achieve realistic stylization and better content preservation. Extensive experiments demonstrate that our method can achieve high-quality stylization of sparse-view scenes and outperforms fine-tuning-based baselines in terms of stylization quality and efficiency.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unified Smooth Vector Graphics: Modeling Gradient Meshes and Curve-based Approaches Jointly as Poisson Problem.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-04 DOI: 10.1109/TVCG.2025.3558263
Xingze Tian, Tobias Gunther
{"title":"Unified Smooth Vector Graphics: Modeling Gradient Meshes and Curve-based Approaches Jointly as Poisson Problem.","authors":"Xingze Tian, Tobias Gunther","doi":"10.1109/TVCG.2025.3558263","DOIUrl":"10.1109/TVCG.2025.3558263","url":null,"abstract":"<p><p>Research on smooth vector graphics is separated into two independent research threads: one on interpolationbased gradient meshes and the other on diffusion-based curve formulations. With this paper, we propose a mathematical formulation that unifies gradient meshes and curve-based approaches as solution to a Poisson problem. To combine these two well-known representations, we first generate a non-overlapping intermediate patch representation that specifies for each patch a target Laplacian and boundary conditions. Unifying the treatment of boundary conditions adds further artistic degrees of freedoms to the existing formulations, such as Neumann conditions on diffusion curves. To synthesize a raster image for a given output resolution, we then rasterize boundary conditions and Laplacians for the respective patches and compute the final image as solution to a Poisson problem. We evaluate the method on various test scenes containing gradient meshes and curve-based primitives. Since our mathematical formulation works with established smooth vector graphics primitives on the front-end, it is compatible with existing content creation pipelines and with established editing tools. Rather than continuing two separate research paths, we hope that a unification of the formulations will lead to new rasterization and vectorization tools in the future that utilize the strengths of both approaches.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143784663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Simultaneous Presence Continuum: Portal Overlays for Overlapping Worlds in Virtual Reality.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-04 DOI: 10.1109/TVCG.2025.3558178
Daniel Ablett, Andrew Cunningham, Gun A Lee, Bruce H Thomas
{"title":"Simultaneous Presence Continuum: Portal Overlays for Overlapping Worlds in Virtual Reality.","authors":"Daniel Ablett, Andrew Cunningham, Gun A Lee, Bruce H Thomas","doi":"10.1109/TVCG.2025.3558178","DOIUrl":"10.1109/TVCG.2025.3558178","url":null,"abstract":"<p><p>This paper introduces the Simultaneous Presence (SP) Continuum, a novel concept designed to enhance the use of portals in virtual reality (VR) by understanding users' experiences across multiple environments. Portals traditionally provide access to secondary worlds, but their limited field of view (FoV) can hinder full engagement with these environments. To overcome this, we introduce portal overlays, which expand the portal's FoV by superimposing the secondary world over the user's view of the primary world. We explore several overlay techniques-Contours, Blended (Opacity and Stencil), and Absolute-to adjust user experience across the SP Continuum. The Contours overlay, renders only the edges of objects, offering a minimalistic view and immersion of the secondary world. The Blended overlay allows for adjustable immersion between worlds through opacity or the distribution of pixels. The Absolute overlay virtually immerses users completely in the secondary world. Importantly, these overlays are context-activated instead of always on, to suit situational needs. Our study revealed high SP was possible in both worlds, and overlays significantly impacted users' experiences on the SP Continuum. The Blended overlay provided balanced SP, while Contours had the highest primary presence but lower secondary presence. In contrast, the Absolute overlay, alternating full immersion between worlds, had the highest secondary presence but resulted in reduced primary presence, slower navigation and selection times, higher effort and frustration, and lower usability.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143784661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Visagreement: Visualizing and Exploring Explanations (Dis)Agreement.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-04 DOI: 10.1109/TVCG.2025.3558074
Priscylla Silva, Vitoria Guardieiro, Brian Barr, Claudio Silva, Luis Gustavo Nonato
{"title":"Visagreement: Visualizing and Exploring Explanations (Dis)Agreement.","authors":"Priscylla Silva, Vitoria Guardieiro, Brian Barr, Claudio Silva, Luis Gustavo Nonato","doi":"10.1109/TVCG.2025.3558074","DOIUrl":"10.1109/TVCG.2025.3558074","url":null,"abstract":"<p><p>The emergence of distinct machine learning explanation methods has leveraged a number of new issues to be investigated. The disagreement problem is one such issue, as there may be scenarios where the output of different explanation methods disagree with each other. Although understanding how often, when, and where explanation methods agree or disagree is important to increase confidence in the explanations, few works have been dedicated to investigating such a problem. In this work, we proposed Visagreement, a visualization tool designed to assist practitioners in investigating the disagreement problem. Visagreement builds upon metrics to quantitatively compare and evaluate explanations, enabling visual resources to uncover where and why methods mostly agree or disagree. The tool is tailored for tabular data with binary classification and focuses on local feature importance methods. In the provided use cases, Visagreement turned out to be effective in revealing, among other phenomena, how disagreements relate to the quality of the explanations and machine learning model accuracy, thus assisting users in deciding where and when to trust explanations. To assess the effectiveness and practical utility of Visagreement, we conducted an evaluation involving four experts. These experts assessed the tool's Effectiveness, Usability, and Impact on Decision-Making. The experts confirm the Visagreement tool's effectiveness and user-friendliness, making it a valuable asset for analyzing and exploring (dis)agreements.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143784664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
StruGauAvatar: Learning Structured 3D Gaussians for Animatable Avatars from Monocular Videos.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-03 DOI: 10.1109/TVCG.2025.3557457
Yihao Zhi, Wanhu Sun, Jiahao Chang, Chongjie Ye, Wensen Feng, Xiaoguang Han
{"title":"StruGauAvatar: Learning Structured 3D Gaussians for Animatable Avatars from Monocular Videos.","authors":"Yihao Zhi, Wanhu Sun, Jiahao Chang, Chongjie Ye, Wensen Feng, Xiaoguang Han","doi":"10.1109/TVCG.2025.3557457","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3557457","url":null,"abstract":"<p><p>In recent years, significant progress has been witnessed in the field of neural 3D avatar reconstruction. Among all related tasks, building an animatable avatar from monocular videos is one of the most challenging ones, yet it also has a wide range of applications. The \"animatable\" means that we need to transfer any arbitrary and unseen poses onto the avatar and generate new 3D videos. Thanks to the rise of the powerful representation of NeRF, generating a high-fidelity animatable avatar from videos has become easier and more accessible. Despite their impressive visual results, the substantial training and rendering overhead dramatically hamper their applications. 3D Gaussian Splatting, as a timely new representation, has demonstrated its high-quality and high-efficiency rendering. This has led to many concurrent works to introduce 3D-GS to animatable avatar building. Although they demonstrate very high-fidelity renderings for poses similar to the training video frames, poor results are produced when the poses are far from training. We argue that this is primarily because the Gaussian points lack structures. Thus, we suggest involving DMTet to represent the coarse geometry of the avatar. In our representation, the majority of Gaussian points are bound to the mesh vertices, while some free Gaussian is allowed to expand to better fit the given video. Furthermore, we develop a dual-space optimization framework to jointly optimize the DMTet, Gaussian points, and skinning weights under two spaces. In this sense, Gaussian points are deformed in a constrained way, which dramatically improves the generalization ability for unseen poses. This is well demonstrated via extensive experiments.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Palette-based color harmonization.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-02 DOI: 10.1109/TVCG.2025.3546210
Jianchao Tan, Jose Echevarria, Yotam Gingold
{"title":"Palette-based color harmonization.","authors":"Jianchao Tan, Jose Echevarria, Yotam Gingold","doi":"10.1109/TVCG.2025.3546210","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3546210","url":null,"abstract":"<p><p>We present a palette-based framework for color composition for visual applications and three large-scale, wide-ranging perceptual studies on the perception of color harmonization. We abstract relationships between palette colors as a compact set of axes describing harmonic templates over perceptually uniform color wheels. Our framework provides a basis for interactive color-aware operations such as color harmonization of images and videos. Because our approach to harmonization is palette-based, we are able to conduct the first controlled perceptual experiments evaluating preferences for harmonized images and color palettes. In a third study, we compare preference for archetypical harmonic palettes. In total, our studies involved over 1000 participants. We found that participants do not prefer harmonized images and that some archetypal palettes are reliably viewed as less harmonious than random palettes. These studies raise important questions for research and artistic practice.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parallelize Over Data Particle Advection: Participation, Ping Pong Particles, and Overhead.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-02 DOI: 10.1109/TVCG.2025.3557453
Zhe Wang, Kenneth Moreland, Matthew Larsen, James Kress, Hank Childs, David Pugmire
{"title":"Parallelize Over Data Particle Advection: Participation, Ping Pong Particles, and Overhead.","authors":"Zhe Wang, Kenneth Moreland, Matthew Larsen, James Kress, Hank Childs, David Pugmire","doi":"10.1109/TVCG.2025.3557453","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3557453","url":null,"abstract":"<p><p>Particle advection is one of the foundational algorithms for visualization and analysis and is central to understanding vector fields common to scientific simulations. Achieving efficient performance with large data in a distributed memory setting is notoriously difficult. Because of its simplicity and minimized movement of large vector field data, the Parallelize over Data (POD) algorithm has become a de facto standard. Despite its simplicity and ubiquitous usage, the scaling issues with the POD algorithm are known and have been described throughout the literature. In this paper, we describe a set of in-depth analyses of the POD algorithm that shed new light on the underlying causes for the poor performance of this algorithm. We designed a series of representative workloads to study the performance of the POD algorithm and executed them on a supercomputer while collecting timing and statistical data for analysis. we then performed two different types of analysis. In the first analysis, we introduce two novel metrics for measuring algorithmic efficiency over the course of a workload run. The second analysis was from the perspective of the particles being advected. Using particlecentric analysis, we identify that the overheads associated with particle movement between processes (not the communication itself) have a dramatic impact on the overall execution time. These overheads become particularly costly when flow features span multiple blocks, resulting in repeated particle circulation (which we term \"ping pong particles\") between blocks. Our findings shed important light on the underlying causes of poor performance and offer directions for future research to address these limitations.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GaussEdit: Adaptive 3D Scene Editing with Text and Image Prompts.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-01 DOI: 10.1109/TVCG.2025.3556745
Zhenyu Shu, Junlong Yu, Kai Chao, Shiqing Xin, Ligang Liu
{"title":"GaussEdit: Adaptive 3D Scene Editing with Text and Image Prompts.","authors":"Zhenyu Shu, Junlong Yu, Kai Chao, Shiqing Xin, Ligang Liu","doi":"10.1109/TVCG.2025.3556745","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3556745","url":null,"abstract":"<p><p>This paper presents GaussEdit, a framework for adaptive 3D scene editing guided by text and image prompts. GaussEdit leverages 3D Gaussian Splatting as its backbone for scene representation, enabling convenient Region of Interest selection and efficient editing through a three-stage process. The first stage involves initializing the 3D Gaussians to ensure high-quality edits. The second stage employs an Adaptive Global-Local Optimization strategy to balance global scene coherence and detailed local edits and a category-guided regularization technique to alleviate the Janus problem. The final stage enhances the texture of the edited objects using a sophisticated image-to-image synthesis technique, ensuring that the results are visually realistic and align closely with the given prompts. Our experimental results demonstrate that GaussEdit surpasses existing methods in editing accuracy, visual fidelity, and processing speed. By successfully embedding user-specified concepts into 3D scenes, GaussEdit is a powerful tool for detailed and user-driven 3D scene editing, offering significant improvements over traditional methods.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143766269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Frequency Awareness Functional Maps for Robust Shape Matching.
IEEE transactions on visualization and computer graphics Pub Date : 2025-04-01 DOI: 10.1109/TVCG.2025.3556209
Feifan Luo, Qinsong Li, Ling Hu, Haibo Wang, Haojun Xu, Xinru Liu, Shengjun Liu, Hongyang Chen
{"title":"Deep Frequency Awareness Functional Maps for Robust Shape Matching.","authors":"Feifan Luo, Qinsong Li, Ling Hu, Haibo Wang, Haojun Xu, Xinru Liu, Shengjun Liu, Hongyang Chen","doi":"10.1109/TVCG.2025.3556209","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3556209","url":null,"abstract":"<p><p>Traditional deep functional map frameworks are widely used for 3D shape matching; however, many methods fail to adaptively capture the relevant frequency information required for functional map estimation in complex scenarios, leading to poor performance, especially under significant deformations. To address these challenges, we propose a novel unsupervised learning-based framework, Deep Frequency Awareness Functional Maps (DFAFM), specifically designed to tackle diverse shape-matching problems. Our approach introduces the Spectral Filter Operator Preservation constraint, which ensures the preservation of critical frequency information. These constraints promote frequency awareness by learning a set of spectral filters and incorporating them as a loss function to jointly supervise the functional maps, pointwise maps, and spectral filters. The spectral filters are constructed using orthonormal Jacobi polynomials with learnable coefficients, enabling adaptive and efficient frequency representation. Furthermore, we propose a refinement strategy that leverages the learned spectral filters and constraints to enhance the accuracy of the final pointwise map. Extensive experiments conducted on multiple benchmark datasets demonstrate that our method outperforms state-of-the-art approaches, particularly in challenging scenarios involving non-isometric deformations and inconsistent topology. Our code is available at https://github.com/LuoFeifan77/DeepFAFM.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143766266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IllumiDiff: Indoor Illumination Estimation from a Single Image with Diffusion Model.
IEEE transactions on visualization and computer graphics Pub Date : 2025-03-31 DOI: 10.1109/TVCG.2025.3553853
Shiyuan Shen, Zhongyun Bao, Wenju Xu, Chunxia Xiao
{"title":"IllumiDiff: Indoor Illumination Estimation from a Single Image with Diffusion Model.","authors":"Shiyuan Shen, Zhongyun Bao, Wenju Xu, Chunxia Xiao","doi":"10.1109/TVCG.2025.3553853","DOIUrl":"10.1109/TVCG.2025.3553853","url":null,"abstract":"<p><p>Illumination estimation from a single indoor image is a promising yet challenging task. Existing indoor illumination estimation methods mainly regress lighting parameters or infer a panorama from a limited field-of-view image. Nevertheless, these methods fail to recover a panorama with both well-distributed illumination and detailed environment textures, leading to a lack of realism in rendering the embedded 3D objects with complex materials. This paper presents a novel multi-stage illumination estimation framework named IllumiDiff. Specifically, in Stage I, we first estimate illumination conditions from the input image, including the illumination distribution as well as the environmental texture of the scene. In Stage II, guided by the estimated illumination conditions, we design a conditional panoramic texture diffusion model to generate a high-quality LDR panorama. In Stage III, we leverage the illumination conditions to further reconstruct the LDR panorama to an HDR panorama. Extensive experiments demonstrate that our IllumiDiff can generate an HDR panorama with realistic illumination distribution and rich texture details from a single limited field-of-view indoor image. The generated panorama can produce impressive rendering results for the embedded 3D objects with various materials.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143766247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0