Computer Graphics Forum: Latest Publications

GauLoc: 3D Gaussian Splatting-based Camera Relocalization
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15256
Zhe Xin, Chengkai Dai, Ying Li, Chenming Wu
{"title":"GauLoc: 3D Gaussian Splatting-based Camera Relocalization","authors":"Zhe Xin,&nbsp;Chengkai Dai,&nbsp;Ying Li,&nbsp;Chenming Wu","doi":"10.1111/cgf.15256","DOIUrl":"https://doi.org/10.1111/cgf.15256","url":null,"abstract":"<p>3D Gaussian Splatting (3DGS) has emerged as a promising representation for scene reconstruction and novel view synthesis for its explicit representation and real-time capabilities. This technique thus holds immense potential for use in mapping applications. Consequently, there is a growing need for an efficient and effective camera relocalization method to complement the advantages of 3DGS. This paper presents a camera relocalization method, namely GauLoc, in a scene represented by 3DGS. Unlike previous methods that rely on pose regression or photometric alignment, our proposed method leverages the differential rendering capability provided by 3DGS. The key insight of our work is the proposed implicit featuremetric alignment, which effectively optimizes the alignment between rendered keyframes and the query frames, and leverages the epipolar geometry to facilitate the convergence of camera poses conditioned explicit 3DGS representation. The proposed method significantly improves the relocalization accuracy even in complex scenarios with large initial camera rotation and translation deviations. Extensive experiments validate the effectiveness of our proposed method, showcasing its potential to be applied in many real-world applications. Source code will be released at https://github.com/xinzhe11/GauLoc.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15247
Chufeng Xiao, Hongbo Fu
{"title":"CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing","authors":"Chufeng Xiao,&nbsp;Hongbo Fu","doi":"10.1111/cgf.15247","DOIUrl":"https://doi.org/10.1111/cgf.15247","url":null,"abstract":"<div>\u0000 \u0000 <p>Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images. However, existing methods primarily rely on textual descriptions, leading to limited control over customized images and failing to support fine-grained and local editing (e.g., shape, pose, and details). In this paper, we identify sketches as an intuitive and versatile representation that can facilitate such control, e.g., contour lines capturing shape information and flow lines representing texture. This motivates us to explore a novel task of sketch concept extraction: given one or more sketch-image pairs, we aim to extract a special sketch concept that bridges the correspondence between the images and sketches, thus enabling sketch-based image synthesis and editing at a fine-grained level. To accomplish this, we introduce CustomSketching, a two-stage framework for extracting novel sketch concepts via few-shot learning. Considering that an object can often be depicted by a contour for general shapes and additional strokes for internal details, we introduce a <i>dual-sketch</i> representation to reduce the inherent ambiguity in sketch depiction. We employ a shape loss and a regularization loss to balance fidelity and editability during optimization. Through extensive experiments, a user study, and several applications, we show our method is effective and superior to the adapted baselines.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15247","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LightUrban: Similarity Based Fine-grained Instancing for Lightweighting Complex Urban Point Clouds
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15238
Z.A. Lu, W.D. Xiong, P. Ren, J.Y. Jia
{"title":"LightUrban: Similarity Based Fine-grained Instancing for Lightweighting Complex Urban Point Clouds","authors":"Z.A. Lu,&nbsp;W.D. Xiong,&nbsp;P. Ren,&nbsp;J.Y. Jia","doi":"10.1111/cgf.15238","DOIUrl":"https://doi.org/10.1111/cgf.15238","url":null,"abstract":"<p>Large-scale urban point clouds play a vital role in various applications, while rendering and transmitting such data remains challenging due to its large volume, complicated structures, and significant redundancy. In this paper, we present <i>LightUrban</i>, the first point cloud instancing framework for efficient rendering and transmission of fine-grained complex urban scenes. We first introduce a segmentation method to organize the point clouds into individual buildings and vegetation instances from coarse to fine. Next, we propose an unsupervised similarity detection approach to accurately group instances with similar shapes. Furthermore, a fast pose and size estimation component is applied to calculate the transformations between the representative instance and the corresponding similar instances in each group. By replacing individual instances with their group's representative instances, the data volume and redundancy can be dramatically reduced. Experimental results on large-scale urban scenes demonstrate the effectiveness of our algorithm. To sum up, our method not only structures the urban point clouds but also significantly reduces data volume and redundancy, filling the gap in lightweighting urban landscapes through instancing.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust Diffusion-based Motion In-betweening
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15260
Jia Qin, Peng Yan, Bo An
{"title":"Robust Diffusion-based Motion In-betweening","authors":"Jia Qin,&nbsp;Peng Yan,&nbsp;Bo An","doi":"10.1111/cgf.15260","DOIUrl":"https://doi.org/10.1111/cgf.15260","url":null,"abstract":"<p>The emergence of learning-based motion in-betweening techniques offers animators a more efficient way to animate characters. However, existing non-generative methods either struggle to support long transition generation or produce results that lack diversity. Meanwhile, diffusion models have shown promising results in synthesizing diverse and high-quality motions driven by text and keyframes. However, in these methods, keyframes often serve as a guide rather than a strict constraint and can sometimes be ignored when keyframes are sparse. To address these issues, we propose a lightweight yet effective diffusion-based motion in-betweening framework that generates animations conforming to keyframe constraints. We incorporate keyframe constraints into the training phase to enhance robustness in handling various constraint densities. Moreover, we employ relative positional encoding to improve the model's generalization on long range in-betweening tasks. This approach enables the model to learn from short animations while generating realistic in-betweening motions spanning thousands of frames. We conduct extensive experiments to validate our framework using the newly proposed metrics K-FID, K-Diversity, and K-Error, designed to evaluate generative in-betweening methods. Results demonstrate that our method outperforms existing diffusion-based methods across various lengths and keyframe densities. We also show that our method can be applied to text-driven motion synthesis, offering fine-grained control over the generated results.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Surface-based Appearance Model for Pennaceous Feathers
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15235
Juan Raúl Padrón-Griffe, Dario Lanza, Adrián Jarabo, Adolfo Muñoz
{"title":"A Surface-based Appearance Model for Pennaceous Feathers","authors":"Juan Raúl Padrón-Griffe,&nbsp;Dario Lanza,&nbsp;Adrián Jarabo,&nbsp;Adolfo Muñoz","doi":"10.1111/cgf.15235","DOIUrl":"https://doi.org/10.1111/cgf.15235","url":null,"abstract":"<div>\u0000 \u0000 <p>The appearance of a real-world feather results from the complex interaction of light with its multi-scale biological structure, including the central shaft, branching barbs, and interlocking barbules on those barbs. In this work, we propose a practical surface-based appearance model for feathers. We represent the far-field appearance of feathers using a BSDF that implicitly represents the light scattering from the main biological structures of a feather, such as the shaft, barb and barbules. Our model accounts for the particular characteristics of feather barbs such as the non-cylindrical cross-sections and the scattering media via a numerically-based BCSDF. To model the relative visibility between barbs and barbules, we derive a masking term for the differential projected areas of the different components of the feather's microgeometry, which allows us to analytically compute the masking between barbs and barbules. As opposed to previous works, our model uses a lightweight representation of the geometry based on a 2D texture, and does not require explicitly representing the barbs as curves. We show the flexibility and potential of our appearance model approach to represent the most important visual features of several pennaceous feathers.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15235","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Color-Accurate Camera Capture with Multispectral Illumination and Multiple Exposures
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15252
H. Gao, R. K. Mantiuk, G. D. Finlayson
{"title":"Color-Accurate Camera Capture with Multispectral Illumination and Multiple Exposures","authors":"H. Gao,&nbsp;R. K. Mantiuk,&nbsp;G. D. Finlayson","doi":"10.1111/cgf.15252","DOIUrl":"https://doi.org/10.1111/cgf.15252","url":null,"abstract":"<div>\u0000 \u0000 <p>Cameras cannot capture the same colors as those seen by the human eye because the eye and the cameras' sensors differ in their spectral sensitivity. To obtain a plausible approximation of perceived colors, the camera's Image Signal Processor (ISP) employs a color correction step. However, even advanced color correction methods cannot solve this underdetermined problem, and visible color inaccuracies are always present. Here, we explore an approach in which we can capture accurate colors with a regular camera by optimizing the spectral composition of the illuminant and capturing one or more exposures. We jointly optimize for the signal-to-noise ratio and for the color accuracy irrespective of the spectral composition of the scene. One or more images captured under controlled multispectral illuminants are then converted into a color-accurate image as seen under the standard illuminant of D65. Our optimization allows us to reduce the color error by 20–60% (in terms of CIEDE 2000), depending on the number of exposures and camera type. The method can be used in applications in which illumination can be controlled, and high colour accuracy is required, such as product photography or with a multispectral camera flash. The code is available at https://github.com/gfxdisp/multispectral_color_correction.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15252","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
P-Hologen: An End-to-End Generative Framework for Phase-Only Holograms
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15244
JooHyun Park, YuJin Jeon, HuiYong Kim, SeungHwan Baek, HyeongYeop Kang
{"title":"P-Hologen: An End-to-End Generative Framework for Phase-Only Holograms","authors":"JooHyun Park,&nbsp;YuJin Jeon,&nbsp;HuiYong Kim,&nbsp;SeungHwan Baek,&nbsp;HyeongYeop Kang","doi":"10.1111/cgf.15244","DOIUrl":"https://doi.org/10.1111/cgf.15244","url":null,"abstract":"<div>\u0000 \u0000 <p>Holography stands at the forefront of visual technology, offering immersive, three-dimensional visualizations through the manipulation of light wave amplitude and phase. Although generative models have been extensively explored in the image domain, their application to holograms remains relatively underexplored due to the inherent complexity of phase learning. Exploiting generative models for holograms offers exciting opportunities for advancing innovation and creativity, such as semantic-aware hologram generation and editing. Currently, the most viable approach for utilizing generative models in the hologram domain involves integrating an image-based generative model with an image-to-hologram conversion model, which comes at the cost of increased computational complexity and inefficiency. To tackle this problem, we introduce P-Hologen, the first end-to-end generative framework designed for phase-only holograms (POHs). P-Hologen employs vector quantized variational autoencoders to capture the complex distributions of POHs. It also integrates the angular spectrum method into the training process, constructing latent spaces for complex phase data using strategies from the image processing domain. Extensive experiments demonstrate that P-Hologen achieves superior quality and computational efficiency compared to the existing methods. Furthermore, our model generates high-quality unseen, diverse holographic content from its learned latent space without requiring pre-existing images. Our work paves the way for new applications and methodologies in holographic content creation, opening a new era in the exploration of generative holographic content. The code for our paper is publicly available on https://github.com/james0223/P-Hologen.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15244","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Palette-Based Recolouring of Gradient Meshes
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15258
Willard A. Verschoore de la Houssaije, Jose Echevarria, Jiří Kosinka
{"title":"Palette-Based Recolouring of Gradient Meshes","authors":"Willard A. Verschoore de la Houssaije,&nbsp;Jose Echevarria,&nbsp;Jiří Kosinka","doi":"10.1111/cgf.15258","DOIUrl":"https://doi.org/10.1111/cgf.15258","url":null,"abstract":"<div>\u0000 \u0000 <p>Gradient meshes are a vector graphics primitive formed by a regular grid of bicubic quad patches. They allow for the creation of complex geometries and colour gradients, with recent extensions supporting features such as local refinement and sharp colour transitions. While many methods exist for recolouring raster images, often achieved by modifying an automatically detected palette of the image, gradient meshes have not received the same amount of attention when it comes to global colour editing. We present a novel method that allows for real-time palette-based recolouring of gradient meshes, including gradient meshes constructed using local refinement and containing sharp colour transitions. We demonstrate the utility of our method on synthetic illustrative examples as well as on complex gradient meshes.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15258","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VRTree: Example-Based 3D Interactive Tree Modeling in Virtual Reality
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15254
Di Wu, Mingxin Yang, Zhihao Liu, Fangyuan Tu, Fang Liu, Zhanglin Cheng
{"title":"VRTree: Example-Based 3D Interactive Tree Modeling in Virtual Reality","authors":"Di Wu,&nbsp;Mingxin Yang,&nbsp;Zhihao Liu,&nbsp;Fangyuan Tu,&nbsp;Fang Liu,&nbsp;Zhanglin Cheng","doi":"10.1111/cgf.15254","DOIUrl":"https://doi.org/10.1111/cgf.15254","url":null,"abstract":"<p>We present VRTree, an example-based interactive virtual reality (VR) system designed to efficiently create diverse 3D tree models while faithfully preserving botanical characteristics of real-world references. Our method employs a novel representation called Hierarchical Branch Lobe (HBL), which captures the hierarchical features of trees and serves as a versatile intermediary for intuitive VR interaction. The HBL representation decomposes a 3D tree into a series of concise examples, each consisting of a small set of main branches, secondary branches, and lobe-bounded twigs. The core of our system involves two key components: (1) We design an automatic algorithm to extract an initial library of HBL examples from real tree point clouds. These HBL examples can be optionally refined according to user intentions through an interactive editing process. (2) Users can interact with the extracted HBL examples to assemble new tree structures, ensuring the local features align with the target tree species. A shape-guided procedural growth algorithm then transforms these assembled HBL structures into highly realistic, finegrained 3D tree models. Extensive experiments and user studies demonstrate that VRTree outperforms current state-of-the-art approaches, offering a highly effective and easy-to-use VR tool for tree modeling.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inverse Garment and Pattern Modeling with a Differentiable Simulator
IF 2.7, Q4 (CAS), Computer Science
Computer Graphics Forum Pub Date: 2024-11-07 DOI: 10.1111/cgf.15249
Boyang Yu, Frederic Cordier, Hyewon Seo
{"title":"Inverse Garment and Pattern Modeling with a Differentiable Simulator","authors":"Boyang Yu,&nbsp;Frederic Cordier,&nbsp;Hyewon Seo","doi":"10.1111/cgf.15249","DOIUrl":"https://doi.org/10.1111/cgf.15249","url":null,"abstract":"<div>\u0000 \u0000 <p>The capability to generate simulation-ready garment models from 3D shapes of clothed people will significantly enhance the interpretability of captured geometry of real garments, as well as their faithful reproduction in the digital world. This will have notable impact on fields like shape capture in social VR, and virtual try-on in the fashion industry. To align with the garment modeling process standardized by the fashion industry and cloth simulation software, it is required to recover 2D patterns, which are then placed around the wearer's body model and seamed prior to the draping simulation. This involves an inverse garment design problem, which is the focus of our work here: Starting with an arbitrary target garment geometry, our system estimates its animatable replica along with its corresponding 2D pattern. Built upon a differentiable cloth simulator, it runs an optimization process that is directed towards minimizing the deviation of the simulated garment shape from the target geometry, while maintaining desirable properties such as left-to-right symmetry. Experimental results on various real-world and synthetic data show that our method outperforms state-of-the-art methods in producing both high-quality garment models and accurate 2D patterns.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15249","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0