Computer Graphics Forum: Latest Articles

GSEditPro: 3D Gaussian Splatting Editing with Attention-based Progressive Localization
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-11-04 · DOI: 10.1111/cgf.15215
Y. Sun, R. Tian, X. Han, X. Liu, Y. Zhang, K. Xu

Abstract: With the emergence of large-scale Text-to-Image (T2I) models and implicit 3D representations such as Neural Radiance Fields (NeRF), many text-driven generative editing methods based on NeRF have appeared. However, the implicit encoding of geometric and textural information makes it difficult to locate and control objects accurately during editing. Recently, significant advances have been made in editing methods for 3D Gaussian Splatting, a real-time rendering technique that relies on explicit representation. However, these methods still suffer from inaccurate localization and limited control over edits. To tackle these challenges, we propose GSEditPro, a novel 3D scene editing framework that allows users to perform creative and precise editing using only text prompts. Leveraging the explicit nature of the 3D Gaussian representation, we introduce an attention-based progressive localization module that attaches semantic labels to each Gaussian during rendering. This enables precise localization of editing areas by classifying Gaussians according to their relevance to the editing prompt, derived from the cross-attention layers of the T2I model. Furthermore, we present an innovative editing optimization method based on 3D Gaussian Splatting that obtains stable and refined editing results under the guidance of Score Distillation Sampling and pseudo ground truth. We demonstrate the efficacy of our method through extensive experiments.

Citations: 0
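The attention-based labeling step described in the abstract can be sketched as follows. This is a minimal illustration assuming per-Gaussian relevance scores have already been aggregated from the T2I model's cross-attention maps; the function name and the simple min-max normalization plus thresholding are hypothetical simplifications, not the paper's exact scheme:

```python
import numpy as np

def label_gaussians_by_attention(attention_scores, threshold=0.5):
    """Label each Gaussian as editable or frozen from cross-attention relevance.

    attention_scores: (N,) array of per-Gaussian relevance to the editing
    prompt, e.g. obtained by splatting T2I cross-attention maps back onto
    the Gaussians (hypothetical aggregation; the paper's scheme may differ).
    Returns a boolean mask: True marks Gaussians in the editing region.
    """
    # Min-max normalize scores to [0, 1] so a single threshold is meaningful.
    s = attention_scores - attention_scores.min()
    s = s / (s.max() + 1e-8)
    return s >= threshold

scores = np.array([0.1, 0.9, 0.4, 0.8])
mask = label_gaussians_by_attention(scores, threshold=0.5)
```

Gaussians outside the mask would be frozen during the Score Distillation Sampling optimization, so edits stay confined to the localized region.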
Multiscale Spectral Manifold Wavelet Regularizer for Unsupervised Deep Functional Maps
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-11-04 · DOI: 10.1111/cgf.15230
Shengjun Liu, Jing Meng, Ling Hu, Yueyu Guo, Xinru Liu, Xiaoxia Yang, Haibo Wang, Qinsong Li

Abstract: In deep functional maps, the regularizer used when computing the functional map is crucial for ensuring the global consistency of the resulting pointwise map. Because regularizers integrated into deep learning must be differentiable, it is not trivial to incorporate informative axiomatic structural constraints, such as the orientation-preserving term, into a deep functional map. Commonly used regularizers include the Laplacian-commutativity term and the resolvent Laplacian commutativity term, but both are limited to single-scale analysis when capturing geometric information. To this end, we propose a novel and theoretically well-justified regularizer that commutes the functional map with the multiscale spectral manifold wavelet operator. This regularizer strengthens the isometry constraints on the functional map and endows it with better structural properties through multiscale analysis. Furthermore, we design an unsupervised deep functional map with this regularizer in a fully differentiable way. Quantitative and qualitative comparisons with several existing techniques on (near-)isometric and non-isometric datasets show our method's superior accuracy and generalization capabilities. Additionally, we show that our regularizer can easily be inserted into other functional map methods to improve their accuracy.

Citations: 0
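A commutativity regularizer of this kind can be written compactly in the reduced spectral basis, where the wavelet operator is diagonal with entries g(tλ). The sketch below uses the common Mexican-hat-style filter g(x) = x·exp(−x) as an illustrative choice; the paper's exact wavelet family may differ:

```python
import numpy as np

def wavelet_commutativity_loss(C, evals1, evals2, scales):
    """Multiscale commutativity regularizer for a functional map C.

    C: (k, k) functional map from shape 1 to shape 2 in the reduced
    Laplace-Beltrami eigenbasis.
    evals1, evals2: (k,) eigenvalues of the two shapes.
    scales: iterable of wavelet scales t.
    In the eigenbasis, a spectral manifold wavelet operator at scale t is
    diagonal with entries g(t * lambda); we use g(x) = x * exp(-x) here.
    """
    loss = 0.0
    for t in scales:
        g1 = t * evals1 * np.exp(-t * evals1)  # diagonal of wavelet op, shape 1
        g2 = t * evals2 * np.exp(-t * evals2)  # diagonal of wavelet op, shape 2
        # || C diag(g1) - diag(g2) C ||_F^2, summed over scales
        loss += np.sum((C * g1[None, :] - g2[:, None] * C) ** 2)
    return loss
```

For an isometric pair (identical eigenvalues) the identity map incurs zero penalty at every scale, which is exactly the behavior a commutativity term should have.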
Distinguishing Structures from Textures by Patch-based Contrasts around Pixels for High-quality and Efficient Texture Filtering
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-11-04 · DOI: 10.1111/cgf.15212
Shengchun Wang, Panpan Xu, Fei Hou, Wencheng Wang, Chong Zhao

Abstract: Existing methods still struggle to distinguish structures from texture details, which hinders texture filtering. Observing that the textures on the two sides of a structural edge always differ markedly in appearance, we determine whether a pixel lies on a structural edge by exploiting the appearance contrast between patches around the pixel, and we further propose an efficient implementation. We demonstrate that our method distinguishes structures from texture details more effectively than existing methods, and that the patches we require for texture measurement can be at least half the size of those used by existing methods. Thus we improve texture filtering in both quality and efficiency, as shown by our experimental results; for example, we can process textured images with a resolution of 800 × 600 pixels in real time. (The code is available at https://github.com/hefengxiyulu/MLPC)

Citations: 0
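The core observation, that patches on opposite sides of a structural edge differ in appearance while texture detail does not, can be illustrated with a much-simplified contrast measure. The sketch below compares only mean intensities of opposing patches on a grayscale image; the paper's actual patch-based contrast is more elaborate:

```python
import numpy as np

def patch_contrast(img, y, x, r=3):
    """Simplified structure-vs-texture test at pixel (y, x).

    Compares the mean intensity of the left/right and top/bottom patches
    around the pixel; a structural edge separates patches with very
    different appearance, while texture detail averages out. This is an
    illustrative sketch, not the paper's exact measure.
    """
    left   = img[y - r:y + r + 1, x - 2 * r - 1:x]
    right  = img[y - r:y + r + 1, x + 1:x + 2 * r + 2]
    top    = img[y - 2 * r - 1:y, x - r:x + r + 1]
    bottom = img[y + 1:y + 2 * r + 2, x - r:x + r + 1]
    return max(abs(left.mean() - right.mean()),
               abs(top.mean() - bottom.mean()))
```

On a vertical step edge the left/right contrast is maximal while the top/bottom contrast vanishes, so the pixel is classified as lying on a structure edge.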
Ray Tracing Animated Displaced Micro-Meshes
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-10-30 · DOI: 10.1111/cgf.15225
Holger Gruen, Carsten Benthin, Andrew Kensler, Joshua Barczak, David McAllister

Abstract: We present a new method that enables efficient ray tracing of virtually artefact-free animated displaced micro-meshes (DMMs) [MMT23] while preserving their low memory footprint and low BVH build and update cost. DMMs compactly represent micro-triangle geometry through hierarchical encoding of displacements. Displacements are computed with respect to a coarse base mesh and are used to displace new vertices introduced during 1:4 subdivision of the base mesh. Applying non-rigid transformations to the base mesh can produce silhouette and normal artefacts (see Figure 1) during animation. We propose an approach that prevents these artefacts by interpolating transformation matrices before applying them to the DMM representation. Our interpolation-based algorithm does not change DMM data structures, and it allows efficient bounding of animated micro-triangle geometry, which is essential for fast tessellation-free ray tracing of animated DMMs.

Citations: 0
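One simple way to interpolate transformation matrices, rather than transformed vertices, is to blend the matrices linearly and then project the result back onto the nearest rigid transform via polar decomposition. The sketch below illustrates that idea under stated assumptions; the paper's exact interpolation scheme may differ:

```python
import numpy as np

def interpolate_rigid(M0, M1, t):
    """Blend two 4x4 transformation matrices (illustrative sketch).

    Linearly blends the matrices, then projects the upper-left 3x3 block
    back onto the nearest rotation via polar decomposition (SVD), so the
    blended transform stays rigid and yields tight, artefact-free bounds
    when applied to displaced micro-mesh vertices.
    """
    M = (1.0 - t) * M0 + t * M1
    U, _, Vt = np.linalg.svd(M[:3, :3])
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    M[:3, :3] = R
    return M
```

Blending the identity with a 90-degree rotation at t = 0.5 yields a clean 45-degree rotation instead of the shrunken matrix a plain linear blend produces, which is what keeps silhouettes and normals intact during animation.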
Anisotropic Specular Image-Based Lighting Based on BRDF Major Axis Sampling
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-10-30 · DOI: 10.1111/cgf.15233
Giovanni Cocco, Cédric Zanni, Xavier Chermain

Abstract: Anisotropic specular appearances are ubiquitous in the environment: brushed stainless steel pans, kettles, elevator walls, fur, or scratched plastics. Real-time rendering of these materials with image-based lighting is challenging due to the complex shape of the bidirectional reflectance distribution function (BRDF). We propose an anisotropic specular image-based lighting method that can serve as a drop-in replacement for the standard bent normal technique [Rev11]. Our method yields more realistic results at a 50% increase in computation time over the previous technique, using the same high dynamic range (HDR) preintegrated environment image. We use several environment samples positioned along the major axis of the specular microfacet BRDF. We derive an analytic formula to determine the two closest and two farthest points from the reflected direction on an approximation of the BRDF confidence region boundary. The two farthest points define the BRDF major axis, while the two closest points are used to approximate the BRDF width. The environment level of detail is derived from the BRDF width and the distance between the samples. We extensively compare our method with the bent normal technique and the ground truth using the GGX specular BRDF.

Citations: 0
Variable offsets and processing of implicit forms toward the adaptive synthesis and analysis of heterogeneous conforming microstructure
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-10-30 · DOI: 10.1111/cgf.15224
Q. Y. Hong, P. Antolin, G. Elber, M.-S. Kim

Abstract: The synthesis of porous, lattice, or microstructure geometries has captured the attention of many researchers in recent years. Implicit forms, such as triply periodic minimal surfaces (TPMS), have recently attracted significant attention as tiles in lattices, partly because implicit forms can synthesize complex tile topologies more easily than parametric forms. In this work, we show how variable offsets of implicit forms can be used in both lattice design and lattice analysis, with graded wall and edge thicknesses fully controlled throughout the lattice and even varying within a single tile. As a result, (geometrically) heterogeneous lattices can be created and adapted to follow analysis results while maintaining continuity between adjacent tiles. We demonstrate this ability on several 3D models, including TPMS.

Citations: 0
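The variable-offset idea can be illustrated with the gyroid, a standard TPMS whose implicit form is well known: a graded solid wall is the set where |f(p)| ≤ offset(p), and letting the offset vary spatially grades the wall thickness within a tile. This is a sketch of the concept only; the paper's offset processing is more general:

```python
import numpy as np

def gyroid_shell(x, y, z, thickness):
    """Gyroid TPMS wall with a variable offset (illustrative sketch).

    f is the standard gyroid level-set function; |f| <= thickness defines
    a solid wall around the zero surface. `thickness` may be a scalar or
    an array matching x/y/z, so the wall width can vary within one tile.
    Returns True where the point lies inside the solid wall.
    """
    f = (np.sin(x) * np.cos(y)
         + np.sin(y) * np.cos(z)
         + np.sin(z) * np.cos(x))
    return np.abs(f) <= thickness
```

Evaluating this on a voxel grid with a spatially varying `thickness` field, e.g. driven by stress from an analysis pass, yields the kind of graded, conforming microstructure the abstract describes.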
MISNeR: Medical Implicit Shape Neural Representation for Image Volume Visualisation
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-10-30 · DOI: 10.1111/cgf.15222
G. Jin, Y. Jung, L. Bi, J. Kim

Abstract: Three-dimensional visualisation of meshes reconstructed from medical images is commonly used for various clinical applications, including pre-/post-surgical planning. Such meshes are conventionally generated by extracting the surface from volumetric segmentation masks, and therefore suffer inherent staircase artefacts caused by anisotropic voxel dimensions. The time-consuming manual refinement needed to remove artefacts and/or isolated regions adds further limitations. Methods that directly generate meshes from volumetric data by template deformation are often limited to simple topological structures, while methods that use implicit functions for continuous surfaces do not reach the mesh reconstruction accuracy of segmentation-based methods. In this study, we address these limitations by combining an implicit function representation with a multi-level deep learning architecture. We introduce a novel multi-level local feature sampling component that leverages spatial features for the implicit function regression to enhance the segmentation result. We further introduce a shape boundary estimator that accelerates explicit mesh reconstruction by minimising the number of signed distance queries during model inference. The result is a multi-level deep learning network that directly regresses the implicit function from medical image volumes to a continuous surface model, which can be used for mesh reconstruction at arbitrarily high volume resolution to minimise staircase artefacts. We evaluated our method on pelvic computed tomography (CT) datasets from two public sources with varying z-axis resolutions. We show that our method minimises staircase artefacts while achieving surface accuracy comparable to state-of-the-art segmentation algorithms. Furthermore, our method is nine times faster in volume reconstruction than comparable implicit shape representation networks.

Citations: 0
FSH3D: 3D Representation via Fibonacci Spherical Harmonics
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-10-24 · DOI: 10.1111/cgf.15231
Zikuan Li, Anyi Huang, Wenru Jia, Qiaoyun Wu, Mingqiang Wei, Jun Wang

Abstract: Spherical harmonics are a favourable technique for 3D representation, employing a frequency-based approach through the spherical harmonic transform (SHT). Typically, SHT is performed on equiangular sampling grids. However, these grids are non-uniform on spherical surfaces and exhibit local anisotropy, a common limitation of existing spherical harmonic decomposition methods. This paper proposes a 3D representation method using Fibonacci Spherical Harmonics (FSH3D). We introduce a spherical Fibonacci grid (SFG), which is more uniform than equiangular grids for SHT in the frequency domain. Our method employs analytical weights for SHT on the SFG, effectively assigning sampling errors to spherical harmonic degrees higher than the recovered band-limited function. This provides a novel solution for spherical harmonic transformation on non-equiangular grids. The key advantages of our FSH3D method are: 1) with the same number of sampling points, the SFG captures more features without bias compared to equiangular grids; 2) the root mean square error of 32-degree spherical harmonic coefficients is reduced by approximately 34.6% for the SFG compared to equiangular grids; and 3) FSH3D offers more stable frequency domain representations, especially for rotating functions. Applying FSH3D to 3D shape reconstruction and 3D shape classification yields more accurate and robust representations. Our code is publicly available at https://github.com/Miraclelzk/Fibonacci-Spherical-Harmonics.

Citations: 0
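The spherical Fibonacci grid itself has a standard golden-angle construction, which is sketched below. This generates the near-uniform sample directions underlying the SFG; the paper's analytical SHT weights on this grid are omitted:

```python
import numpy as np

def spherical_fibonacci_grid(n):
    """Generate n near-uniform sample directions on the unit sphere.

    Standard spherical Fibonacci construction: latitudes are spaced
    uniformly in z (so each point covers equal area) and longitudes
    advance by the golden angle, avoiding the polar clustering of
    equiangular grids.
    """
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle increments
    z = 1.0 - (2.0 * i + 1.0) / n                 # uniform in z => uniform in area
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))     # radius of the z-slice
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```

Because the points are equal-area by construction, their centroid sits essentially at the origin, one symptom of the uniformity that reduces bias in the sampled harmonic coefficients.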
Disk B-spline on 𝕊²: A Skeleton-based Region Representation on the 2-Sphere
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-10-24 · DOI: 10.1111/cgf.15239
Chunhao Zheng, Yuming Zhao, Zhongke Wu, Xingce Wang

Abstract: Owing to the widespread applications of 2-dimensional spherical designs, there has been an increasing need in recent years for modeling on the 𝕊² manifold. Because of the sphere's non-Euclidean nature, finding a method to represent 2D regions on the 𝕊² manifold poses several challenges. In this paper, we propose a skeleton-based method for representing regions on 𝕊²: the disk B-spline (DBSC) on 𝕊². We first give the definition and basic algorithms of DBSC on 𝕊². We then provide its evaluation method, including the computation of boundary points, interior points, and their corresponding derivatives. Building on this, we present several modeling methods for DBSC on 𝕊², including approximation and deformation. Finally, we show some striking application examples of DBSC on 𝕊². This work lays a theoretical foundation for further applications of DBSC on 𝕊².

Citations: 0
Exploring Fast and Flexible Zero-Shot Low-Light Image/Video Enhancement
IF 2.7 · CAS Zone 4 · Computer Science
Computer Graphics Forum 43(7) · Pub Date: 2024-10-24 · DOI: 10.1111/cgf.15210
Xianjun Han, Taoli Bao, Hongyu Yang

Abstract: Low-light image/video enhancement is a challenging task when images or video are captured under harsh lighting conditions. Existing methods mostly formulate this task as image-to-image conversion via supervised or unsupervised learning. However, such conversion methods require extremely large amounts of training data, whether paired or unpaired. In addition, these methods are tied to their specific training data, making it difficult for the trained model to enhance other types of images or video. In this paper, we explore a novel, fast and flexible zero-shot framework for low-light image or video enhancement. Without relying on prior training or relationships among neighboring frames, we estimate the illumination of the input image/frame with a well-designed network. The proposed zero-shot architecture consists of illumination estimation and residual correction modules. The network is very concise and requires no paired or unpaired data during training, which allows low-light enhancement to be performed in a few simple iterations. Despite its simplicity, we show that the method is fast and generalizes well to diverse lighting conditions. Extensive experiments on various images and videos demonstrate, both qualitatively and quantitatively, the advantages of our method over state-of-the-art methods.

Citations: 0
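The illumination-estimation idea can be illustrated with a classical Retinex-style closed form. This is illustrative only: the paper estimates illumination with a small network plus a residual correction module, not with the max-channel heuristic and gamma curve assumed here:

```python
import numpy as np

def enhance_low_light(img, gamma=0.6, eps=1e-4):
    """Retinex-spirit low-light enhancement sketch.

    img: float RGB array in [0, 1].
    Estimates illumination as the max RGB channel, brightens it with a
    gamma curve, and recombines it with the reflectance (img / illum).
    """
    illum = img.max(axis=2, keepdims=True)         # crude illumination map
    illum_adj = np.maximum(illum, eps) ** gamma    # brightened illumination
    out = img / np.maximum(illum, eps) * illum_adj
    return np.clip(out, 0.0, 1.0)
```

Dark regions are lifted (a pixel at 0.1 rises to roughly 0.1^0.6 ≈ 0.25) while already-bright regions are left unchanged, the qualitative behavior any zero-shot illumination-based enhancer should exhibit.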