Computer Graphics Forum: Latest Publications

Variable offsets and processing of implicit forms toward the adaptive synthesis and analysis of heterogeneous conforming microstructure
IF 2.7 · CAS Q4 · Computer Science
Computer Graphics Forum Pub Date : 2024-10-30 DOI: 10.1111/cgf.15224
Q. Y. Hong, P. Antolin, G. Elber, M.-S. Kim
{"title":"Variable offsets and processing of implicit forms toward the adaptive synthesis and analysis of heterogeneous conforming microstructure","authors":"Q. Y. Hong,&nbsp;P. Antolin,&nbsp;G. Elber,&nbsp;M.-S. Kim","doi":"10.1111/cgf.15224","DOIUrl":"https://doi.org/10.1111/cgf.15224","url":null,"abstract":"<p>The synthesis of porous, lattice, or microstructure geometries has captured the attention of many researchers in recent years. Implicit forms, such as triply periodic minimal surfaces (TPMS) has captured a significant attention, recently, as tiles in lattices, partially because implicit forms have the potential for synthesizing with ease more complex topologies of tiles, compared to parametric forms. In this work, we show how variable offsets of implicit forms could be used in lattice design as well as lattice analysis, while graded wall and edge thicknesses could be fully controlled in the lattice and even vary within a single tile. As a result, (geometrically) heterogeneous lattices could be created and adapted to follow analysis results while maintaining continuity between adjacent tiles. We demonstrate this ability on several 3D models, including TPMS.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
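As a rough illustration of the idea (not the paper's algorithm), a variable offset t(p) of an implicit TPMS f(p) = 0 defines a shell of graded wall thickness as the point set where |f(p)| ≤ t(p). The sketch below assumes a classic gyroid and an arbitrary Gaussian grading of t, and samples the resulting occupancy on one periodic cell:

```python
import numpy as np

def gyroid(x, y, z):
    # Classic gyroid TPMS implicit: the zero level set is the minimal surface.
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

def graded_wall(x, y, z, t_min=0.2, t_max=0.8):
    # Spatially varying offset t(p): thicker walls near z = 0, thinner away
    # (the grading profile here is an arbitrary illustrative choice).
    t = t_min + (t_max - t_min) * np.exp(-z**2)
    # A point lies inside the thickened shell when |f(p)| <= t(p):
    # the pair of offset level sets f = +/- t bounds the wall.
    return np.abs(gyroid(x, y, z)) <= t

# Sample one periodic cell on a coarse grid.
g = np.linspace(0, 2 * np.pi, 32)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
occupancy = graded_wall(X, Y, Z)
print(occupancy.mean())  # volume fraction of the graded shell
```

Because t varies with z, slices near z = 0 contain a larger solid fraction than slices far from it, which is exactly the kind of graded thickness the paper controls within a single tile.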
MISNeR: Medical Implicit Shape Neural Representation for Image Volume Visualisation
Computer Graphics Forum Pub Date : 2024-10-30 DOI: 10.1111/cgf.15222
G. Jin, Y. Jung, L. Bi, J. Kim
{"title":"MISNeR: Medical Implicit Shape Neural Representation for Image Volume Visualisation","authors":"G. Jin,&nbsp;Y. Jung,&nbsp;L. Bi,&nbsp;J. Kim","doi":"10.1111/cgf.15222","DOIUrl":"https://doi.org/10.1111/cgf.15222","url":null,"abstract":"<p>Three-dimensional visualisation of mesh reconstruction of the medical images is commonly used for various clinical applications including pre / post-surgical planning. Such meshes are conventionally generated by extracting the surface from volumetric segmentation masks. Therefore, they have inherent limitations of staircase artefacts due to their anisotropic voxel dimensions. The time-consuming process for manual refinement to remove artefacts and/or the isolated regions further adds to these limitations. Methods for directly generating meshes from volumetric data by template deformation are often limited to simple topological structures, and methods that use implicit functions for continuous surfaces, do not achieve the level of mesh reconstruction accuracy when compared to segmentation-based methods. In this study, we address these limitations by combining the implicit function representation with a multi-level deep learning architecture. We introduce a novel multi-level local feature sampling component which leverages the spatial features for the implicit function regression to enhance the segmentation result. We further introduce a shape boundary estimator that accelerates the explicit mesh reconstruction by minimising the number of the signed distance queries during model inference. The result is a multi-level deep learning network that directly regresses the implicit function from medical image volumes to a continuous surface model, which can be used for mesh reconstruction from arbitrary high volume resolution to minimise staircase artefacts. We evaluated our method using pelvic computed tomography (CT) dataset from two public sources with varying z-axis resolutions. 
We show that our method minimised the staircase artefacts while achieving comparable results in surface accuracy when compared to the state-of-the-art segmentation algorithms. Furthermore, our method was 9 times faster in volume reconstruction than comparable implicit shape representation networks.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
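The resolution independence that an implicit representation buys can be sketched with a toy signed distance function (a sphere here, standing in for the learned network): the same continuous function can be queried at any grid resolution at inference time, whereas a voxel mask is fixed. The function and grid sizes below are illustrative assumptions only:

```python
import numpy as np

def sphere_sdf(pts, center=0.5, radius=0.3):
    # Continuous signed distance to a sphere; negative inside.
    return np.linalg.norm(pts - center, axis=-1) - radius

def sample_grid(n):
    # Query the implicit function on an n^3 grid over [0, 1]^3 -- the
    # resolution is a free choice at inference time, unlike a fixed
    # voxel segmentation mask.
    g = (np.arange(n) + 0.5) / n
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1)
    return sphere_sdf(pts)

coarse = sample_grid(16) <= 0   # like a low-resolution voxel mask
fine = sample_grid(64) <= 0     # same shape, 4x the resolution
true_vol = 4 / 3 * np.pi * 0.3**3
print(abs(coarse.mean() - true_vol), abs(fine.mean() - true_vol))
```

The finer query recovers the shape's volume more faithfully without ever re-running segmentation, which is the property that lets an implicit model suppress staircase artefacts.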
Stress-Aligned Hexahedral Lattice Structures
Computer Graphics Forum Pub Date : 2024-10-28 DOI: 10.1111/cgf.15265
D. R. Bukenberger, J. Wang, J. Wu, R. Westermann
{"title":"Stress-Aligned Hexahedral Lattice Structures","authors":"D. R. Bukenberger,&nbsp;J. Wang,&nbsp;J. Wu,&nbsp;R. Westermann","doi":"10.1111/cgf.15265","DOIUrl":"https://doi.org/10.1111/cgf.15265","url":null,"abstract":"<p>Maintaining the maximum stiffness of components with as little material as possible is an overarching objective in computational design and engineering. It is well-established that in stiffness-optimal designs, material is aligned with orthogonal principal stress directions. In the limit of material volume, this alignment forms micro-structures resembling quads or hexahedra. Achieving a globally consistent layout of such orthogonal micro-structures presents a significant challenge, particularly in three-dimensional settings. In this paper, we propose a novel geometric algorithm for compiling stress-aligned hexahedral lattice structures. Our method involves deforming an input mesh under load to align the resulting stress field along an orthogonal basis. The deformed object is filled with a hexahedral grid, and the deformation is reverted to recover the original shape. The resulting stress-aligned mesh is used as basis for a final hollowing procedure, generating a volume-reduced stiff infill composed of hexahedral micro-structures. 
We perform quantitative comparisons with structural optimization and hexahedral meshing approaches and demonstrate the superior mechanical performance of our designs with finite element simulation experiments.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15265","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143513878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
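The principal stress directions referred to above are, for a symmetric Cauchy stress tensor, simply its eigenvectors; a minimal sketch (with an arbitrary example tensor, not data from the paper):

```python
import numpy as np

# A symmetric 3x3 Cauchy stress tensor (illustrative values, in MPa).
sigma = np.array([
    [50.0, 10.0,  0.0],
    [10.0, 20.0,  5.0],
    [ 0.0,  5.0, -30.0],
])

# Principal stresses and directions are the eigenvalues/eigenvectors of
# the symmetric tensor. eigh returns eigenvalues in ascending order,
# with orthonormal eigenvectors as the columns of `directions`.
principal_stresses, directions = np.linalg.eigh(sigma)

# The three principal directions form the orthogonal frame that a
# stress-aligned micro-structure would follow at this point.
print(principal_stresses)
```

Computing this frame per element and making the frames globally consistent across a 3D domain is the hard part the paper addresses via its deform-fill-revert construction.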
Deep-Learning-Based Facial Retargeting Using Local Patches
Computer Graphics Forum Pub Date : 2024-10-25 DOI: 10.1111/cgf.15263
Yeonsoo Choi, Inyup Lee, Sihun Cha, Seonghyeon Kim, Sunjin Jung, Junyong Noh
{"title":"Deep-Learning-Based Facial Retargeting Using Local Patches","authors":"Yeonsoo Choi,&nbsp;Inyup Lee,&nbsp;Sihun Cha,&nbsp;Seonghyeon Kim,&nbsp;Sunjin Jung,&nbsp;Junyong Noh","doi":"10.1111/cgf.15263","DOIUrl":"https://doi.org/10.1111/cgf.15263","url":null,"abstract":"<p>In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While the retargeting facial motion between models of similar shapes has been very successful, challenges arise when the retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion to preserve the semantics assumed by the original facial motions after the retargeting. To achieve this, we propose a local patch-based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frame. These patches are processed through the Reenactment Module to generate correspondingly re-enacted target local patches. The Weight Estimation Module calculates the animation parameters for the target character at every frame for the creation of a complete facial animation sequence. 
Extensive experiments demonstrate that our method can successfully transfer the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportion.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15263","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143513645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FSH3D: 3D Representation via Fibonacci Spherical Harmonics
Computer Graphics Forum Pub Date : 2024-10-24 DOI: 10.1111/cgf.15231
Zikuan Li, Anyi Huang, Wenru Jia, Qiaoyun Wu, Mingqiang Wei, Jun Wang
{"title":"FSH3D: 3D Representation via Fibonacci Spherical Harmonics","authors":"Zikuan Li,&nbsp;Anyi Huang,&nbsp;Wenru Jia,&nbsp;Qiaoyun Wu,&nbsp;Mingqiang Wei,&nbsp;Jun Wang","doi":"10.1111/cgf.15231","DOIUrl":"https://doi.org/10.1111/cgf.15231","url":null,"abstract":"<p>Spherical harmonics are a favorable technique for 3D representation, employing a frequency-based approach through the spherical harmonic transform (SHT). Typically, SHT is performed using equiangular sampling grids. However, these grids are non-uniform on spherical surfaces and exhibit local anisotropy, a common limitation in existing spherical harmonic decomposition methods. This paper proposes a 3D representation method using Fibonacci Spherical Harmonics (FSH3D). We introduce a spherical Fibonacci grid (SFG), which is more uniform than equiangular grids for SHT in the frequency domain. Our method employs analytical weights for SHT on SFG, effectively assigning sampling errors to spherical harmonic degrees higher than the recovered band-limited function. This provides a novel solution for spherical harmonic transformation on non-equiangular grids. The key advantages of our FSH3D method include: 1) With the same number of sampling points, SFG captures more features without bias compared to equiangular grids; 2) The root mean square error of 32-degree spherical harmonic coefficients is reduced by approximately 34.6% for SFG compared to equiangular grids; and 3) FSH3D offers more stable frequency domain representations, especially for rotating functions. FSH3D enhances the stability of frequency domain representations under rotational transformations. Its application in 3D shape reconstruction and 3D shape classification results in more accurate and robust representations. 
Our code is publicly available at https://github.com/Miraclelzk/Fibonacci-Spherical-Harmonics.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
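A common construction of a spherical Fibonacci point set (one plausible form of the SFG; the exact grid and quadrature weights used by FSH3D may differ) places samples uniformly in z with golden-angle increments in azimuth:

```python
import numpy as np

def spherical_fibonacci_grid(n):
    # Spherical Fibonacci point set: n near-uniform samples on the unit
    # sphere built from the golden ratio -- unlike equiangular grids,
    # the points do not cluster at the poles.
    phi = (1 + 5**0.5) / 2                 # golden ratio
    i = np.arange(n)
    z = 1 - (2 * i + 1) / n                # uniform in z (area-preserving)
    theta = 2 * np.pi * i / phi            # golden-angle steps in azimuth
    r = np.sqrt(1 - z**2)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=-1)

pts = spherical_fibonacci_grid(1000)
print(pts.shape)  # (1000, 3) unit vectors
```

Because the z-coordinates are equal-area and the azimuths equidistribute, the sample mean of the points sits near the origin, a quick proxy for the low-bias sampling the paper exploits.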
Disk B-spline on 𝕊²: A Skeleton-based Region Representation on 2-Sphere
Computer Graphics Forum Pub Date : 2024-10-24 DOI: 10.1111/cgf.15239
Chunhao Zheng, Yuming Zhao, Zhongke Wu, Xingce Wang
{"title":"Disk B-spline on 𝕊2: A Skeleton-based Region Representation on 2-Sphere","authors":"Chunhao Zheng,&nbsp;Yuming Zhao,&nbsp;Zhongke Wu,&nbsp;Xingce Wang","doi":"10.1111/cgf.15239","DOIUrl":"https://doi.org/10.1111/cgf.15239","url":null,"abstract":"<p>Due to the widespread applications of 2-dimensional spherical designs, there has been an increasing requirement of modeling on the 𝕊<sup>2</sup> manifold in recent years. Due to the non-Euclidean nature of the sphere, it has some challenges to find a method to represent 2D regions on 𝕊<sup>2</sup> manifold. In this paper, a skeleton-based representation method of regions on 𝕊<sup>2</sup>, disk B-spline(DBSC) on 𝕊<sup>2</sup> is proposed. Firstly, we give the definition and basic algorithms of DBSC on 𝕊<sup>2</sup>. Then we provide the calculation method of DBSC on 𝕊<sup>2</sup>, which includes calculating the boundary points, internal points and their corresponding derivatives. Based on that, we give some modeling methods of DBSC on 𝕊<sup>2</sup>, including approximation, deformation. In the end, some stunning application examples of DBSC on 𝕊<sup>2</sup> are shown. This work lays a theoretical foundation for further applications of DBSC on 𝕊<sup>2</sup>.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
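A basic building block for curve schemes on 𝕊² is geodesic (great-circle) interpolation between unit vectors; the slerp sketch below illustrates the spherical analogue of linear interpolation, not the paper's DBSC evaluation algorithm:

```python
import numpy as np

def slerp(p, q, t):
    # Geodesic (great-circle) interpolation between unit vectors p and q:
    # the spherical analogue of the linear interpolation underlying
    # Euclidean B-spline evaluation.
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return p
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
mid = slerp(p, q, 0.5)
print(mid)  # halfway along the great circle, still on the unit sphere
```

Chaining such geodesic interpolations (de Casteljau-style) is one standard way spline constructions are lifted from the plane to the sphere.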
Exploring Fast and Flexible Zero-Shot Low-Light Image/Video Enhancement
Computer Graphics Forum Pub Date : 2024-10-24 DOI: 10.1111/cgf.15210
Xianjun Han, Taoli Bao, Hongyu Yang
{"title":"Exploring Fast and Flexible Zero-Shot Low-Light Image/Video Enhancement","authors":"Xianjun Han,&nbsp;Taoli Bao,&nbsp;Hongyu Yang","doi":"10.1111/cgf.15210","DOIUrl":"https://doi.org/10.1111/cgf.15210","url":null,"abstract":"<p>Low-light image/video enhancement is a challenging task when images or video are captured under harsh lighting conditions. Existing methods mostly formulate this task as an image-to-image conversion task via supervised or unsupervised learning. However, such conversion methods require an extremely large amount of data for training, whether paired or unpaired. In addition, these methods are restricted to specific training data, making it difficult for the trained model to enhance other types of images or video. In this paper, we explore a novel, fast and flexible, zero-shot, low-light image or video enhancement framework. Without relying on prior training or relationships among neighboring frames, we are committed to estimating the illumination of the input image/frame by a well-designed network. The proposed zero-shot, low-light image/video enhancement architecture includes illumination estimation and residual correction modules. The network architecture is very concise and does not require any paired or unpaired data during training, which allows low-light enhancement to be performed with several simple iterations. Despite its simplicity, we show that the method is fast and generalizes well to diverse lighting conditions. 
Many experiments on various images and videos qualitatively and quantitatively demonstrate the advantages of our method over state-of-the-art methods.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
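For contrast with the learned approach, a classical Retinex-style baseline estimates a per-pixel illumination map and divides it out; the max-RGB estimate and gamma value below are illustrative assumptions, not the paper's network:

```python
import numpy as np

def enhance_max_rgb(img, eps=1e-3, gamma=0.7):
    # Classical Retinex-style baseline (NOT the paper's method): estimate
    # per-pixel illumination as the max over RGB channels, compress it
    # with a gamma curve, and divide it out to brighten dark regions.
    illum = img.max(axis=-1, keepdims=True)   # rough illumination map
    corrected = illum ** gamma                # gamma-brightened estimate
    return np.clip(img * corrected / (illum + eps), 0.0, 1.0)

rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.2, size=(8, 8, 3))  # synthetic low-light image
bright = enhance_max_rgb(dark)
print(dark.mean(), bright.mean())  # mean brightness increases
```

A zero-shot network replaces this hand-crafted illumination estimate with a per-image optimized one, which is what lets it adapt to diverse lighting conditions without a training corpus.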
A Hybrid Parametrization Method for B-Spline Curve Interpolation via Supervised Learning
Computer Graphics Forum Pub Date : 2024-10-24 DOI: 10.1111/cgf.15240
Tianyu Song, Tong Shen, Linlin Ge, Jieqing Feng
{"title":"A Hybrid Parametrization Method for B-Spline Curve Interpolation via Supervised Learning","authors":"Tianyu Song,&nbsp;Tong Shen,&nbsp;Linlin Ge,&nbsp;Jieqing Feng","doi":"10.1111/cgf.15240","DOIUrl":"https://doi.org/10.1111/cgf.15240","url":null,"abstract":"<p>B-spline curve interpolation is a fundamental algorithm in computer-aided geometric design. Determining suitable parameters based on data points distribution has always been an important issue for high-quality interpolation curves generation. Various parameterization methods have been proposed. However, there is no universally satisfactory method that is applicable to data points with diverse distributions. In this work, a hybrid parametrization method is proposed to overcome the problem. For a given set of data points, a classifier via supervised learning identifies an optimal local parameterization method based on the local geometric distribution of four adjacent data points, and the optimal local parameters are computed using the selected optimal local parameterization method for the four adjacent data points. Then a merging method is employed to calculate global parameters which align closely with the local parameters. Experiments demonstrate that the proposed hybrid parameterization method well adapts the different distributions of data points statistically. 
The proposed method has a flexible and scalable framework, which can includes current and potential new parameterization methods as its components.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
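The local parameterization candidates such a classifier might select among include the classical alpha-family; a minimal sketch (the function name and example points are illustrative):

```python
import numpy as np

def parametrize(points, alpha):
    # Classical parameter choices for B-spline interpolation:
    # alpha = 1 gives chord length, alpha = 0.5 centripetal,
    # alpha = 0 uniform. These are the kinds of local methods a
    # learned classifier could choose among per point neighbourhood.
    d = np.linalg.norm(np.diff(points, axis=0), axis=1) ** alpha
    t = np.concatenate([[0.0], np.cumsum(d)])
    return t / t[-1]  # normalise parameters to [0, 1]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0], [3.0, 2.0]])
print(parametrize(pts, 1.0))  # chord length: [0.  0.2 0.6 1. ]
print(parametrize(pts, 0.5))  # centripetal
```

Each choice trades off curve fairness against overshoot near uneven spacing, which is why no single alpha suits all data distributions and a per-neighbourhood selection can help.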
GLTScene: Global-to-Local Transformers for Indoor Scene Synthesis with General Room Boundaries
Computer Graphics Forum Pub Date : 2024-10-24 DOI: 10.1111/cgf.15236
Yijie Li, Pengfei Xu, Junquan Ren, Zefan Shao, Hui Huang
{"title":"GLTScene: Global-to-Local Transformers for Indoor Scene Synthesis with General Room Boundaries","authors":"Yijie Li,&nbsp;Pengfei Xu,&nbsp;Junquan Ren,&nbsp;Zefan Shao,&nbsp;Hui Huang","doi":"10.1111/cgf.15236","DOIUrl":"https://doi.org/10.1111/cgf.15236","url":null,"abstract":"<p>We present GLTScene, a novel data-driven method for high-quality furniture layout synthesis with general room boundaries as conditions. This task is challenging since the existing indoor scene datasets do not cover the variety of general room boundaries. We incorporate the interior design principles with learning techniques and adopt a global-to-local strategy for this task. Globally, we learn the placement of furniture objects from the datasets without considering their alignment. Locally, we learn the alignment of furniture objects relative to their nearest walls, according to the alignment principle in interior design. The global placement and local alignment of furniture objects are achieved by two transformers respectively. We compare our method with several baselines in the task of furniture layout synthesis with general room boundaries as conditions. Our method outperforms these baselines both quantitatively and qualitatively. We also demonstrate that our method can achieve other conditional layout synthesis tasks, including object-level conditional generation and attribute-level conditional generation. 
The code is publicly available at https://github.com/WWalter-Lee/GLTScene.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CoupNeRF: Property-aware Neural Radiance Fields for Multi-Material Coupled Scenario Reconstruction
Computer Graphics Forum Pub Date : 2024-10-24 DOI: 10.1111/cgf.15208
Jin Li, Yang Gao, Wenfeng Song, Yacong Li, Shuai Li, Aimin Hao, Hong Qin
{"title":"CoupNeRF: Property-aware Neural Radiance Fields for Multi-Material Coupled Scenario Reconstruction","authors":"Jin Li,&nbsp;Yang Gao,&nbsp;Wenfeng Song,&nbsp;Yacong Li,&nbsp;Shuai Li,&nbsp;Aimin Hao,&nbsp;Hong Qin","doi":"10.1111/cgf.15208","DOIUrl":"https://doi.org/10.1111/cgf.15208","url":null,"abstract":"<p>Neural Radiance Fields (NeRFs) have achieved significant recognition for their proficiency in scene reconstruction and rendering by utilizing neural networks to depict intricate volumetric environments. Despite considerable research dedicated to reconstructing physical scenes, rare works succeed in challenging scenarios involving dynamic, multi-material objects. To alleviate, we introduce CoupNeRF, an efficient neural network architecture that is aware of multiple material properties. This architecture combines physically grounded continuum mechanics with NeRF, facilitating the identification of motion systems across a wide range of physical coupling scenarios. We first reconstruct specific-material of objects within 3D physical fields to learn material parameters. Then, we develop a method to model the neighbouring particles, enhancing the learning process specifically in regions where material transitions occur. 
The effectiveness of CoupNeRF is demonstrated through extensive experiments, showcasing its proficiency in accurately coupling and identifying the behavior of complex physical scenes that span multiple physics domains.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0