ACM Transactions on Graphics: Latest Publications

Neural Differential Appearance Equations
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-11-19 DOI: 10.1145/3687900
Chen Liu, Tobias Ritschel
{"title":"Neural Differential Appearance Equations","authors":"Chen Liu, Tobias Ritschel","doi":"10.1145/3687900","DOIUrl":"https://doi.org/10.1145/3687900","url":null,"abstract":"We propose a method to reproduce dynamic appearance textures with space-stationary but time-varying visual statistics. While most previous work decomposes dynamic textures into static appearance and motion, we focus on dynamic appearance that results not from motion but variations of fundamental properties, such as rusting, decaying, melting, and weathering. To this end, we adopt the neural ordinary differential equation (ODE) to learn the underlying dynamics of appearance from a target exemplar. We simulate the ODE in two phases. At the \"warm-up\" phase, the ODE diffuses a random noise to an initial state. We then constrain the further evolution of this ODE to replicate the evolution of visual feature statistics in the exemplar during the generation phase. The particular innovation of this work is the neural ODE achieving both denoising and evolution for dynamics synthesis, with a proposed temporal training scheme. We study both relightable (BRDF) and non-relightable (RGB) appearance models. For both we introduce new pilot datasets, allowing, for the first time, to study such phenomena: For RGB we provide 22 dynamic textures acquired from free online sources; For BRDFs, we further acquire a dataset of 21 flash-lit videos of time-varying materials, enabled by a simple-to-construct setup. Our experiments show that our method consistently yields realistic and coherent results, whereas prior works falter under pronounced temporal appearance variations. A user study confirms our approach is preferred to previous work for such exemplars.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"53 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
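As a rough illustration of the two-phase neural-ODE idea sketched in the abstract (diffuse noise to an initial texture state, then evolve it over time), the toy below integrates a small convolutional ODE with explicit Euler steps. The network, step counts, and resolutions are hypothetical placeholders and not the authors' implementation; training against exemplar feature statistics is only indicated in a comment.

```python
# Minimal sketch of a neural-ODE texture state evolved in two phases
# (warm-up from noise, then generation). Purely illustrative.
import torch
import torch.nn as nn

class AppearanceODE(nn.Module):
    """dx/dt = f(x, t): a small conv net predicting the state derivative."""
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x, t):
        # Broadcast scalar time t as an extra channel so dynamics can be time-varying.
        t_map = torch.full_like(x[:, :1], float(t))
        return self.net(torch.cat([x, t_map], dim=1))

def integrate(f, x, t0, t1, steps):
    """Explicit-Euler integration of dx/dt = f(x, t) from t0 to t1."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * f(x, t)
        t = t + dt
    return x

f = AppearanceODE()
noise = torch.randn(1, 3, 64, 64)
x0 = integrate(f, noise, t0=-1.0, t1=0.0, steps=16)   # "warm-up": noise -> initial texture
x1 = integrate(f, x0, t0=0.0, t1=1.0, steps=16)       # "generation": texture evolves over time
# Training would compare feature statistics of x(t) against the exemplar at matching times.
```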
Differential Walk on Spheres
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-11-19 DOI: 10.1145/3687913
Bailey Miller, Rohan Sawhney, Keenan Crane, Ioannis Gkioulekas
{"title":"Differential Walk on Spheres","authors":"Bailey Miller, Rohan Sawhney, Keenan Crane, Ioannis Gkioulekas","doi":"10.1145/3687913","DOIUrl":"https://doi.org/10.1145/3687913","url":null,"abstract":"We introduce a Monte Carlo method for computing derivatives of the solution to a partial differential equation (PDE) with respect to problem parameters (such as domain geometry or boundary conditions). Derivatives can be evaluated at arbitrary points, without performing a global solve or constructing a volumetric grid or mesh. The method is hence well suited to inverse problems with complex geometry, such as PDE-constrained shape optimization. Like other <jats:italic>walk on spheres (WoS)</jats:italic> algorithms, our method is trivial to parallelize, and is agnostic to boundary representation (meshes, splines, implicit surfaces, <jats:italic>etc.</jats:italic> ), supporting large topological changes. We focus in particular on screened Poisson equations, which model diverse problems from scientific and geometric computing. As in differentiable rendering, we jointly estimate derivatives with respect to all parameters---hence, cost does not grow significantly with parameter count. In practice, even noisy derivative estimates exhibit fast, stable convergence for stochastic gradient-based optimization, as we show through examples from thermal design, shape from diffusion, and computer graphics.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"39 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
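For readers unfamiliar with the base algorithm, the sketch below is a standard (non-differential) walk-on-spheres estimator for a Laplace problem with Dirichlet data, showing the grid-free, pointwise evaluation this paper builds on. It is not the paper's differential estimator; the unit-disk domain, boundary data, and tolerances are toy choices.

```python
# Classic walk-on-spheres estimator for Laplace(u) = 0 with u = g on the boundary.
import numpy as np

def walk_on_spheres(x0, dist_to_boundary, boundary_value, n_walks=2000, eps=1e-3, max_steps=500, rng=None):
    """Monte Carlo estimate of u(x0) at a single evaluation point."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        for _ in range(max_steps):
            r = dist_to_boundary(x)
            if r < eps:                      # close enough: read off the boundary value
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + r * np.array([np.cos(theta), np.sin(theta)])  # jump to the largest empty circle
        total += boundary_value(x)
    return total / n_walks

# Toy problem: unit disk with boundary data g(x, y) = x; the harmonic extension is u = x.
dist = lambda p: 1.0 - np.linalg.norm(p)
g = lambda p: p[0]
print(walk_on_spheres((0.3, 0.2), dist, g))  # expected ≈ 0.3
```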
PhysFiT: Physical-aware 3D Shape Understanding for Finishing Incomplete Assembly
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-10-29 DOI: 10.1145/3702226
Weihao Wang, Mingyu You, Hongjun Zhou, Bin He
{"title":"PhysFiT: Physical-aware 3D Shape Understanding for Finishing Incomplete Assembly","authors":"Weihao Wang, Mingyu You, Hongjun Zhou, Bin He","doi":"10.1145/3702226","DOIUrl":"https://doi.org/10.1145/3702226","url":null,"abstract":"Understanding the part composition and structure of 3D shapes is crucial for a wide range of 3D applications, including 3D part assembly and 3D assembly completion. Compared to 3D part assembly, 3D assembly completion is more complicated which involves repairing broken or incomplete furniture that miss several parts with a toolkit. The primary challenge persists in how to reveal the potential part relations to infer the absent parts from multiple indistinguishable candidates with similar geometries, and complete for well-connected, structurally stable and aesthetically pleasing assemblies. This task necessitates not only specialized knowledge of part composition but, more importantly, an awareness of physical constraints, <jats:italic>i.e.</jats:italic> , connectivity, stability, and symmetry. Neglecting these constraints often results in assemblies that, although visually plausible, are impractical. To address this challenge, we propose PhysFiT, a physical-aware 3D shape understanding framework. This framework is built upon attention-based part relation modeling and incorporates connection modeling, simulation-free stability optimization and symmetric transformation consistency. We evaluate its efficacy on 3D part assembly and 3D assembly completion, a novel assembly task presented in this work. Extensive experiments demonstrate the effectiveness of PhysFiT in constructing geometrically sound and physically compliant assemblies.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"17 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142541706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
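As a generic illustration of the "attention over part tokens" pattern the abstract refers to, the sketch below runs multi-head self-attention over a set of per-part feature vectors and reads the attention weights as soft pairwise part relations. Shapes and layer choices are hypothetical; PhysFiT's actual architecture and its physical-constraint terms are not reproduced here.

```python
# Generic self-attention over per-part feature tokens (not PhysFiT's network).
import torch
import torch.nn as nn

n_parts, feat_dim = 8, 128
part_features = torch.randn(1, n_parts, feat_dim)   # one incomplete assembly, 8 candidate parts

attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)
related, weights = attn(part_features, part_features, part_features)
# 'weights' (1 x n_parts x n_parts) can be read as soft pairwise part relations;
# downstream heads would predict poses/connections subject to stability and symmetry terms.
print(related.shape, weights.shape)
```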
Synchronized tracing of primitive-based implicit volumes
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-10-28 DOI: 10.1145/3702227
Cédric Zanni
{"title":"Synchronized tracing of primitive-based implicit volumes","authors":"Cédric Zanni","doi":"10.1145/3702227","DOIUrl":"https://doi.org/10.1145/3702227","url":null,"abstract":"Implicit volumes are known for their ability to represent smooth shapes of arbitrary topology thanks to hierarchical combinations of primitives using a structure called a blobtree. We present a new tile-based rendering pipeline well suited for modeling scenarios, i.e., no preprocessing is required when primitive parameters are updated. When using approximate signed distance fields (fields with Lipschitz bound close to 1), we rely on compact, smooth CSG operators - extended from standard bounded operators - to compute a tight augmented bounding volume for all primitives of the blobtree. The pipeline relies on a low-resolution A-buffer storing the primitives of interest of a given screen tile. The A-buffer is then used during ray processing to synchronize threads within a subfrustum. This allows coherent field evaluation within workgroups. We use a sparse bottom-up tree traversal to prune the blobtree on-the-fly which allows us to decorrelate field evaluation complexity from the full blobtree size. The ray processing itself is done using the sphere tracing algorithm. The pipeline scales well to volumes consisting of thousands of primitives.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"6 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142536810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
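To make the building blocks concrete, the sketch below sphere-traces a tiny implicit volume built from two sphere primitives combined with a polynomial smooth union, one common approximate-SDF operator. It illustrates only the generic sphere tracing loop named in the abstract; the paper's tile-based A-buffer pipeline and blobtree pruning are not reproduced.

```python
# Sphere tracing a smooth union of two sphere SDFs (illustrative only).
import numpy as np

def sphere_sdf(p, center, radius):
    return np.linalg.norm(p - center) - radius

def smooth_union(d1, d2, k=0.3):
    # Polynomial smooth-min blend of two distance values.
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 + (d1 - d2) * h - k * h * (1.0 - h)

def scene(p):
    a = sphere_sdf(p, np.array([0.0, 0.0, 3.0]), 1.0)
    b = sphere_sdf(p, np.array([0.8, 0.0, 3.0]), 0.7)
    return smooth_union(a, b)

def sphere_trace(origin, direction, max_steps=128, eps=1e-4, t_max=20.0):
    """March along the ray by the (approximate) distance to the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        d = scene(origin + t * direction)
        if d < eps:
            return t          # hit
        t += d
        if t > t_max:
            break
    return None               # miss

hit_t = sphere_trace(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(hit_t)                  # ≈ 2.0 for the unit sphere centered at z = 3
```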
TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-09-24 DOI: 10.1145/3697140
Heming Zhu, Fangneng Zhan, Christian Theobalt, Marc Habermann
{"title":"TriHuman : A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis","authors":"Heming Zhu, Fangneng Zhan, Christian Theobalt, Marc Habermann","doi":"10.1145/3697140","DOIUrl":"https://doi.org/10.1145/3697140","url":null,"abstract":"Creating controllable, photorealistic, and geometrically detailed digital doubles of real humans solely from video data is a key challenge in Computer Graphics and Vision, especially when real-time performance is required. Recent methods attach a neural radiance field (NeRF) to an articulated structure, e.g., a body model or a skeleton, to map points into a pose canonical space while conditioning the NeRF on the skeletal pose. These approaches typically parameterize the neural field with a multi-layer perceptron (MLP) leading to a slow runtime. To address this drawback, we propose <jats:italic>TriHuman</jats:italic> a novel human-tailored, deformable, and efficient tri-plane representation, which achieves real-time performance, state-of-the-art pose-controllable geometry synthesis as well as photorealistic rendering quality. At the core, we non-rigidly warp global ray samples into our undeformed tri-plane texture space, which effectively addresses the problem of global points being mapped to the same tri-plane locations. We then show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes. Our results demonstrate a clear step towards higher quality in terms of geometry and appearance modeling of humans as well as runtime performance.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"4 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142374643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
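For context, a tri-plane representation stores features on three orthogonal 2D planes and fuses the bilinear samples at each 3D query point. The sketch below shows that basic lookup; resolutions, channel counts, and the summation fusion are hypothetical choices, and TriHuman's non-rigid warping and motion conditioning are not shown.

```python
# Basic tri-plane feature lookup (illustrative, not TriHuman's pipeline).
import torch
import torch.nn.functional as F

C, R = 16, 64                                   # feature channels, plane resolution
planes = torch.randn(3, C, R, R)                # learnable XY, XZ, YZ planes

def triplane_features(points):                  # points: (N, 3) in [-1, 1]^3
    coords = [points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]]
    feats = []
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)             # grid_sample expects (N, H_out, W_out, 2)
        f = F.grid_sample(plane[None], grid, align_corners=True)   # (1, C, N, 1)
        feats.append(f[0, :, :, 0].t())         # (N, C)
    return sum(feats)                           # fuse by summation (one common choice)

pts = torch.rand(1024, 3) * 2 - 1
print(triplane_features(pts).shape)             # torch.Size([1024, 16])
```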
DAMO: A Deep Solver for Arbitrary Marker Configuration in Optical Motion Capture
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-09-14 DOI: 10.1145/3695865
KyeongMin Kim, SeungWon Seo, DongHeun Han, HyeongYeop Kang
{"title":"DAMO: A Deep Solver for Arbitrary Marker Configuration in Optical Motion Capture","authors":"KyeongMin Kim, SeungWon Seo, DongHeun Han, HyeongYeop Kang","doi":"10.1145/3695865","DOIUrl":"https://doi.org/10.1145/3695865","url":null,"abstract":"Marker-based optical motion capture (mocap) systems are increasingly utilized for acquiring 3D human motion, offering advantages in capturing the subtle nuances of human movement, style consistency, and ease of obtaining desired motion. Motion data acquisition via mocap typically requires laborious marker labeling and motion reconstruction, recent deep-learning solutions have aimed to automate the process. However, such solutions generally presuppose a fixed marker configuration to reduce learning complexity, thereby limiting flexibility. To overcome the limitation, we introduce DAMO, an end-to-end deep solver, proficiently inferring arbitrary marker configurations and optimizing pose reconstruction. DAMO outperforms state-of-the-art like SOMA and MoCap-Solver in scenarios with significant noise and unknown marker configurations. We expect that DAMO will meet various practical demands such as facilitating dynamic marker configuration adjustments during capture sessions, processing marker clouds irrespective of whether they employ mixed or entirely unknown marker configurations, and allowing custom marker configurations to suit distinct capture scenarios. DAMO code and pretrained models are available at <jats:ext-link xmlns:xlink=\"http://www.w3.org/1999/xlink\" xlink:href=\"https://github.com/CritBear/damo\">https://github.com/CritBear/damo</jats:ext-link> .","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"19 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142374676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
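One generic way a network can accept marker clouds of arbitrary size and ordering, as the abstract requires, is a permutation-invariant set encoder with masking over padded markers. The sketch below shows that pattern only; it is a hypothetical stand-in, not DAMO's architecture.

```python
# Permutation-invariant encoder for a padded, variable-size marker cloud (illustrative).
import torch
import torch.nn as nn

class MarkerCloudEncoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, markers, mask):
        """markers: (B, M, 3) padded marker positions; mask: (B, M) True where a marker exists."""
        per_marker = self.point_mlp(markers)                      # (B, M, F)
        per_marker = per_marker.masked_fill(~mask[..., None], float("-inf"))
        return per_marker.max(dim=1).values                       # order- and count-invariant pooling

enc = MarkerCloudEncoder()
markers = torch.randn(2, 60, 3)                                   # two frames, up to 60 markers
mask = torch.arange(60)[None, :] < torch.tensor([[53], [41]])     # 53 and 41 real markers
print(enc(markers, mask).shape)                                   # torch.Size([2, 128])
```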
RNA: Relightable Neural Assets
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-09-13 DOI: 10.1145/3695866
Krishna Mullia, Fujun Luan, Xin Sun, Miloš Hašan
{"title":"RNA: Relightable Neural Assets","authors":"Krishna Mullia, Fujun Luan, Xin Sun, Miloš Hašan","doi":"10.1145/3695866","DOIUrl":"https://doi.org/10.1145/3695866","url":null,"abstract":"High-fidelity 3D assets with materials composed of fibers (including hair), complex layered material shaders, or fine scattering geometry are critical in high-end realistic rendering applications. Rendering such models is computationally expensive due to heavy shaders and long scattering paths. Moreover, implementing the shading and scattering models is non-trivial and has to be done not only in the 3D content authoring software (which is necessarily complex), but also in all downstream rendering solutions. For example, web and mobile viewers for complex 3D assets are desirable, but frequently cannot support the full shading complexity allowed by the authoring application. Our goal is to design a neural representation for 3D assets with complex shading that supports full relightability and full integration into existing renderers. We provide an end-to-end shading solution at the first intersection of a ray with the underlying geometry. All shading and scattering is precomputed and included in the neural asset; no multiple scattering paths need to be traced, and no complex shading models need to be implemented to render our assets, beyond a single neural architecture. We combine an MLP decoder with a feature grid. Shading consists of querying a feature vector, followed by an MLP evaluation producing the final reflectance value. Our method provides high-fidelity shading, close to the ground-truth Monte Carlo estimate even at close-up views. We believe our neural assets could be used in practical renderers, providing significant speed-ups and simplifying renderer implementations.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"27 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142374644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
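The "query a feature grid, then decode with an MLP" pattern described in the abstract can be sketched as below: trilinearly sample a per-point feature and map (feature, view direction, light direction) to an RGB reflectance. The grid size, MLP width, and conditioning are hypothetical; the paper's precomputation and renderer integration are not shown.

```python
# Feature-grid lookup followed by an MLP shading decode (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

C, R = 32, 32
feature_grid = torch.randn(1, C, R, R, R)             # dense 3D feature volume
decoder = nn.Sequential(nn.Linear(C + 6, 64), nn.ReLU(), nn.Linear(64, 3))

def shade(points, view_dir, light_dir):
    """points: (N, 3) in [-1, 1]^3; view_dir/light_dir: (N, 3) unit vectors."""
    grid = points.view(1, -1, 1, 1, 3)                 # (1, N, 1, 1, 3) sample locations
    feats = F.grid_sample(feature_grid, grid, align_corners=True)   # (1, C, N, 1, 1)
    feats = feats[0, :, :, 0, 0].t()                   # (N, C) trilinearly interpolated features
    rgb = decoder(torch.cat([feats, view_dir, light_dir], dim=-1))
    return torch.sigmoid(rgb)                          # reflectance in [0, 1]

n = 4096
p = torch.rand(n, 3) * 2 - 1
v = F.normalize(torch.randn(n, 3), dim=-1)
l = F.normalize(torch.randn(n, 3), dim=-1)
print(shade(p, v, l).shape)                            # torch.Size([4096, 3])
```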
Speed-Aware Audio-Driven Speech Animation using Adaptive Windows
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-08-31 DOI: 10.1145/3691341
Sunjin Jung, Yeongho Seol, Kwanggyoon Seo, Hyeonho Na, Seonghyeon Kim, Vanessa Tan, Junyong Noh
{"title":"Speed-Aware Audio-Driven Speech Animation using Adaptive Windows","authors":"Sunjin Jung, Yeongho Seol, Kwanggyoon Seo, Hyeonho Na, Seonghyeon Kim, Vanessa Tan, Junyong Noh","doi":"10.1145/3691341","DOIUrl":"https://doi.org/10.1145/3691341","url":null,"abstract":"We present a novel method that can generate realistic speech animations of a 3D face from audio using multiple adaptive windows. In contrast to previous studies that use a fixed size audio window, our method accepts an adaptive audio window as input, reflecting the audio speaking rate to use consistent phonemic information. Our system consists of three parts. First, the speaking rate is estimated from the input audio using a neural network trained in a self-supervised manner. Second, the appropriate window size that encloses the audio features is predicted adaptively based on the estimated speaking rate. Another key element lies in the use of multiple audio windows of different sizes as input to the animation generator: a small window to concentrate on detailed information and a large window to consider broad phonemic information near the center frame. Finally, the speech animation is generated from the multiple adaptive audio windows. Our method can generate realistic speech animations from in-the-wild audios at any speaking rate, i.e., fast raps, slow songs, as well as normal speech. We demonstrate via extensive quantitative and qualitative evaluations including a user study that our method outperforms state-of-the-art approaches.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"32 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142374647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
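The sketch below illustrates the windowing idea only: extract a small and a large audio window around a center frame, with the window extent scaled by an estimated speaking rate. The scaling rule, base sizes, and feature dimensions are hypothetical; the paper's rate estimator and animation network are not shown.

```python
# Rate-adaptive extraction of multiple audio windows around a center frame (illustrative).
import numpy as np

def adaptive_windows(audio_feats, center, speaking_rate, base_small=8, base_large=32):
    """audio_feats: (T, D) per-frame features; speaking_rate: ~1.0 for normal speech,
    >1 for fast speech (shorter windows), <1 for slow speech (longer windows)."""
    windows = []
    for base in (base_small, base_large):
        half = max(1, int(round(base / speaking_rate)))          # adapt extent to the rate
        lo, hi = center - half, center + half + 1
        pad_lo, pad_hi = max(0, -lo), max(0, hi - len(audio_feats))
        w = audio_feats[max(0, lo):min(len(audio_feats), hi)]
        w = np.pad(w, ((pad_lo, pad_hi), (0, 0)))                # zero-pad at clip edges
        windows.append(w)
    return windows   # [small detailed window, large phonemic-context window]

feats = np.random.randn(300, 29)         # e.g., 300 frames of 29-D audio features
small, large = adaptive_windows(feats, center=150, speaking_rate=1.4)
print(small.shape, large.shape)
```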
ControlMat: A Controlled Generative Approach to Material Capture
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-08-27 DOI: 10.1145/3688830
Giuseppe Vecchio, Rosalie Martin, Arthur Roullier, Adrien Kaiser, Romain Rouffet, Valentin Deschaintre, Tamy Boubekeur
{"title":"ControlMat: A Controlled Generative Approach to Material Capture","authors":"Giuseppe Vecchio, Rosalie Martin, Arthur Roullier, Adrien Kaiser, Romain Rouffet, Valentin Deschaintre, Tamy Boubekeur","doi":"10.1145/3688830","DOIUrl":"https://doi.org/10.1145/3688830","url":null,"abstract":"Material reconstruction from a photograph is a key component of 3D content creation democratization. We propose to formulate this ill-posed problem as a controlled synthesis one, leveraging the recent progress in generative deep networks. We present ControlMat, a method which, given a single photograph with uncontrolled illumination as input, conditions a diffusion model to generate plausible, tileable, high-resolution physically-based digital materials. We carefully analyze the behavior of diffusion models for multi-channel outputs, adapt the sampling process to fuse multi-scale information and introduce rolled diffusion to enable both tileability and patched diffusion for high-resolution outputs. Our generative approach further permits exploration of a variety of materials that could correspond to the input image, mitigating the unknown lighting conditions. We show that our approach outperforms recent inference and latent-space optimization methods, and we carefully validate our diffusion process design choices. <jats:xref ref-type=\"fn\"> <jats:sup>1</jats:sup> </jats:xref>","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"17 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142374646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
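One common way to encourage tileable diffusion outputs is to roll the latent by a random offset before each denoising step and undo the roll afterwards, so seams are denoised with wrapped context. The sketch below shows that generic trick; it is not necessarily ControlMat's exact rolled-diffusion procedure, and the denoiser here is a placeholder lambda.

```python
# Rolling the latent during denoising to wrap content across borders (illustrative).
import torch

def rolled_denoise(latent, denoise_step, n_steps=50, generator=None):
    """latent: (1, C, H, W); denoise_step(latent, t) -> slightly less noisy latent."""
    g = generator if generator is not None else torch.Generator().manual_seed(0)
    _, _, h, w = latent.shape
    for t in range(n_steps, 0, -1):
        dy = int(torch.randint(0, h, (1,), generator=g))
        dx = int(torch.randint(0, w, (1,), generator=g))
        latent = torch.roll(latent, shifts=(dy, dx), dims=(-2, -1))   # wrap across the borders
        latent = denoise_step(latent, t)
        latent = torch.roll(latent, shifts=(-dy, -dx), dims=(-2, -1)) # undo the shift
    return latent

x = torch.randn(1, 4, 64, 64)
fake_step = lambda z, t: 0.98 * z          # placeholder for a real diffusion model step
print(rolled_denoise(x, fake_step).shape)
```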
A Closest Point Method for PDEs on Manifolds with Interior Boundary Conditions for Geometry Processing
IF 6.2 | Zone 1 | Computer Science
ACM Transactions on Graphics Pub Date: 2024-06-17 DOI: 10.1145/3673652
Nathan King, Haozhe Su, Mridul Aanjaneya, Steven Ruuth, Christopher Batty
{"title":"A Closest Point Method for PDEs on Manifolds with Interior Boundary Conditions for Geometry Processing","authors":"Nathan King, Haozhe Su, Mridul Aanjaneya, Steven Ruuth, Christopher Batty","doi":"10.1145/3673652","DOIUrl":"https://doi.org/10.1145/3673652","url":null,"abstract":"<p>Many geometry processing techniques require the solution of partial differential equations (PDEs) on manifolds embedded in (mathbb {R}^2 ) or (mathbb {R}^3 ), such as curves or surfaces. Such <i>manifold PDEs</i> often involve boundary conditions (e.g., Dirichlet or Neumann) prescribed at points or curves on the manifold’s interior or along the geometric (exterior) boundary of an open manifold. However, input manifolds can take many forms (e.g., triangle meshes, parametrizations, point clouds, implicit functions, etc.). Typically, one must generate a mesh to apply finite element-type techniques or derive specialized discretization procedures for each distinct manifold representation. We propose instead to address such problems in a unified manner through a novel extension of the <i>closest point method</i> (CPM) to handle interior boundary conditions. CPM solves the manifold PDE by solving a volumetric PDE defined over the Cartesian embedding space containing the manifold, and requires only a closest point representation of the manifold. Hence, CPM supports objects that are open or closed, orientable or not, and of any codimension. To enable support for interior boundary conditions we derive a method that implicitly partitions the embedding space across interior boundaries. CPM’s finite difference and interpolation stencils are adapted to respect this partition while preserving second-order accuracy. Additionally, we develop an efficient sparse-grid implementation and numerical solver that can scale to tens of millions of degrees of freedom, allowing PDEs to be solved on more complex manifolds. We demonstrate our method’s convergence behaviour on selected model PDEs and explore several geometry processing problems: diffusion curves on surfaces, geodesic distance, tangent vector field design, harmonic map construction, and reaction-diffusion textures. Our proposed approach thus offers a powerful and flexible new tool for a range of geometry processing tasks on general manifold representations.</p>","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"12 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141333673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
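To illustrate the core CPM loop (extend values along closest points, apply a standard Cartesian operator, re-extend), the sketch below runs surface heat diffusion on the unit circle embedded in a 2D grid, assuming NumPy and SciPy are available. Grid size and time step are toy choices, and the paper's interior boundary conditions and sparse-grid solver are not shown.

```python
# Closest point method sketch: heat flow of cos(3*theta) on the unit circle.
import numpy as np
from scipy.ndimage import map_coordinates

n, L = 101, 2.0
xs = np.linspace(-L, L, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Closest point on the unit circle to each grid node (the only geometry CPM needs).
theta = np.arctan2(Y, X)
CPx, CPy = np.cos(theta), np.sin(theta)
cp_idx = np.stack([(CPx + L) / h, (CPy + L) / h])   # closest points in grid-index space

def extend(u):
    """Closest point extension: u(x) <- u(cp(x)), via bilinear interpolation."""
    return map_coordinates(u, cp_idx.reshape(2, -1), order=1).reshape(u.shape)

u = extend(np.cos(3.0 * theta))                     # initial data on the circle, extended
dt = 0.1 * h * h                                    # stable explicit step for the 5-point Laplacian
for _ in range(200):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / (h * h)
    u = extend(u + dt * lap)                        # Cartesian heat step, then re-extend

# Near the circle, u now approximates surface (Laplace-Beltrami) heat flow of cos(3*theta).
print(u[np.abs(np.hypot(X, Y) - 1.0) < h].round(3)[:5])
```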