Latest Publications in Graphical Models

Improving the area-preserving parameterization of rational Bézier surfaces by rational bilinear transformation
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-07-03 DOI: 10.1016/j.gmod.2025.101278
Xiaowei Li, Yingjie Wu, Yaohui Sun, Xin Chen, Yanru Chen, Yi-jun Yang
{"title":"Improving the area-preserving parameterization of rational Bézier surfaces by rational bilinear transformation","authors":"Xiaowei Li ,&nbsp;Yingjie Wu ,&nbsp;Yaohui Sun ,&nbsp;Xin Chen ,&nbsp;Yanru Chen ,&nbsp;Yi-jun Yang","doi":"10.1016/j.gmod.2025.101278","DOIUrl":"10.1016/j.gmod.2025.101278","url":null,"abstract":"<div><div>To improve the area-preserving parameterization quality of rational Bézier surfaces, an optimization algorithm using bilinear reparameterization is proposed. First, the rational Bézier surface is transformed using a rational bilinear transformation, which provides greater degrees of freedom compared to Möbius transformations, while preserving the rational Bézier representation. Then, the energy function is discretized using the composite Simpson’s rule, and its gradients are computed for optimization. Finally, the optimal rational bilinear transformation is determined using the L-BFGS method. Experimental results are presented to demonstrate the reparameterization effects through the circle-packing texture map, iso-parametric curve net, and color-coded images of APP energy in the proposed approach.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101278"},"PeriodicalIF":2.5,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144534220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Point cloud geometry compression based on the combination of interlayer residual and IRN concatenated residual
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-27 DOI: 10.1016/j.gmod.2025.101279
Meng Huang, Qian Xu, Wenxuan Xu
{"title":"Point cloud geometry compression based on the combination of interlayer residual and IRN concatenated residual","authors":"Meng Huang,&nbsp;Qian Xu,&nbsp;Wenxuan Xu","doi":"10.1016/j.gmod.2025.101279","DOIUrl":"10.1016/j.gmod.2025.101279","url":null,"abstract":"<div><div>Point clouds have been attracting more and more attentions due to its capability of representing objects precisely, such as autonomous vehicle navigation, VR/AR, cultural heritage protection, etc. However, the enormous amount of data carried in point clouds presents significant challenges for transmission and storage. To solve this problem, this dissertation presents a point cloud compression framework based on the combination of interlayer residual and IRN concatenated residual. This paper deployed upsampling design after downsampled point cloud data. It calculates the residuals among point cloud data through downsampling and upsampling processes, consequently, maintains accuracy and reduces errors within the downsampling process. In addition, a novel Inception ResNet-Concatenated Residual Module is designed for maintaining the spatial correlation between layers and blocks. At the same time, it can extract the global and detailed features within point cloud data. Besides, Attention Module is dedicated to enhance the focus on salient features. Respectively compared with the traditional (G-PCC) and the learning point cloud compression method (PCGC v2), this paper lists a series of solid experiments data proving a 70% to 90% and a 6% to 9% BD-Rate gains on 8iVFB and Owlii datasets.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101279"},"PeriodicalIF":2.5,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144490700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
L²-GNN: Graph neural networks with fast spectral filters using twice linear parameterization
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-26 DOI: 10.1016/j.gmod.2025.101276
Siying Huang, Xin Yang, Zhengda Lu, Hongxing Qin, Huaiwen Zhang, Yiqun Wang
{"title":"L2-GNN: Graph neural networks with fast spectral filters using twice linear parameterization","authors":"Siying Huang ,&nbsp;Xin Yang ,&nbsp;Zhengda Lu ,&nbsp;Hongxing Qin ,&nbsp;Huaiwen Zhang ,&nbsp;Yiqun Wang","doi":"10.1016/j.gmod.2025.101276","DOIUrl":"10.1016/j.gmod.2025.101276","url":null,"abstract":"<div><div>To improve learning on irregular 3D shapes, such as meshes with varying discretizations and point clouds with different samplings, we propose L<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>-GNN, a new graph neural network that approximates the spectral filters using twice linear parameterization. First, we parameterize the spectral filters using wavelet filter basis functions. The parameterization allows for an enlarged receptive field of graph convolutions, which can simultaneously capture low-frequency and high-frequency information. Second, we parameterize the wavelet filter basis functions using Chebyshev polynomial basis functions. This parameterization reduces the computational complexity of graph convolutions while maintaining robustness to the change of mesh discretization and point cloud sampling. Our L<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>-GNN based on the fast spectral filter can be used for shape correspondence, classification, and segmentation tasks on non-regular mesh or point cloud data. Experimental results show that our method outperforms the current state of the art in terms of both quality and efficiency.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101276"},"PeriodicalIF":2.5,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144490699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RS-SpecSDF: Reflection-supervised surface reconstruction and material estimation for specular indoor scenes
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-25 DOI: 10.1016/j.gmod.2025.101277
Dong-Yu Chen, Hao-Xiang Chen, Qun-Ce Xu, Tai-Jiang Mu
{"title":"RS-SpecSDF: Reflection-supervised surface reconstruction and material estimation for specular indoor scenes","authors":"Dong-Yu Chen,&nbsp;Hao-Xiang Chen,&nbsp;Qun-Ce Xu,&nbsp;Tai-Jiang Mu","doi":"10.1016/j.gmod.2025.101277","DOIUrl":"10.1016/j.gmod.2025.101277","url":null,"abstract":"<div><div>Neural Radiance Field (NeRF) has achieved impressive 3D reconstruction quality using implicit scene representations. However, planar specular reflections pose significant challenges in the 3D reconstruction task. It is a common practice to decompose the scene into physically real geometries and virtual images produced by the reflections. However, current methods struggle to resolve the ambiguities in the decomposition process, because they mostly rely on mirror masks as external cues. They also fail to acquire accurate surface materials, which is essential for downstream applications of the recovered geometries. In this paper, we present RS-SpecSDF, a novel framework for indoor scene surface reconstruction that can faithfully reconstruct specular reflectors while accurately decomposing the reflection from the scene geometries and recovering the accurate specular fraction and diffuse appearance of the surface without requiring mirror masks. Our key idea is to perform reflection ray-casting and use it as supervision for the decomposition of reflection and surface material. Our method is based on an observation that the virtual image seen by the camera ray should be consistent with the object that the ray hits after reflecting off the specular surface. To leverage this constraint, we propose the Reflection Consistency Loss and Reflection Certainty Loss to regularize the decomposition. Experiments conducted on both our newly-proposed synthetic dataset and a real-captured dataset demonstrate that our method achieves high-quality surface reconstruction and accurate material decomposition results without the need of mirror masks.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101277"},"PeriodicalIF":2.5,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144472382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LDM: Large tensorial SDF model for textured mesh generation
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-21 DOI: 10.1016/j.gmod.2025.101271
Rengan Xie, Kai Huang, Xiaoliang Luo, Yizheng Chen, Lvchun Wang, Qi Wang, Qi Ye, Wei Chen, Wenting Zheng, Yuchi Huo
{"title":"LDM: Large tensorial SDF model for textured mesh generation","authors":"Rengan Xie ,&nbsp;Kai Huang ,&nbsp;Xiaoliang Luo ,&nbsp;Yizheng Chen ,&nbsp;Lvchun Wang ,&nbsp;Qi Wang ,&nbsp;Qi Ye ,&nbsp;Wei Chen ,&nbsp;Wenting Zheng ,&nbsp;Yuchi Huo","doi":"10.1016/j.gmod.2025.101271","DOIUrl":"10.1016/j.gmod.2025.101271","url":null,"abstract":"<div><div>Previous efforts have managed to generate production-ready 3D assets from text or images. However, these methods primarily employ NeRF or 3D Gaussian representations, which are not adept at producing smooth, high-quality geometries required by modern rendering pipelines. In this paper, we propose LDM, a <strong>L</strong>arge tensorial S<strong>D</strong>F <strong>M</strong>odel, which introduces a novel feed-forward framework capable of generating high-fidelity, illumination-decoupled textured mesh from a single image or text prompts. We firstly utilize a multi-view diffusion model to generate sparse multi-view inputs from single images or text prompts, and then a transformer-based model is trained to predict a tensorial SDF field from these sparse multi-view image inputs. Finally, we employ a gradient-based mesh optimization layer to refine this model, enabling it to produce an SDF field from which high-quality textured meshes can be extracted. Extensive experiments demonstrate that our method can generate diverse, high-quality 3D mesh assets with corresponding decomposed RGB textures within seconds. The project code is available at <span><span>https://github.com/rgxie/LDM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101271"},"PeriodicalIF":2.5,"publicationDate":"2025-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144330266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimization of cross-derivatives for ribbon-based multi-sided surfaces
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-19 DOI: 10.1016/j.gmod.2025.101275
Erkan Gunpinar, A. Alper Tasmektepligil, Márton Vaitkus, Péter Salvi
{"title":"Optimization of cross-derivatives for ribbon-based multi-sided surfaces","authors":"Erkan Gunpinar ,&nbsp;A. Alper Tasmektepligil ,&nbsp;Márton Vaitkus ,&nbsp;Péter Salvi","doi":"10.1016/j.gmod.2025.101275","DOIUrl":"10.1016/j.gmod.2025.101275","url":null,"abstract":"<div><div>This work investigates ribbon-based multi-sided surfaces that satisfy positional and cross-derivative constraints to ensure smooth transitions with adjacent tensor-product and multi-sided surfaces. The influence of cross-derivatives, crucial to surface quality, is studied within Kato’s transfinite surface interpolation instead of control point-based methods. To enhance surface quality, the surface is optimized using cost functions based on curvature metrics. Specifically, a Gaussian curvature-based cost function is also proposed in this work. An automated optimization procedure is introduced to determine rotation angles of cross-derivatives around normals and their magnitudes along curves in Kato’s interpolation scheme. Experimental results using both primitive (e.g., spherical) and realistic examples highlight the effectiveness of the proposed approach in improving surface quality.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101275"},"PeriodicalIF":2.5,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-18 DOI: 10.1016/j.gmod.2025.101274
Zhicong Tang, Shuyang Gu, Chunyu Wang, Ting Zhang, Jianmin Bao, Dong Chen, Baining Guo
{"title":"VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder","authors":"Zhicong Tang ,&nbsp;Shuyang Gu ,&nbsp;Chunyu Wang ,&nbsp;Ting Zhang ,&nbsp;Jianmin Bao ,&nbsp;Dong Chen ,&nbsp;Baining Guo","doi":"10.1016/j.gmod.2025.101274","DOIUrl":"10.1016/j.gmod.2025.101274","url":null,"abstract":"<div><div>This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that directly synthesizes 3D objects from textual descriptions. It bypasses the conventional score distillation loss based or text-to-image-to-3D approaches. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed to efficiently acquire feature volumes from multi-view images. The 3D volumes are then trained on a diffusion model for text-to-3D generation using a 3D U-Net. This research further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, demonstrates promising outcomes in producing diverse and recognizable samples from text prompts. Notably, it empowers finer control over object part characteristics through textual cues, fostering model creativity by seamlessly combining multiple concepts within a single object. This research significantly contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101274"},"PeriodicalIF":2.5,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Goal-oriented 3D pattern adjustment with machine learning
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-17 DOI: 10.1016/j.gmod.2025.101272
Megha Shastry, Ye Fan, Clarissa Martins, Dinesh K. Pai
{"title":"Goal-oriented 3D pattern adjustment with machine learning","authors":"Megha Shastry ,&nbsp;Ye Fan ,&nbsp;Clarissa Martins ,&nbsp;Dinesh K. Pai","doi":"10.1016/j.gmod.2025.101272","DOIUrl":"10.1016/j.gmod.2025.101272","url":null,"abstract":"<div><div>Fit and sizing of clothing are fundamental problems in the field of garment design, manufacture, and retail. Here we propose new computational methods for adjusting the fit of clothing on realistic models of the human body by interactively modifying desired <em>fit attributes</em>. Clothing fit represents the relationship between the body and the garment, and can be quantified using physical fit attributes such as ease and pressure on the body. However, the relationship between pattern geometry and such fit attributes is notoriously complex and nonlinear, requiring deep pattern making expertise to adjust patterns to achieve fit goals. Such attributes can be computed by physically based simulations, using soft avatars. Here we propose a method to learn the relationship between the fit attributes and the space of 2D pattern edits. We demonstrate our method via interactive tools that directly edit fit attributes in 3D and instantaneously predict the corresponding pattern adjustments. The approach has been tested with a range of garment types, and validated by comparing with physical prototypes. Our method introduces an alternative way to directly express fit adjustment goals, making pattern adjustment more broadly accessible. As an additional benefit, the proposed approach allows pattern adjustments to be systematized, enabling better communication and audit of decisions.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101272"},"PeriodicalIF":2.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144298108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SEDFMNet: A Simple and Efficient Unsupervised Functional Map for Shape Correspondence Based on Deconstruction
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-01 DOI: 10.1016/j.gmod.2025.101270
Haojun Xu, Qinsong Li, Ling Hu, Shengjun Liu, Haibo Wang, Xinru Liu
{"title":"SEDFMNet: A Simple and Efficient Unsupervised Functional Map for Shape Correspondence Based on Deconstruction","authors":"Haojun Xu ,&nbsp;Qinsong Li ,&nbsp;Ling Hu ,&nbsp;Shengjun Liu ,&nbsp;Haibo Wang ,&nbsp;Xinru Liu","doi":"10.1016/j.gmod.2025.101270","DOIUrl":"10.1016/j.gmod.2025.101270","url":null,"abstract":"<div><div>In recent years, deep functional maps (DFM) have emerged as a leading learning-based framework for non-rigid shape-matching problems, offering diverse network architectures for this domain. This richness also makes exploring better and novel design beliefs for existing powerful DFM components to promote performance meaningful and engaging. This paper delves into this problem and successfully produces the SEDFMNet, a simple yet highly efficient DFM pipeline. To achieve this, we systematically deconstruct the core modules of the general DFM framework and analyze key design choices in existing approaches to identify the most critical components through extensive experiments. By reassembling these crucial components, we culminate in developing our SEDFMNet, which features a simpler structure than conventional DFM pipelines while delivering superior performance. Our approach is rigorously validated through comprehensive experiments on diverse datasets, where the SEDFMNet consistently achieves state-of-the-art results, even in challenging scenarios such as non-isometric shape matching and shape matching with topological noise. Our work offers fresh insights into DFM research and opens new avenues for advancing this field.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"139 ","pages":"Article 101270"},"PeriodicalIF":2.5,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144203918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FastClothGNN: Optimizing message passing in Graph Neural Networks for accelerating real-time cloth simulation
IF 2.5 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2025-06-01 DOI: 10.1016/j.gmod.2025.101273
Yang Zhang, Kailuo Yu, Xinyu Zhang
{"title":"FastClothGNN: Optimizing message passing in Graph Neural Networks for accelerating real-time cloth simulation","authors":"Yang Zhang,&nbsp;Kailuo Yu,&nbsp;Xinyu Zhang","doi":"10.1016/j.gmod.2025.101273","DOIUrl":"10.1016/j.gmod.2025.101273","url":null,"abstract":"<div><div>We present an efficient message aggregation algorithm FastClothGNN for Graph Neural Networks (GNNs) specifically designed for real-time cloth simulation in virtual try-on systems. Our approach reduces computational redundancy by optimizing neighbor sampling and minimizing unnecessary message-passing between cloth and obstacle nodes. This significantly accelerates the real-time performance of cloth simulation, making it ideal for interactive virtual environments. Our experiments demonstrate that our algorithm significantly enhances memory efficiency and improve the performance both in training and in inference in GNNs. This optimization enables our algorithm to be effectively applied to resource-constrained, providing users with more seamless and immersive interactions and thereby increasing the potential for practical real-time applications.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"139 ","pages":"Article 101273"},"PeriodicalIF":2.5,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144240201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0