Graphical Models: Latest Articles

GarTemFormer: Temporal transformer-based for optimizing virtual garment animation
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-10-11 | DOI: 10.1016/j.gmod.2024.101235
Jiazhe Miao, Tao Peng, Fei Fang, Xinrong Hu, Li Li
Abstract: Virtual garment animation and deformation constitute a pivotal research direction in computer graphics, with extensive applications in computer games, animation, and film. Traditional physics-based methods can simulate the physical characteristics of garments, such as elasticity and gravity, to generate realistic deformation effects, but their computational complexity hinders real-time animation generation. Data-driven approaches instead learn from existing garment deformation data, enabling rapid animation generation, yet the resulting animations often lack realism and struggle to capture subtle variations in garment behavior. We propose an approach that balances realism and speed: by considering both the spatial and temporal dimensions, we leverage real-world videos to capture human motion and garment deformation, thereby producing more realistic animation effects. We address the complexity of spatiotemporal attention by aligning input features and computing spatiotemporal attention at each spatial position in a batch-wise manner. For garment deformation, garment segmentation techniques extract garment templates from the videos. Subsequently, our Transformer-based temporal framework captures the correlation between garment deformation and human body shape features, as well as frame-level dependencies. Furthermore, a feature fusion strategy merges shape and motion features, and post-processing resolves penetration between clothing and the human body, generating collision-free garment deformation sequences. Qualitative and quantitative experiments demonstrate the superiority of our approach over existing methods, efficiently producing temporally coherent and realistic dynamic garment deformations. (Graphical Models 136, Article 101235)
Citations: 0
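The batch-wise spatiotemporal attention described in the abstract above can be illustrated with plain scaled dot-product attention computed independently for each spatial position. This is a minimal numpy sketch under assumed shapes; the function name and dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def batched_attention(q, k, v):
    """Scaled dot-product attention applied independently per batch entry.

    q, k, v: arrays of shape (batch, time, dim). Treating each batch entry
    as one spatial position, attention over the temporal axis is computed
    for every spatial location in a single batched call.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)   # (batch, time, time)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # (batch, time, dim)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 16))  # 4 spatial positions, 8 frames, 16-dim features
out = batched_attention(x, x, x)
print(out.shape)  # (4, 8, 16)
```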
Building semantic segmentation from large-scale point clouds via primitive recognition
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-10-10 | DOI: 10.1016/j.gmod.2024.101234
Chiara Romanengo, Daniela Cabiddu, Simone Pittaluga, Michela Mortara
Abstract: Modelling objects at a large resolution or scale brings challenges in the storage and processing of data and requires efficient structures. In the context of modelling urban environments, we face both issues: 3D data from acquisition extends at geographic scale, and digitization of buildings of historical value can be particularly dense. It is therefore crucial to exploit the point cloud derived from acquisition as much as possible, before (or alongside) deriving other representations (e.g., surface or volume meshes) for further needs (e.g., visualization, simulation). In this paper, we present our work on processing 3D data of urban areas towards the generation of a semantic model for a city digital twin. Specifically, we focus on the recognition of shape primitives (e.g., planes, cylinders, spheres) in point clouds representing urban scenes, the main application being semantic segmentation into walls, roofs, streets, domes, vaults, arches, and so on. We extend the conference contribution in Romanengo et al. (2023a), which presented preliminary results on single buildings. In this extended version, we generalize the approach to whole cities by preliminarily splitting the point cloud building-wise and streamlining the pipeline. We add a thorough experimentation with a benchmark dataset from the city of Tallinn (47,000 buildings), a portion of Vaihingen (170 buildings), and our case studies in Catania and Matera, Italy (4 high-resolution buildings). Results show that our approach successfully deals with point clouds of considerable size, either surveyed at high resolution or covering wide areas. In both cases, it proves robust to input noise and outliers but sensitive to uneven sampling density. (Graphical Models 136, Article 101234)
Citations: 0
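Primitive recognition of the kind described above is commonly seeded by robust model fitting. As a minimal illustration, the following numpy sketch detects a dominant plane with RANSAC; the paper's pipeline covers further primitive types (cylinders, spheres, etc.), and all names and thresholds here are assumptions, not the authors' code.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.02, seed=0):
    """Detect the dominant plane in a point cloud via RANSAC.

    Returns ((n, d), mask) with the plane n . x = d (unit normal n)
    and the boolean inlier mask for points within distance tol.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        d = n @ a
        mask = np.abs(points @ n - d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model, best_mask

# Synthetic scene: a noisy ground plane z = 0 plus scattered outliers.
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.005, 500)])
outliers = rng.uniform(-1, 1, (100, 3))
(n, d), inliers = ransac_plane(np.vstack([plane, outliers]))
print(abs(n[2]))  # close to 1: the recovered normal is near the z axis
```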
Deep-learning-based point cloud completion methods: A review
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-10-03 | DOI: 10.1016/j.gmod.2024.101233
Kun Zhang, Ao Zhang, Xiaohong Wang, Weisong Li
Abstract: Point cloud completion uses algorithms to repair missing parts of 3D data and obtain high-quality point clouds, a technology crucial for applications such as autonomous driving and urban planning. With the progress of deep learning, the robustness and accuracy of point cloud completion have improved significantly; however, the quality of completed point clouds requires further enhancement to satisfy practical requirements. In this study, we conduct an extensive survey of point cloud completion methods with the following objectives: (i) we classify point cloud completion methods by their underlying principles, such as point-based, convolution-based, GAN-based, and geometry-based methods, and thoroughly investigate the advantages and limitations of each category; (ii) we collect publicly available datasets for point cloud completion algorithms and conduct experimental comparisons using various typical deep-learning networks to draw conclusions; and (iii) we discuss future research trends in this rapidly evolving field. (Graphical Models 136, Article 101233)
Citations: 0
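Experimental comparisons of completion networks typically report the Chamfer distance between the completed and ground-truth clouds. A brute-force numpy version of one common variant (squared distances, averaged symmetrically; not necessarily the exact variant used in the review's experiments):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (n, 3) and q (m, 3).

    For each point, takes the squared distance to its nearest neighbour in
    the other set, then averages both directions. Lower is better; it is
    0 exactly when each point coincides with one in the other set.
    """
    diff = p[:, None, :] - q[None, :, :]        # (n, m, 3) pairwise offsets
    d2 = np.einsum("nmk,nmk->nm", diff, diff)   # squared pairwise distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, b))  # 0.0
```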
Sketch-2-4D: Sketch driven dynamic 3D scene generation
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-09-16 | DOI: 10.1016/j.gmod.2024.101231
Guo-Wei Yang, Dong-Yu Chen, Tai-Jiang Mu
Abstract: Sketch-based content generation offers flexible controllability, making it a promising narrative avenue in film production. Directors often visualize their imagination by crafting storyboards, using sketches and textual descriptions for each shot. However, current video generation methods suffer from three-dimensional inconsistency, with notable artifacts during large motions or camera pans around scenes. A suitable solution is to directly generate 4D scenes, enabling consistent generation of dynamic three-dimensional scenes. We define the Sketch-2-4D problem, aiming to enhance controllability and consistency in this context. We propose a novel Control Score Distillation Sampling (SDS-C) for sketch-based 4D scene generation, providing precise control over scene dynamics. We further design Spatial Consistency Modules and Temporal Consistency Modules to tackle, respectively, the spatial and temporal inconsistencies introduced by sketch-based control. Extensive experiments demonstrate the effectiveness of our approach. (Graphical Models 136, Article 101231)
Citations: 0
FACE: Feature-preserving CAD model surface reconstruction
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-09-12 | DOI: 10.1016/j.gmod.2024.101230
Shuxian Cai, Yuanyan Ye, Juan Cao, Zhonggui Chen
Abstract: Feature lines play a pivotal role in the reconstruction of CAD models, yet a robust explicit reconstruction algorithm capable of recovering sharp features from noisy, non-uniform point clouds has been lacking. In this paper, we propose a feature-preserving CAD model surface reconstruction algorithm named FACE. The algorithm first preprocesses the point cloud through denoising and resampling, yielding a high-quality point cloud that is free of noise and uniformly distributed. It then employs discrete optimal transport to detect feature regions and generates dense points along potential feature lines to enhance features. Finally, an advancing-front surface reconstruction method based on normal vector directions reconstructs the enhanced point cloud. Extensive experiments demonstrate that, for contaminated point clouds, the algorithm excels not only at reconstructing straight edges and corner points but also at handling curved edges and surfaces, surpassing existing methods. (Graphical Models 136, Article 101230)
Citations: 0
Image vectorization using a sparse patch layout
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-09-05 | DOI: 10.1016/j.gmod.2024.101229
K. He, J.B.T.M. Roerdink, J. Kosinka
Abstract: Mesh-based image vectorization techniques have been studied for a long time, mostly owing to their compactness and flexibility in capturing image features. However, existing methods often produce relatively dense meshes, especially when applied to images with high-frequency details or textures. We present a novel method that automatically vectorizes an image into a sparse collection of Coons patches whose size adapts to image features. To balance the number of patches against the accuracy of feature alignment, we generate the layout from a harmonic cross field constrained by image features. We support T-junctions, which keeps the number of patches low and ensures local adaptation to feature density, naturally complemented by varying mesh-color resolution over the patches. Our experimental results demonstrate the utility, accuracy, and sparsity of our method. (Graphical Models 135, Article 101229)
Citations: 0
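A Coons patch, the primitive that the method above tiles the image with, interpolates four boundary curves by bilinear blending: the two ruled surfaces along u and v are added and the bilinear interpolant of the four corners is subtracted. A minimal evaluator (geometry only; the paper additionally varies mesh-color resolution over each patch):

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Evaluate a bilinearly blended Coons patch at (u, v) in [0, 1]^2.

    c0, c1: boundary curves along u at v = 0 and v = 1; d0, d1: boundary
    curves along v at u = 0 and u = 1 (each a callable returning a point).
    The boundaries must meet at the corners, e.g. c0(0) == d0(0).
    """
    p00, p01 = c0(0.0), c1(0.0)                  # corners at u = 0
    p10, p11 = c0(1.0), c1(1.0)                  # corners at u = 1
    ruled_u = (1 - v) * c0(u) + v * c1(u)        # lofting between c0 and c1
    ruled_v = (1 - u) * d0(v) + u * d1(v)        # lofting between d0 and d1
    bilinear = ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
                + (1 - u) * v * p01 + u * v * p11)
    return ruled_u + ruled_v - bilinear

# Unit-square boundaries: the patch reduces to the identity map.
c0 = lambda u: np.array([u, 0.0])
c1 = lambda u: np.array([u, 1.0])
d0 = lambda v: np.array([0.0, v])
d1 = lambda v: np.array([1.0, v])
print(coons_patch(c0, c1, d0, d1, 0.25, 0.75))  # [0.25 0.75]
```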
Corrigendum to "Image restoration for digital line drawings using line masks" [Graphical Models 135 (2024) 101226]
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-09-02 | DOI: 10.1016/j.gmod.2024.101228
Yan Zhu, Yasushi Yamaguchi
(Graphical Models 135, Article 101228; no abstract.)
Citations: 0
Image restoration for digital line drawings using line masks
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-08-20 | DOI: 10.1016/j.gmod.2024.101226
Yan Zhu, Yasushi Yamaguchi
Abstract: The restoration of digital images holds practical significance because degradation of digital image data on the internet is common. State-of-the-art image restoration methods usually employ end-to-end trained networks. However, we argue that a network trained on diverse image pairs is not optimal for restoring line drawings, which have extensive plain backgrounds. We propose a line-drawing restoration framework that takes a restoration neural network as backbone and processes a degraded input line drawing in two steps. First, a mask-predicting network predicts a line mask indicating the likely locations of foreground and background in the underlying original line drawing. Next, we feed the degraded line drawing together with the predicted line mask into the backbone restoration network, substituting the traditional L1 loss with a masked Mean Square Error (MSE) loss. We test our framework on two classical image restoration tasks, JPEG restoration and super-resolution, and experiments demonstrate that our framework achieves better quantitative and visual results in most cases. (Graphical Models 135, Article 101226)
Citations: 0
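The masked MSE loss mentioned above can be sketched as a per-pixel weighted squared error in which the predicted line mask decides the weights. The foreground/background weights below are illustrative defaults, not values from the paper:

```python
import numpy as np

def masked_mse(pred, target, mask, fg_weight=1.0, bg_weight=0.1):
    """MSE in which a line mask re-weights foreground (stroke) pixels.

    mask: 1 where the line mask marks foreground, 0 for the plain
    background. With fg_weight == bg_weight this reduces to plain MSE.
    """
    w = np.where(mask > 0.5, fg_weight, bg_weight)
    return float((w * (pred - target) ** 2).sum() / w.sum())

pred = np.array([[0.0, 1.0], [1.0, 0.0]])
target = np.array([[0.0, 0.0], [1.0, 0.0]])
mask = np.array([[0, 1], [1, 0]])   # only the top-right pixel is stroke
print(masked_mse(pred, target, mask))  # 1/2.2, about 0.4545
```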
Reconstruction of the bending line for free-form bent components extracting the centroids and exploiting NURBS curves
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-08-19 | DOI: 10.1016/j.gmod.2024.101227
Lorenzo Scandola, Maximilian Erber, Philipp Hagenlocher, Florian Steinlehner, Wolfram Volk
Abstract: Free-form bending belongs to the kinematics-based forming processes and allows the manufacturing of arbitrary 3D-bent components. To obtain the desired part, the tool kinematics is adjusted by comparing the target and obtained bending lines. While the target geometry consists of parametric CAD data, the obtained geometry is a surface mesh, which makes bending-line extraction a challenging task. In this paper, the reconstruction of the bending line for free-form bent components is presented. The strategy relies on extracting the centroids, for which a ray-casting algorithm is developed and compared to an existing Voronoi-based method. The resulting points are then used to fit a NURBS parametric model of the curve. The algorithm's parameters are investigated with a sensitivity analysis, and its performance is evaluated with a defined error metric. Finally, the strategy is validated by comparing its results with the Voronoi-based algorithm and by investigating different cross-sections and geometries. (Graphical Models 135, Article 101227)
Citations: 0
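The centroid-then-fit strategy above can be sketched in a few lines: slice the point cloud into cross-sections, take each slice's centroid, and fit a smooth curve through the centroids. To stay self-contained, this sketch bins along the x axis and fits per-coordinate polynomials as a simplified stand-in for the paper's NURBS fit, and replaces its ray-casting centroid extraction with a per-bin mean; all names and parameters are illustrative:

```python
import numpy as np

def bending_line(points, n_bins=20, degree=3):
    """Approximate the bending line of a tube-like point cloud.

    Bins points along x, takes each bin's centroid, and least-squares
    fits one polynomial per remaining coordinate. Returns the centroids
    and a callable evaluating the fitted curve at given x values.
    """
    edges = np.linspace(points[:, 0].min(), points[:, 0].max(), n_bins + 1)
    idx = np.clip(np.digitize(points[:, 0], edges) - 1, 0, n_bins - 1)
    centroids = np.array([points[idx == i].mean(axis=0)
                          for i in range(n_bins) if np.any(idx == i)])
    t = centroids[:, 0]
    coeffs = [np.polyfit(t, centroids[:, k], degree) for k in (1, 2)]
    return centroids, lambda x: np.stack(
        [x, np.polyval(coeffs[0], x), np.polyval(coeffs[1], x)], axis=-1)

# Synthetic bent tube: circular cross-sections around the curve y = x^2, z = 0.
x = np.repeat(np.linspace(0, 1, 60), 40)
ang = np.tile(np.linspace(0, 2 * np.pi, 40), 60)
pts = np.column_stack([x, x**2 + 0.05 * np.cos(ang), 0.05 * np.sin(ang)])
centroids, curve = bending_line(pts)
print(np.abs(curve(np.array([0.5]))[0, 1] - 0.25) < 0.01)  # True: recovers y = x^2
```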
Mesh deformation-based single-view 3D reconstruction of thin eyeglasses frames with differentiable rendering
IF 2.5 | CAS Zone 4 | Computer Science
Graphical Models | Pub Date: 2024-08-09 | DOI: 10.1016/j.gmod.2024.101225
Fan Zhang, Ziyue Ji, Weiguang Kang, Weiqing Li, Zhiyong Su
Abstract: With the support of Virtual Reality (VR) and Augmented Reality (AR) technologies, 3D virtual eyeglasses try-on is well on its way to becoming a trending solution, offering a "try on" option for selecting the perfect pair of eyeglasses from the comfort of your own home. Reconstructing eyeglasses frames from a single image with traditional depth- and image-based methods is extremely difficult due to their unique characteristics: a lack of sufficient texture features, thin elements, and severe self-occlusions. In this paper, we propose the first mesh deformation-based reconstruction framework for recovering high-precision 3D full-frame eyeglasses models from a single RGB image, leveraging prior and domain-specific knowledge. Building on a synthetic eyeglasses-frame dataset, we first define a class-specific eyeglasses frame template with pre-defined keypoints. Given an input eyeglasses frame image with thin structures and few texture features, we then design a keypoint detector and refiner that detects the predefined keypoints in a coarse-to-fine manner to estimate the camera pose accurately. Using differentiable rendering, we further propose a novel optimization approach that produces correct geometry by progressively performing free-form deformation (FFD) on the template mesh. We define a series of loss functions to enforce consistency between the rendered result and the corresponding RGB input, utilizing constraints from the inherent structure, silhouettes, keypoints, per-pixel shading information, and so on. Experimental results on both the synthetic dataset and real images demonstrate the effectiveness of the proposed algorithm. (Graphical Models 135, Article 101225)
Citations: 0
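The free-form deformation (FFD) step mentioned above follows the classic trivariate Bernstein formulation: embed the template in a control-point lattice, express each vertex in local lattice coordinates, and let displaced control points deform the embedded vertices. A generic numpy sketch of standard FFD, not the authors' implementation:

```python
import numpy as np
from math import comb

def ffd(points, lattice, bbox_min, bbox_max):
    """Free-form deformation of points by a control lattice.

    lattice: (l+1, m+1, n+1, 3) control points. Points are mapped to
    local (s, t, u) coordinates of the bounding box and deformed by a
    trivariate Bernstein blend of the control points.
    """
    l, m, n = (d - 1 for d in lattice.shape[:3])
    stu = (points - bbox_min) / (bbox_max - bbox_min)  # local coords in [0,1]^3

    def bernstein(deg, coord):  # (npts, deg+1) Bernstein basis values
        i = np.arange(deg + 1)
        return (np.array([comb(deg, k) for k in i])
                * coord[:, None] ** i * (1 - coord[:, None]) ** (deg - i))

    bs, bt, bu = (bernstein(d, stu[:, k]) for k, d in enumerate((l, m, n)))
    return np.einsum("pi,pj,pk,ijkc->pc", bs, bt, bu, lattice)

# Undisplaced lattice: control points at rest leave the points unchanged.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"), axis=-1)
pts = np.random.default_rng(2).uniform(0.1, 0.9, (50, 3))
out = ffd(pts, grid, np.zeros(3), np.ones(3))
print(np.allclose(out, pts))  # True: the rest lattice is the identity map
```

Displacing lattice entries then bends everything embedded in the corresponding lattice cells, which is what makes FFD attractive for deforming a thin frame template without remeshing.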