Graphical Models: Latest Articles

A systematic approach for enhancement of homogeneous background images using structural information
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-25, DOI: 10.1016/j.gmod.2023.101206
D. Vijayalakshmi, Malaya Kumar Nath
Abstract: Image enhancement is an indispensable pre-processing step for many image processing applications. Histogram equalization, one of the most widespread enhancement techniques, improves image quality by expanding pixel values to fill the entire dynamic grayscale range. However, it can introduce visual artifacts, lose structural information near edges through its many-to-one mapping, and shift the average luminance toward higher values. This paper proposes an enhancement algorithm for homogeneous-background images based on structural information. The intensities are divided into two segments at the median value to preserve the average luminance. Unlike traditional techniques, the algorithm incorporates spatial locations in the equalization process rather than only counting intensity occurrences: the occurrences of each intensity, together with their spatial locations, are combined via Rényi entropy into a discrete function, and an adaptive clipping limit is applied to that function to control the enhancement rate. Histogram equalization is then performed on each segment separately, and the equalized segments are merged into the enhanced image. The method is validated on the CEED, CSIQ, LOL, and TID2013 databases. Experimental results show that it improves contrast while preserving structural information, detail, and average luminance, reflected in high contrast improvement index, structural similarity index, and discrete entropy values, and low average mean brightness error, compared with methods from the literature, including deep learning architectures.
Citations: 0
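The median-split-and-clip idea above can be sketched as follows. This is a minimal illustration using plain per-segment histograms; the paper's Rényi-entropy weighting and spatial-occurrence terms are omitted, and the function name and clipping rule are ours:

```python
import numpy as np

def median_split_equalize(img, clip_ratio=2.0):
    """Equalize the two halves of the gray range separately, split at the
    median, with a simple mean-based clipping limit on each histogram.
    Illustrative sketch only: the paper's Renyi-entropy and
    spatial-occurrence terms are omitted."""
    img = np.asarray(img, dtype=np.uint8)
    med = int(np.median(img))

    def segment_lut(values, lo, hi):
        # histogram restricted to [lo, hi], clipped to limit enhancement
        hist = np.bincount(values, minlength=256)[lo:hi + 1].astype(float)
        if hist.sum() == 0:
            return {v: v for v in range(lo, hi + 1)}  # empty segment: identity
        hist = np.minimum(hist, clip_ratio * hist.mean())
        cdf = np.cumsum(hist)
        cdf /= cdf[-1]
        return {lo + i: int(round(lo + cdf[i] * (hi - lo)))
                for i in range(hi - lo + 1)}

    lut = np.arange(256)
    for k, v in segment_lut(img[img <= med], 0, med).items():
        lut[k] = v                       # lower segment stays in [0, med]
    for k, v in segment_lut(img[img > med], min(med + 1, 255), 255).items():
        lut[k] = v                       # upper segment stays in (med, 255]
    return lut[img]
```

Because each segment is equalized within its own half of the range, pixels below the median remain below it, which is what preserves the average luminance.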
Jrender: An efficient differentiable rendering library based on Jittor
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-18, DOI: 10.1016/j.gmod.2023.101202
Hanggao Xin, Chenzhong Xiang, Wenyang Zhou, Dun Liang
Abstract: Differentiable rendering has proven to be a powerful tool for bridging 2D images and 3D models. With its aid, tasks in computer vision and computer graphics can be solved more elegantly and accurately. To address the challenges of implementing differentiable rendering methods, we present Jrender, an efficient and modular differentiable rendering library based on Jittor. Jrender supports surface rendering for 3D meshes and volume rendering for 3D volumes. Compared with previous differentiable renderers, Jrender exhibits significant improvements in both performance and rendering quality. Thanks to its modular design, rendering effects such as PBR material shading, ambient occlusion, soft shadows, global illumination, and subsurface scattering, which are not available in other differentiable rendering libraries, can easily be supported. To validate the library, we integrate Jrender into applications such as 3D object reconstruction and NeRF, showing that our implementations achieve the same quality with higher performance.
Citations: 0
Packing problems on generalised regular grid: Levels of abstraction using integer linear programming
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-07, DOI: 10.1016/j.gmod.2023.101205
Hao Hua, Benjamin Dillenburger
Abstract: Packing a designated set of shapes on a regular grid is an important class of operations-research problems that has been studied intensively for more than six decades. Representing a d-dimensional discrete grid as Z^d, we formalise the generalised regular grid (GRG) as a surjective function from Z^d to a geometric tessellation in a physical space, for example the cube coordinates of a hexagonal grid or a quasilattice. This study employs 0-1 integer linear programming (ILP) to formulate the polyomino tiling problem with adjacency constraints; rotation and reflection invariance in adjacency are considered. We separate the formal ILP from the topology and geometry of the various grids, such as Ammann-Beenker tiling, Penrose tiling, and the periodic hypercube. Using cutting-edge solvers, we reveal an intuitive correspondence between the integer program (a pattern of algebraic rules) and the computer code. Models of packing problems in the GRG have wide applications in production systems, facility layout planning, and architectural design. Two applications in planning high-rise residential apartments are illustrated.
Citations: 0
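The 0-1 formulation behind such packing problems assigns a binary variable x_p to every admissible shape placement p and requires each grid cell to be covered exactly once. The toy sketch below enumerates domino placements on a small grid and counts exact covers by brute force instead of calling an ILP solver; all names are illustrative:

```python
from itertools import product

def domino_placements(w, h):
    """Enumerate all horizontal/vertical domino placements on a w x h grid.
    Each placement (a frozenset of covered cells) corresponds to one
    0-1 variable x_p of the ILP formulation."""
    placements = []
    for x in range(w):
        for y in range(h):
            if x + 1 < w:
                placements.append(frozenset({(x, y), (x + 1, y)}))
            if y + 1 < h:
                placements.append(frozenset({(x, y), (x, y + 1)}))
    cells = {(x, y) for x in range(w) for y in range(h)}
    return cells, placements

def count_tilings(w, h):
    """Count exact covers: assignments x_p in {0,1} such that
    sum_{p covers c} x_p == 1 for every cell c. Brute force over all
    2^n assignments, so only usable for toy instances."""
    cells, placements = domino_placements(w, h)
    count = 0
    for bits in product([0, 1], repeat=len(placements)):
        covered = [c for p, b in zip(placements, bits) if b for c in p]
        if len(covered) == len(cells) and set(covered) == cells:
            count += 1
    return count
```

A real instance of the paper's scale would hand the same variables and covering constraints to an ILP solver rather than enumerate assignments.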
RFMNet: Robust Deep Functional Maps for unsupervised non-rigid shape correspondence
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-01, DOI: 10.1016/j.gmod.2023.101189
Ling Hu, Qinsong Li, Shengjun Liu, Dong-Ming Yan, Haojun Xu, Xinru Liu
Abstract: In traditional deep functional maps for non-rigid shape correspondence, estimating a functional map that includes high-frequency information requires enough linearly independent features for the least-squares solve, a condition that is often violated in practice, especially early in training, or must be repaired with costly post-processing such as ZoomOut. In this paper, we propose RFMNet (Robust Deep Functional Map Networks), which jointly considers training stability and richer geometric shape features than previous works. We first produce a pointwise map via optimal transport and then convert it to an initial functional map. This mechanism relaxes the requirements on the descriptors and avoids the training instabilities caused by the least-squares solver. Benefiting from this strategy, we integrate a state-of-the-art geometric regularization that further optimizes, and substantially filters, the initial functional map. We show that our functional map computation module yields more stable training, even when the functional map encodes high-frequency information, and faster convergence. Considering both the pointwise and the functional maps, we present an unsupervised loss that penalizes the correspondence distortion of delta functions between shapes. To capture discretization-resistant and orientation-aware shape features, we use DiffusionNet as the feature extractor. Experimental results demonstrate clear superiority in correspondence quality and in generalization across shape discretizations and datasets compared to state-of-the-art learning methods.
Citations: 0
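The conversion step the abstract describes, from a (soft) pointwise map P to a functional map, is in basis form C = Phi_tgt^+ P Phi_src. A minimal numpy sketch, with a random orthonormal matrix standing in for a Laplace-Beltrami eigenbasis:

```python
import numpy as np

def pointwise_to_functional(P, Phi_src, Phi_tgt):
    """Convert a (soft) pointwise map matrix P -- rows indexed by target
    vertices, columns by source vertices -- into a functional map
    C = Phi_tgt^+ @ P @ Phi_src, which sends spectral coefficients in
    the source basis to coefficients in the target basis."""
    return np.linalg.pinv(Phi_tgt) @ P @ Phi_src

# Toy usage: with an identity pointwise map and a shared orthonormal
# basis (random stand-in for eigenfunctions), C is the k x k identity.
rng = np.random.default_rng(0)
Phi, _ = np.linalg.qr(rng.standard_normal((50, 6)))  # 50 vertices, 6 basis functions
C = pointwise_to_functional(np.eye(50), Phi, Phi)
```

In the actual pipeline P would come from an optimal-transport (e.g. Sinkhorn) matching of learned features rather than be given.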
MixNet: Mix different networks for learning 3D implicit representations
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-01, DOI: 10.1016/j.gmod.2023.101190
Bowen Lyu, Li-Yong Shen, Chun-Ming Yuan
Abstract: We introduce MixNet, a neural network for learning implicit representations of subtle 3D models with large smooth areas and exact shape details, expressed as an interpolation of two different implicit functions. Our network takes a point cloud as input and uses a conventional MLP network and a SIREN network to predict two implicit fields. A learnable interpolation function combines the implicit values of the two networks, achieving the respective advantages of each. The network is self-supervised with only a reconstruction loss, yet produces faithful 3D reconstructions with smooth planes, correct details, and a plausible spatial partition without any ground-truth segmentation. We evaluate our method on ABC, the largest and most diverse CAD dataset, and on typical shapes, measuring geometric correctness and surface smoothness, and demonstrate superiority over current alternatives for shape reconstruction.
Citations: 0
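The interpolation of a conventional MLP field and a SIREN field can be sketched as a per-point gated blend. The toy forward pass below assumes single-hidden-layer branches and a sigmoid gate; that is our simplification for illustration, not MixNet's exact architecture:

```python
import numpy as np

def siren_layer(x, W, b, w0=30.0):
    """One SIREN layer: sinusoidal activation captures high-frequency detail."""
    return np.sin(w0 * (x @ W + b))

def relu_layer(x, W, b):
    """Conventional MLP layer: ReLU favors smooth, low-frequency fields."""
    return np.maximum(x @ W + b, 0.0)

def mixed_implicit(x, params, alpha_params):
    """Blend two implicit fields with a learnable per-point weight
    alpha(x) in (0, 1), here a sigmoid of a linear head. Sketch of the
    interpolation idea; the real network is deeper and trained end to end."""
    (Wm1, bm1, wm2, bm2), (Ws1, bs1, ws2, bs2) = params
    f_mlp = relu_layer(x, Wm1, bm1) @ wm2 + bm2     # smooth branch
    f_siren = siren_layer(x, Ws1, bs1) @ ws2 + bs2  # detail branch
    Wa, ba = alpha_params
    alpha = 1.0 / (1.0 + np.exp(-(x @ Wa + ba)))    # learnable gate
    return alpha * f_mlp + (1.0 - alpha) * f_siren
```

Because alpha is itself a function of the query point, the gate can assign smooth planar regions to the MLP branch and detailed regions to the SIREN branch, which is the "plausible spatial partition" the abstract mentions.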
Fast progressive polygonal approximations for online strokes
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-01, DOI: 10.1016/j.gmod.2023.101200
Mohammad Tanvir Parvez
Abstract: This paper presents a fast, progressive polygonal approximation algorithm for online strokes. A stroke is defined as the sequence of points between a pen-down and a pen-up. The proposed method generates polygonal approximations progressively as the user inputs the stroke, making it suitable for real-time shape modeling and retrieval. The number of operations is bounded by O(n), where n is the number of points in a stroke. Detailed experimental results show that the proposed method is not only fast but also accurate compared with other reported algorithms.
Citations: 1
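A progressive scheme of this kind can be illustrated by a single-pass scan that emits a new polygon vertex whenever the incoming point deviates too far from the current segment. This is a generic O(n) sketch of the idea, not the paper's exact algorithm:

```python
import numpy as np

def progressive_approx(points, tol=2.0):
    """Single-pass polygonal approximation of an online stroke: emit a
    vertex whenever the perpendicular distance of the current point from
    the anchor-to-next-point segment exceeds tol. O(n) per stroke;
    an illustrative sketch, not the paper's method."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts.tolist()
    poly = [pts[0]]
    anchor = pts[0]
    for i in range(1, len(pts) - 1):
        seg = pts[i + 1] - anchor
        norm = np.hypot(seg[0], seg[1])
        if norm == 0:
            continue
        v = pts[i] - anchor
        # perpendicular distance of pts[i] from the line anchor -> pts[i+1]
        d = abs(seg[0] * v[1] - seg[1] * v[0]) / norm
        if d > tol:
            poly.append(pts[i])       # deviation too large: new vertex
            anchor = pts[i]
    poly.append(pts[-1])
    return [p.tolist() for p in poly]
```

Each incoming point is examined once against the current anchor, which is what keeps the pass linear and lets the approximation grow while the stroke is still being drawn.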
Unified shape and appearance reconstruction with joint camera parameter refinement
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-01, DOI: 10.1016/j.gmod.2023.101193
Julian Kaltheuner, Patrick Stotko, Reinhard Klein
Abstract: In this paper, we present an inverse rendering method for the simple reconstruction of the shape and appearance of real-world objects from only roughly calibrated RGB images captured under collocated point-light illumination. We gradually reconstruct the lower-frequency geometry using automatically generated occupancy mask images, based on a visual-hull initialization of the mesh, to infer the object topology, together with a smoothness-preconditioned optimization. By combining this geometry estimation with learning-based SVBRDF parameter inference as well as intrinsic and extrinsic camera parameter refinement in a joint, unified formulation, our method reconstructs shape and an isotropic SVBRDF from fewer input images than previous methods. Unlike other works, we also estimate normal maps as part of the SVBRDF to capture and represent higher-frequency geometric details in a compact way. Furthermore, by regularizing the appearance estimation with a GAN-based SVBRDF generator, we meaningfully limit the solution space. Together, this yields a robust automatic reconstruction algorithm for shape and appearance. We evaluated the algorithm on synthetic as well as real-world data and demonstrate that it reconstructs complex objects with high-fidelity reflection properties in a robust way, even in the presence of imperfect camera parameter data.
Citations: 0
Unsupervised learning of style-aware facial animation from real acting performances
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-01, DOI: 10.1016/j.gmod.2023.101199
Wolfgang Paier, Anna Hilsmann, Peter Eisert
Abstract: This paper presents a novel approach for text- and speech-driven animation of a photo-realistic head model based on blend-shape geometry, dynamic textures, and neural rendering. Training a VAE for geometry and texture yields a parametric model for accurate capture and realistic synthesis of facial expressions from a latent feature vector. Our animation method is based on a conditional CNN that transforms text or speech into a sequence of animation parameters. In contrast to previous approaches, our animation model learns to disentangle and synthesize different acting styles in an unsupervised manner, requiring only phonetic labels that describe the content of the training sequences. For realistic real-time rendering, we train a U-Net that refines rasterization-based renderings by computing improved pixel colors and a foreground matte. We compare our framework qualitatively and quantitatively against recent methods for head modeling and facial animation, and evaluate the perceived rendering and animation quality in a user study, which indicates large improvements over state-of-the-art approaches.
Citations: 0
Joint data and feature augmentation for self-supervised representation learning on point clouds
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-01, DOI: 10.1016/j.gmod.2023.101188
Zhuheng Lu, Yuewei Dai, Weiqing Li, Zhiyong Su
Abstract: To avoid exhausting manual annotation, self-supervised representation learning from unlabeled point clouds has drawn much attention, centered especially on augmentation-based contrastive methods. However, specific augmentations rarely transfer to high-level tasks on different datasets, and augmentations on point clouds may also change the underlying semantics. To address these issues, we propose a simple but efficient augmentation-fusion contrastive learning framework that combines data augmentations in Euclidean space with feature augmentations in feature space. In particular, we propose a data augmentation method based on sampling and graph generation, together with a data augmentation network that establishes a correspondence of representations by maximizing consistency between augmented graph pairs. We further design a feature augmentation network that encourages the model to learn representations invariant to perturbations through an encoder perturbation. We conduct extensive object classification and object part segmentation experiments to validate the transferability of the proposed framework. Experimental results demonstrate that the framework effectively learns point cloud representations in a self-supervised manner and yields state-of-the-art results in the community. The source code is publicly available at: https://github.com/VCG-NJUST/AFSRL.
Citations: 1
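Contrastive pipelines of this kind build a positive pair by applying two independent stochastic augmentations to the same cloud. The sketch below uses generic rotation, scaling, and jitter in Euclidean space; the paper's method instead builds graph-based views, so this is illustrative only:

```python
import numpy as np

def augment_view(points, rng, jitter_sigma=0.01, scale_range=(0.8, 1.2)):
    """One stochastic Euclidean augmentation of an (N, 3) point cloud:
    random rotation about the z axis, uniform scaling, and Gaussian
    jitter. Two independent calls on the same cloud give the positive
    pair consumed by a contrastive loss. Generic augmentations for
    illustration; the paper builds graph-based views instead."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])       # rotation about the up axis
    scale = rng.uniform(*scale_range)
    noise = rng.normal(0.0, jitter_sigma, size=points.shape)
    return scale * points @ R.T + noise

# positive pair: two independent views of the same cloud
rng = np.random.default_rng(42)
cloud = rng.standard_normal((100, 3))
view_a, view_b = augment_view(cloud, rng), augment_view(cloud, rng)
```

The contrastive objective then pulls the encodings of `view_a` and `view_b` together while pushing apart encodings of views from different clouds.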
Realistic simulation of fruit mildew diseases: Skin discoloration, fungus growth and volume shrinkage
IF 1.7 · CAS Q4 · Computer Science
Graphical Models, Pub Date: 2023-10-01, DOI: 10.1016/j.gmod.2023.101194
Yixin Xu, Shiguang Liu
Abstract: Simulation of time-varying effects plays a critical role in computer graphics, and fruit diseases are typical time-varying phenomena. Due to their biological complexity, existing methods fail to represent the biodiversity and biological laws of the symptoms. To this end, this paper proposes a biology-aware, physically based framework for realistic simulation of fruit mildew diseases; the simulated symptoms include skin discoloration, fungus growth, and volume shrinkage. Specifically, we combine a zero-order kinetic model with a reaction-diffusion model to represent the complex skin discoloration tied to the skin's biological characteristics. To reproduce 3D mildew growth, we employ Poisson-disk sampling and propose a template-model instancing method in which hyphal template models can be changed flexibly to capture fungal biological diversity. To model the fruit's biological structure, we fill the fruit mesh interior with particles in a biologically based arrangement, and on top of this structure we propose a turgor-pressure and Lennard-Jones force-based adaptive mass-spring system that simulates fruit shrinkage in a biological manner. Experiments verify that the proposed framework effectively simulates mildew diseases, including gray mold, powdery mildew, and downy mildew. Our results are visually compelling and close to the ground truth, and both quantitative and qualitative experiments validate the proposed method.
Citations: 0
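Reaction-diffusion models of the kind mentioned above are commonly integrated with explicit Euler steps on a grid. The Gray-Scott system below is a standard example of that class, with generic textbook parameters rather than the paper's, which couples such a term with zero-order kinetics on the fruit skin:

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion
    system on a periodic 2D grid. Illustrates the model class only;
    parameter values are generic, not taken from the paper."""
    def lap(A):
        # 5-point Laplacian with wrap-around boundaries
        return (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
                np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4.0 * A)
    uvv = U * V * V                       # reaction term U + 2V -> 3V
    U_new = U + dt * (Du * lap(U) - uvv + f * (1.0 - U))
    V_new = V + dt * (Dv * lap(V) + uvv - (f + k) * V)
    return U_new, V_new
```

Iterating this step from a small seeded patch produces the spreading, blotchy concentration patterns that, mapped to skin color, give discoloration fronts like those described in the abstract.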