Latest Publications in Computer Graphics Forum

A Hybrid Parametrization Method for B-Spline Curve Interpolation via Supervised Learning
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15240
Tianyu Song, Tong Shen, Linlin Ge, Jieqing Feng
{"title":"A Hybrid Parametrization Method for B-Spline Curve Interpolation via Supervised Learning","authors":"Tianyu Song,&nbsp;Tong Shen,&nbsp;Linlin Ge,&nbsp;Jieqing Feng","doi":"10.1111/cgf.15240","DOIUrl":"https://doi.org/10.1111/cgf.15240","url":null,"abstract":"<p>B-spline curve interpolation is a fundamental algorithm in computer-aided geometric design. Determining suitable parameters based on data points distribution has always been an important issue for high-quality interpolation curves generation. Various parameterization methods have been proposed. However, there is no universally satisfactory method that is applicable to data points with diverse distributions. In this work, a hybrid parametrization method is proposed to overcome the problem. For a given set of data points, a classifier via supervised learning identifies an optimal local parameterization method based on the local geometric distribution of four adjacent data points, and the optimal local parameters are computed using the selected optimal local parameterization method for the four adjacent data points. Then a merging method is employed to calculate global parameters which align closely with the local parameters. Experiments demonstrate that the proposed hybrid parameterization method well adapts the different distributions of data points statistically. The proposed method has a flexible and scalable framework, which can includes current and potential new parameterization methods as its components.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
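The abstract describes a pipeline of classifier-selected local parameterization followed by merging. As a rough illustration of the local step, the sketch below computes three classic candidate parameterizations (uniform, chord-length, centripetal) for each window of four adjacent points and lets a classifier pick among them; the classifier interface, this candidate set, and the omitted merging step are assumptions for illustration, not the paper's actual components.

```python
import numpy as np

def uniform_params(pts):
    """Uniform parameterization: equally spaced parameter values."""
    return np.linspace(0.0, 1.0, len(pts))

def chord_length_params(pts):
    """Chord-length parameterization: parameters proportional to chord lengths."""
    d = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(d)))
    return t / t[-1]

def centripetal_params(pts):
    """Centripetal parameterization: square roots of chord lengths."""
    d = np.sqrt(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    t = np.concatenate(([0.0], np.cumsum(d)))
    return t / t[-1]

LOCAL_METHODS = [uniform_params, chord_length_params, centripetal_params]

def hybrid_local_params(points, classify_window):
    """For each window of four adjacent points, let a classifier choose a
    local parameterization method and compute the local parameters.
    `classify_window(pts) -> int` is a stand-in for the learned classifier."""
    windows = []
    for i in range(len(points) - 3):
        w = points[i:i + 4]
        method = LOCAL_METHODS[classify_window(w)]
        windows.append(method(w))
    return windows  # local parameters, to be merged into a global sequence

# Example with a dummy classifier that always picks the centripetal method
pts = np.array([[0, 0], [1, 0.2], [1.5, 2.0], [3, 2.1], [4, 4]], dtype=float)
local = hybrid_local_params(pts, classify_window=lambda w: 2)
```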
GLTScene: Global-to-Local Transformers for Indoor Scene Synthesis with General Room Boundaries
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15236
Yijie Li, Pengfei Xu, Junquan Ren, Zefan Shao, Hui Huang
{"title":"GLTScene: Global-to-Local Transformers for Indoor Scene Synthesis with General Room Boundaries","authors":"Yijie Li,&nbsp;Pengfei Xu,&nbsp;Junquan Ren,&nbsp;Zefan Shao,&nbsp;Hui Huang","doi":"10.1111/cgf.15236","DOIUrl":"https://doi.org/10.1111/cgf.15236","url":null,"abstract":"<p>We present GLTScene, a novel data-driven method for high-quality furniture layout synthesis with general room boundaries as conditions. This task is challenging since the existing indoor scene datasets do not cover the variety of general room boundaries. We incorporate the interior design principles with learning techniques and adopt a global-to-local strategy for this task. Globally, we learn the placement of furniture objects from the datasets without considering their alignment. Locally, we learn the alignment of furniture objects relative to their nearest walls, according to the alignment principle in interior design. The global placement and local alignment of furniture objects are achieved by two transformers respectively. We compare our method with several baselines in the task of furniture layout synthesis with general room boundaries as conditions. Our method outperforms these baselines both quantitatively and qualitatively. We also demonstrate that our method can achieve other conditional layout synthesis tasks, including object-level conditional generation and attribute-level conditional generation. The code is publicly available at https://github.com/WWalter-Lee/GLTScene.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
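The local stage described above aligns each furniture object with its nearest wall. The snippet below is a minimal geometric stand-in for that idea: given a room boundary polygon, it finds the wall segment nearest to an object and returns that wall's orientation. The transformers that predict placements and this exact alignment rule are not specified in the abstract, so treat this purely as an assumed illustration.

```python
import numpy as np

def nearest_wall_alignment(position, boundary):
    """Given an object position (x, y) and a room boundary as an ordered list
    of 2D vertices, find the nearest wall segment and return its direction
    angle, which can be used to orient the object against that wall."""
    best_d, best_angle = np.inf, 0.0
    n = len(boundary)
    for i in range(n):
        a, b = np.asarray(boundary[i], float), np.asarray(boundary[(i + 1) % n], float)
        ab = b - a
        t = np.clip(np.dot(position - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        closest = a + t * ab                       # closest point on this wall segment
        d = np.linalg.norm(position - closest)
        if d < best_d:
            best_d = d
            best_angle = np.arctan2(ab[1], ab[0])  # wall direction
    return best_angle, best_d

# Example: an L-shaped room (a "general" boundary) and a sofa position
room = [(0, 0), (6, 0), (6, 4), (3, 4), (3, 6), (0, 6)]
angle, dist = nearest_wall_alignment(np.array([1.0, 5.0]), room)
```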
CoupNeRF: Property-aware Neural Radiance Fields for Multi-Material Coupled Scenario Reconstruction
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15208
Jin Li, Yang Gao, Wenfeng Song, Yacong Li, Shuai Li, Aimin Hao, Hong Qin
{"title":"CoupNeRF: Property-aware Neural Radiance Fields for Multi-Material Coupled Scenario Reconstruction","authors":"Jin Li,&nbsp;Yang Gao,&nbsp;Wenfeng Song,&nbsp;Yacong Li,&nbsp;Shuai Li,&nbsp;Aimin Hao,&nbsp;Hong Qin","doi":"10.1111/cgf.15208","DOIUrl":"https://doi.org/10.1111/cgf.15208","url":null,"abstract":"<p>Neural Radiance Fields (NeRFs) have achieved significant recognition for their proficiency in scene reconstruction and rendering by utilizing neural networks to depict intricate volumetric environments. Despite considerable research dedicated to reconstructing physical scenes, rare works succeed in challenging scenarios involving dynamic, multi-material objects. To alleviate, we introduce CoupNeRF, an efficient neural network architecture that is aware of multiple material properties. This architecture combines physically grounded continuum mechanics with NeRF, facilitating the identification of motion systems across a wide range of physical coupling scenarios. We first reconstruct specific-material of objects within 3D physical fields to learn material parameters. Then, we develop a method to model the neighbouring particles, enhancing the learning process specifically in regions where material transitions occur. The effectiveness of CoupNeRF is demonstrated through extensive experiments, showcasing its proficiency in accurately coupling and identifying the behavior of complex physical scenes that span multiple physics domains.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DSGI-Net: Density-based Selective Grouping Point Cloud Learning Network for Indoor Scene
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15218
Xin Wen, Yao Duan, Kai Xu, Chenyang Zhu
{"title":"DSGI-Net: Density-based Selective Grouping Point Cloud Learning Network for Indoor Scene","authors":"Xin Wen,&nbsp;Yao Duan,&nbsp;Kai Xu,&nbsp;Chenyang Zhu","doi":"10.1111/cgf.15218","DOIUrl":"https://doi.org/10.1111/cgf.15218","url":null,"abstract":"<p>Indoor scene point clouds exhibit diverse distributions and varying levels of sparsity, characterized by more intricate geometry and occlusion compared to outdoor scenes or individual objects. Despite recent advancements in 3D point cloud analysis introducing various network architectures, there remains a lack of frameworks tailored to the unique attributes of indoor scenarios. To address this, we propose DSGI-Net, a novel indoor scene point cloud learning network that can be integrated into existing models. The key innovation of this work is selectively grouping more informative neighbor points in sparse regions and promoting semantic consistency of the local area where different instances are in proximity but belong to distinct categories. Furthermore, our method encodes both semantic and spatial relationships between points in local regions to reduce the loss of local geometric details. Extensive experiments on the ScanNetv2, SUN RGB-D, and S3DIS indoor scene benchmarks demonstrate that our method is straightforward yet effective.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
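The core idea above is to group neighbors differently in sparse versus dense regions. The sketch below shows one plausible reading of that: estimate a local density from k-nearest-neighbor distances and widen the neighborhood where the cloud is sparse. The density proxy, neighbor counts, and percentile threshold are made-up defaults, not the paper's formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_aware_groups(points, k_dense=16, k_sparse=32, density_percentile=30):
    """Group neighbors per point, taking a larger neighborhood in sparse regions.
    points: (N, 3) array. Returns a list of neighbor-index arrays and the sparse mask."""
    tree = cKDTree(points)
    # Local density proxy: inverse of the mean distance to the k nearest neighbors
    # (k_dense + 1 because the query point itself is returned at distance 0).
    d, _ = tree.query(points, k=k_dense + 1)
    density = 1.0 / (d[:, 1:].mean(axis=1) + 1e-8)
    sparse_mask = density < np.percentile(density, density_percentile)

    groups = []
    for i, p in enumerate(points):
        k = k_sparse if sparse_mask[i] else k_dense  # widen the group where sparse
        _, idx = tree.query(p, k=k)
        groups.append(idx)
    return groups, sparse_mask

# Example on a random cloud
pts = np.random.rand(2048, 3)
groups, sparse = density_aware_groups(pts)
```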
Evolutive 3D Urban Data Representation through Timeline Design Space
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15237
C. Le Bihan Gautier, J. Delanoy, G. Gesquière
{"title":"Evolutive 3D Urban Data Representation through Timeline Design Space","authors":"C. Le Bihan Gautier,&nbsp;J. Delanoy,&nbsp;G. Gesquière","doi":"10.1111/cgf.15237","DOIUrl":"https://doi.org/10.1111/cgf.15237","url":null,"abstract":"<p>Cities are constantly changing to adapt to new societal and environmental challenges. Understanding their evolution is thus essential to make informed decisions about their future. To capture these changes, cities are increasingly offering digital 3D snapshots of their territory over time. However, existing tools to visualise these data typically represent the city at a specific point in time, limiting a comprehensive analysis of its evolution. In this paper, we propose a new method for simultaneously visualising different versions of the city in a 3D space. We integrate the different versions of the city along a new way of 3D timeline that can take different shapes depending on the needs of the user and the dataset being visualised. We propose four different shapes of timelines and three ways to place the versions along it. Our method places the versions such that there is no visual overlap for the user by varying the parameters of the timelines, and offer options to ease the understanding of the scene by changing the orientation or scale of the versions. We evaluate our method on different datasets to demonstrate the advantages and limitations of the different shapes of timeline and provide recommendations so as to which shape to chose.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
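To give a concrete sense of placing versions along a timeline without visual overlap, the sketch below lays successive city snapshots along a straight axis, offsetting each by its neighbours' footprints plus a gap. This straight-line stand-in is an assumption; the paper's four timeline shapes and three placement strategies are richer than this.

```python
import numpy as np

def place_versions_on_line(extents, gap=5.0):
    """Place successive city versions along a straight 3D timeline so that
    their footprints never overlap. `extents` gives each version's footprint
    length along the timeline; each centre is offset by the half-extents of
    its neighbour plus a gap."""
    positions, cursor = [], 0.0
    for i, extent in enumerate(extents):
        if i > 0:
            cursor += extents[i - 1] / 2 + gap + extent / 2
        positions.append(np.array([cursor, 0.0, 0.0]))
    return positions

# Example: four snapshots of a district with growing footprints
offsets = place_versions_on_line([120.0, 125.0, 130.0, 140.0], gap=20.0)
```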
Frequency-Aware Facial Image Shadow Removal through Skin Color and Texture Learning
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15220
Ling Zhang, Wenyang Xie, Chunxia Xiao
{"title":"Frequency-Aware Facial Image Shadow Removal through Skin Color and Texture Learning","authors":"Ling Zhang,&nbsp;Wenyang Xie,&nbsp;Chunxia Xiao","doi":"10.1111/cgf.15220","DOIUrl":"https://doi.org/10.1111/cgf.15220","url":null,"abstract":"<p>Existing facial image shadow removal methods predominantly rely on pre-extracted facial features. However, these methods often fail to capitalize on the full potential of these features, resorting to simplified utilization. Furthermore, they tend to overlook the importance of low-frequency information during the extraction of prior features, which can be easily compromised by noises. In our work, we propose a frequency-aware shadow removal network (FSRNet) for facial image shadow removal, which utilizes the skin color and texture information in the face to help recover illumination in shadow regions. Our FSRNet uses a frequency-domain image decomposition network to extract the low-frequency skin color map and high-frequency texture map from the face images, and applies a color-texture guided shadow removal network to produce final shadow removal result. Concretely, the designed fourier sparse attention block (FSABlock) can transform images from the spatial domain to the frequency domain and help the network focus on the key information. We also introduce a skin color fusion module (CFModule) and a texture fusion module (TFModule) to enhance the understanding and utilization of color and texture features, promoting high-quality result without color distortion and detail blurring. Extensive experiments demonstrate the superiority of the proposed method. The code is available at https://github.com/laoxie521/FSRNet.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
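The decomposition into a low-frequency skin-color map and a high-frequency texture map can be pictured with a plain FFT low-pass/high-pass split, as sketched below. FSRNet's actual decomposition is learned (via the FSABlock), so the hard radial cutoff here is only an illustrative assumption.

```python
import numpy as np

def frequency_split(image, cutoff=0.1):
    """Split a grayscale image into low- and high-frequency components with a
    hard radial FFT mask. `cutoff` is a normalized frequency radius; both the
    mask shape and the cutoff value are arbitrary illustrative choices."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)
    lowpass = (radius <= cutoff).astype(float)

    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * lowpass)))          # smooth color / illumination
    high = np.real(np.fft.ifft2(np.fft.ifftshift(f * (1.0 - lowpass)))) # fine texture
    return low, high

# Example on a random "face" image
low, high = frequency_split(np.random.rand(128, 128))
```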
Spatially and Temporally Optimized Audio-Driven Talking Face Generation
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15228
Biao Dong, Bo-Yao Ma, Lei Zhang
{"title":"Spatially and Temporally Optimized Audio-Driven Talking Face Generation","authors":"Biao Dong,&nbsp;Bo-Yao Ma,&nbsp;Lei Zhang","doi":"10.1111/cgf.15228","DOIUrl":"https://doi.org/10.1111/cgf.15228","url":null,"abstract":"<p>Audio-driven talking face generation is essentially a cross-modal mapping from audio to video frames. The main challenge lies in the intricate one-to-many mapping, which affects lip sync accuracy. And the loss of facial details during image reconstruction often results in visual artifacts in the generated video. To overcome these challenges, this paper proposes to enhance the quality of generated talking faces with a new spatio-temporal consistency. Specifically, the temporal consistency is achieved through consecutive frames of the each phoneme, which form temporal modules that exhibit similar lip appearance changes. This allows for adaptive adjustment in the lip movement for accurate sync. The spatial consistency pertains to the uniform distribution of textures within local regions, which form spatial modules and regulate the texture distribution in the generator. This yields fine details in the reconstructed facial images. Extensive experiments show that our method can generate more natural talking faces than previous state-of-the-art methods in both accurate lip sync and realistic facial details.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FastFlow: GPU Acceleration of Flow and Depression Routing for Landscape Simulation
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15243
Aryamaan Jain, Bernhard Kerbl, James Gain, Brandon Finley, Guillaume Cordonnier
{"title":"FastFlow: GPU Acceleration of Flow and Depression Routing for Landscape Simulation","authors":"Aryamaan Jain,&nbsp;Bernhard Kerbl,&nbsp;James Gain,&nbsp;Brandon Finley,&nbsp;Guillaume Cordonnier","doi":"10.1111/cgf.15243","DOIUrl":"https://doi.org/10.1111/cgf.15243","url":null,"abstract":"<p>Terrain analysis plays an important role in computer graphics, hydrology and geomorphology. In particular, analyzing the path of material flow over a terrain with consideration of local depressions is a precursor to many further tasks in erosion, river formation, and plant ecosystem simulation. For example, fluvial erosion simulation used in terrain modeling computes water discharge to repeatedly locate erosion channels for soil removal and transport. Despite its significance, traditional methods face performance constraints, limiting their broader applicability.</p><p>In this paper, we propose a novel GPU flow routing algorithm that computes the water discharge in 𝒪(<i>log</i> n) iterations for a terrain with n vertices (assuming n processors). We also provide a depression routing algorithm to route the water out of local minima formed by depressions in the terrain, which converges in 𝒪(<i>log</i><sup>2</sup> n) iterations. Our implementation of these algorithms leads to a 5× speedup for flow routing and 34 × to 52 × speedup for depression routing compared to previous work on a 1024<sup>2</sup> terrain, enabling interactive control of terrain simulation.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
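For context, the snippet below is a naive reference implementation of flow (discharge) accumulation over a receiver graph: each pass pushes water one cell downstream, so the number of passes equals the longest flow path. That serial dependence is precisely what an O(log n) parallel scheme like the paper's removes; treat this as a baseline for intuition, not as the paper's GPU algorithm.

```python
import numpy as np

def flow_accumulation_naive(receiver, rain=None):
    """Reference flow accumulation: every cell i drains to receiver[i], with
    receiver[i] == i at outlets. The receiver graph must be acyclic (which is
    what depression routing guarantees). Returns the total water passing
    through each cell."""
    n = len(receiver)
    discharge = np.ones(n) if rain is None else rain.astype(float).copy()
    moving = discharge.copy()                 # water still travelling downstream
    interior = receiver != np.arange(n)       # outlets keep their water
    while True:
        nxt = np.zeros(n)
        np.add.at(nxt, receiver[interior], moving[interior])  # push one cell downstream
        if not nxt.any():
            break
        discharge += nxt
        moving = nxt
    return discharge

# Example: a 5-cell chain 0 -> 1 -> 2 -> 3 -> 4 (cell 4 is the outlet)
rcv = np.array([1, 2, 3, 4, 4])
print(flow_accumulation_naive(rcv))           # [1. 2. 3. 4. 5.]
```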
Point-AGM: Attention Guided Masked Auto-Encoder for Joint Self-supervised Learning on Point Clouds
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15219
Jie Liu, Mengna Yang, Yu Tian, Yancui Li, Da Song, Kang Li, Xin Cao
{"title":"Point-AGM : Attention Guided Masked Auto-Encoder for Joint Self-supervised Learning on Point Clouds","authors":"Jie Liu,&nbsp;Mengna Yang,&nbsp;Yu Tian,&nbsp;Yancui Li,&nbsp;Da Song,&nbsp;Kang Li,&nbsp;Xin Cao","doi":"10.1111/cgf.15219","DOIUrl":"https://doi.org/10.1111/cgf.15219","url":null,"abstract":"<p>Masked point modeling (MPM) has gained considerable attention in self-supervised learning for 3D point clouds. While existing self-supervised methods have progressed in learning from point clouds, we aim to address their limitation of capturing high-level semantics through our novel attention-guided masking framework, Point-AGM. Our approach introduces an attention-guided masking mechanism that selectively masks low-attended regions, enabling the model to concentrate on reconstructing more critical areas and addressing the limitations of random and block masking strategies. Furthermore, we exploit the inherent advantages of the teacher-student network to enable cross-view contrastive learning on augmented dual-view point clouds, enforcing consistency between complete and partially masked views of the same 3D shape in the feature space. This unified framework leverages the complementary strengths of masked point modeling, attention-guided masking, and contrastive learning for robust representation learning. Extensive experiments have shown the effectiveness of our approach and its well-transferable performance across various downstream tasks. Specifically, our model achieves an accuracy of 94.12% on ModelNet40 and 87.16% on the PB-T50-RS setting of ScanObjectNN, outperforming other self-supervised learning methods.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
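One straightforward reading of attention-guided masking is to rank point patches by an attention score and mask the least-attended ones, as sketched below. The scoring, mask ratio, and selection details in Point-AGM may differ, so treat this as an assumed illustration of the general mechanism.

```python
import torch

def attention_guided_mask(attn_scores, mask_ratio=0.6):
    """Build a boolean mask over point patches from attention scores, masking
    the least-attended patches so the decoder must reconstruct them.
    attn_scores: (B, N) per-patch attention, e.g. averaged over heads.
    Returns a (B, N) mask where True means 'masked'."""
    B, N = attn_scores.shape
    n_mask = int(N * mask_ratio)
    # Indices of the n_mask lowest-attended patches per sample.
    idx = attn_scores.argsort(dim=1)[:, :n_mask]
    mask = torch.zeros(B, N, dtype=torch.bool, device=attn_scores.device)
    mask.scatter_(1, idx, True)
    return mask

# Example with fake attention scores over 64 patches
scores = torch.rand(2, 64)
mask = attention_guided_mask(scores, mask_ratio=0.6)
```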
SOD-diffusion: Salient Object Detection via Diffusion-Based Image Generators
IF 2.7, CAS Q4, Computer Science
Computer Graphics Forum Pub Date: 2024-10-24 DOI: 10.1111/cgf.15251
Shuo Zhang, Jiaming Huang, Shizhe Chen, Yan Wu, Tao Hu, Jing Liu
{"title":"SOD-diffusion: Salient Object Detection via Diffusion-Based Image Generators","authors":"Shuo Zhang,&nbsp;Jiaming Huang,&nbsp;Shizhe Chen,&nbsp;Yan Wu,&nbsp;Tao Hu,&nbsp;Jing Liu","doi":"10.1111/cgf.15251","DOIUrl":"https://doi.org/10.1111/cgf.15251","url":null,"abstract":"<p>Salient Object Detection (SOD) is a challenging task that aims to precisely identify and segment the salient objects. However, existing SOD methods still face challenges in making explicit predictions near the edges and often lack end-to-end training capabilities. To alleviate these problems, we propose SOD-diffusion, a novel framework that formulates salient object detection as a denoising diffusion process from noisy masks to object masks. Specifically, object masks diffuse from ground-truth masks to random distribution in latent space, and the model learns to reverse this noising process to reconstruct object masks. To enhance the denoising learning process, we design an attention feature interaction module (AFIM) and a specific fine-tuning protocol to integrate conditional semantic features from the input image with diffusion noise embedding. Extensive experiments on five widely used SOD benchmark datasets demonstrate that our proposed SOD-diffusion achieves favorable performance compared to previous well-established methods. Furthermore, leveraging the outstanding generalization capability of SOD-diffusion, we applied it to publicly available images, generating high-quality masks that serve as an additional SOD benchmark testset.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142665134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
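Formulating SOD as mask denoising builds on the standard DDPM forward process, shown below for a batch of (latent) masks. The conditioning on image features through the AFIM and the fine-tuning protocol described in the abstract are not reproduced here; this is only the generic noising step such a formulation rests on.

```python
import torch

def ddpm_forward_noising(mask, t, alphas_cumprod):
    """Standard DDPM forward process applied to an object mask:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps.
    Returns the noised mask and the noise (the usual training target)."""
    abar = alphas_cumprod[t].view(-1, 1, 1, 1)            # (B, 1, 1, 1)
    eps = torch.randn_like(mask)
    x_t = abar.sqrt() * mask + (1.0 - abar).sqrt() * eps
    return x_t, eps

# Example: a linear beta schedule and one noising step on binary masks in [-1, 1]
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
mask0 = torch.rand(4, 1, 64, 64).round() * 2 - 1
t = torch.randint(0, T, (4,))
x_t, eps = ddpm_forward_noising(mask0, t, alphas_cumprod)
```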