Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences — Latest Articles (IF 4.3 · CAS Tier 3, multidisciplinary)

Bayesian computation with generative diffusion models by Multilevel Monte Carlo
Luke Shaw, Abdul-Lateef Haji-Ali, Marcelo Pereyra, Konstantinos Zygalakis
DOI: 10.1098/rsta.2024.0333 · Phil. Trans. R. Soc. A 383(2299): 20240333 · Published 2025-06-19
Abstract: Generative diffusion models have recently emerged as a powerful strategy to perform stochastic sampling in Bayesian inverse problems, delivering remarkably accurate solutions for a wide range of challenging applications. However, diffusion models often require a large number of neural function evaluations per sample in order to deliver accurate posterior samples. As a result, using diffusion models as stochastic samplers for Monte Carlo integration in Bayesian computation can be highly computationally expensive, particularly in applications that require a substantial number of Monte Carlo samples for conducting uncertainty quantification analyses. This cost is especially high in large-scale inverse problems such as computational imaging, which rely on large neural networks that are expensive to evaluate. With quantitative imaging applications in mind, this paper presents a Multilevel Monte Carlo strategy that significantly reduces the cost of Bayesian computation with diffusion models. This is achieved by exploiting cost-accuracy trade-offs inherent to diffusion models to carefully couple models of different levels of accuracy in a manner that significantly reduces the overall cost of the calculation, without reducing the final accuracy. The proposed approach achieves a [Formula: see text]-to-[Formula: see text] reduction in computational cost with respect to standard techniques across three benchmark imaging problems. This article is part of the theme issue 'Generative modelling meets Bayesian inference: a new paradigm for inverse problems'.
Citations: 0
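The Multilevel Monte Carlo idea underlying this paper — a telescoping sum of coupled coarse/fine corrections that concentrates samples on cheap levels — can be illustrated on a toy problem. The sketch below is not the paper's diffusion-model estimator; it estimates E[S_T] for geometric Brownian motion with Euler-Maruyama levels of halving step size, coupling levels by reusing Brownian increments. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm_coupled(level, n_samples, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Simulate GBM by Euler-Maruyama on a fine grid (2^level steps) and,
    for level > 0, on a coupled coarse grid (2^(level-1) steps) that reuses
    the same Brownian increments. Returns fine payoffs (level 0) or the
    fine-minus-coarse correction (level > 0)."""
    nf = 2 ** level
    dtf = T / nf
    dW = rng.normal(0.0, np.sqrt(dtf), size=(n_samples, nf))
    sf = np.full(n_samples, s0)
    for k in range(nf):
        sf = sf + mu * sf * dtf + sigma * sf * dW[:, k]
    if level == 0:
        return sf
    nc = nf // 2
    dtc = T / nc
    dWc = dW[:, 0::2] + dW[:, 1::2]   # coarse increments = summed fine ones
    sc = np.full(n_samples, s0)
    for k in range(nc):
        sc = sc + mu * sc * dtc + sigma * sc * dWc[:, k]
    return sf - sc

# Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with sample counts decaying geometrically across levels.
levels, n0 = 5, 40000
estimate = sum(euler_gbm_coupled(l, max(n0 // 4 ** l, 100)).mean()
               for l in range(levels + 1))
print(estimate)  # close to exp(0.05) ≈ 1.0513
```

Because the level corrections have small variance, most of the budget is spent at the cheap level 0 — the same cost-accuracy trade-off the paper exploits between diffusion models of different accuracy.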
Conditional sampling within generative diffusion models
Zheng Zhao, Ziwei Luo, Jens Sjölund, Thomas Schön
DOI: 10.1098/rsta.2024.0329 · Phil. Trans. R. Soc. A 383(2299): 20240329 · Published 2025-06-19
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12177524/pdf/
Abstract: Generative diffusions are a powerful class of Monte Carlo samplers that leverage bridging Markov processes to approximate complex, high-dimensional distributions, such as those found in image processing and language models. Despite their success in these domains, an important open challenge remains: extending these techniques to sample from conditional distributions, as required in, for example, Bayesian inverse problems. In this paper, we present a comprehensive review of existing computational approaches to conditional sampling within generative diffusion models. Specifically, we highlight key methodologies that either utilize the joint distribution, or rely on (pre-trained) marginal distributions with explicit likelihoods, to construct conditional generative samplers. This article is part of the theme issue 'Generative modelling meets Bayesian inference: a new paradigm for inverse problems'.
Citations: 0
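The "pre-trained marginal plus explicit likelihood" family the review highlights reduces, in its simplest form, to sampling with a posterior score that is the sum of a prior score and a likelihood score. The toy sketch below (not any specific method from the review) uses unadjusted Langevin dynamics on a conjugate Gaussian model, where the closed-form prior score stands in for a learned diffusion score and the exact posterior is known.

```python
import numpy as np

rng = np.random.default_rng(1)

# A closed-form prior score stands in for a diffusion model's learned score.
prior_score = lambda x: -x                    # score of the N(0, 1) prior
y, s2 = 1.0, 1.0                              # observation y = x + N(0, s2) noise
lik_score = lambda x: (y - x) / s2            # explicit Gaussian likelihood score

# Unadjusted Langevin dynamics driven by posterior score = prior + likelihood.
x = rng.normal(size=5000)                     # 5000 independent chains
eps = 0.01
for _ in range(2000):
    x += eps * (prior_score(x) + lik_score(x)) \
         + np.sqrt(2 * eps) * rng.normal(size=x.shape)

print(x.mean(), x.var())  # posterior is N(0.5, 0.5) in this conjugate case
```

In the conjugate Gaussian case the chains concentrate on the exact posterior N(0.5, 0.5), up to an O(eps) discretization bias.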
Generative diffusion models in infinite dimensions: a survey
Giulio Franzese, Pietro Michiardi
DOI: 10.1098/rsta.2024.0322 · Phil. Trans. R. Soc. A 383(2299): 20240322 · Published 2025-06-19
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12201592/pdf/
Abstract: Diffusion models have recently emerged as a powerful class of generative models, achieving state-of-the-art performance in various domains such as image and audio synthesis. While most existing work focuses on finite-dimensional data, there is growing interest in extending diffusion models to infinite-dimensional function spaces. This survey provides a comprehensive overview of the theoretical foundations and practical applications of diffusion models in infinite dimensions. We review the necessary background on stochastic differential equations in Hilbert spaces, and then discuss different approaches to define generative models rooted in such formalism. Finally, we survey recent applications of infinite-dimensional diffusion models in areas such as generative modelling for function spaces, conditional generation of functional data and solving inverse problems. Throughout the survey, we highlight the connections between different approaches and discuss open problems and future research directions. This article is part of the theme issue 'Generative modelling meets Bayesian inference: a new paradigm for inverse problems'.
Citations: 0
Inverse evolution data augmentation for neural PDE solvers
Chaoyu Liu, Chris Budd, Carola-Bibiane Schönlieb
DOI: 10.1098/rsta.2024.0242 · Phil. Trans. R. Soc. A 383(2298): 20240242 · Published 2025-06-05
Abstract: Neural networks have emerged as promising tools for solving partial differential equations (PDEs), particularly through the application of neural operators. Training neural operators typically requires a large amount of training data to ensure accuracy and generalization. In this article, we propose a novel data augmentation method specifically designed for training neural operators on evolution equations. Our approach utilizes insights from inverse processes of these equations to efficiently generate data from random initialization that are combined with original data. To further enhance the accuracy of the augmented data, we introduce high-order inverse evolution schemes. These schemes consist of only a few explicit computation steps, yet the resulting data pairs can be proven to satisfy the corresponding implicit numerical schemes. In contrast to traditional PDE solvers that require small time steps or implicit schemes to guarantee accuracy, our data augmentation method employs explicit schemes with relatively large time steps, thereby significantly reducing computational costs. Accuracy and efficacy experiments confirm the effectiveness of our approach. In addition, we validate our approach through experiments with the Fourier neural operator (FNO) and UNet on three common evolution equations: Burgers' equation, the Allen-Cahn equation and the Navier-Stokes equation. The results demonstrate a significant improvement in the performance and robustness of the FNO when coupled with our inverse evolution data augmentation method. This article is part of the theme issue 'Partial differential equations in data science'.
Citations: 0
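The core trick — explicit computation producing pairs that exactly satisfy an implicit scheme — can be seen on the 1-D heat equation. For implicit Euler, u1 = u0 + dt·L·u1, so given a random target state u1 one recovers u0 = u1 − dt·L·u1 in a single explicit step, and the pair (u0, u1) satisfies the implicit scheme by construction. This first-order sketch is illustrative only; the paper's high-order schemes and equations differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt = 64, 1e-4

def lap(u):
    """Periodic 1-D finite-difference Laplacian on a grid of spacing 1/n."""
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) * n ** 2

# Inverse-evolution pair: draw a random *target* state u1, then step it
# BACKWARD with one cheap explicit operation. By construction the pair
# (u0, u1) satisfies the implicit Euler scheme u1 = u0 + dt * lap(u1).
u1 = rng.normal(size=n)
u0 = u1 - dt * lap(u1)

residual = np.max(np.abs(u1 - (u0 + dt * lap(u1))))
print(residual)  # zero up to floating-point round-off
```

No linear solve is needed to generate the training pair, which is exactly why the method is cheap compared with running an implicit solver forward.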
Isotropic Q-fractional Brownian motion on the sphere: regularity and fast simulation
Annika Lang, Björn Müller
DOI: 10.1098/rsta.2024.0238 · Phil. Trans. R. Soc. A 383(2298): 20240238 · Published 2025-06-05
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12139523/pdf/
Abstract: As an extension of isotropic Gaussian random fields and [Formula: see text]-Wiener processes on [Formula: see text]-dimensional spheres, isotropic [Formula: see text]-fractional Brownian motion is introduced and sample Hölder regularity in space-time is shown depending on the regularity of the spatial covariance operator [Formula: see text] and the Hurst parameter [Formula: see text]. The processes are approximated by a spectral method in space for which strong and almost sure convergence are shown. The underlying sample paths of fractional Brownian motion are simulated by circulant embedding or conditionalized random midpoint displacement. Temporal accuracy and computational complexity are numerically tested, the latter matching the complexity of simulating a [Formula: see text]-Wiener process if allowing for a temporal error. This article is part of the theme issue 'Partial differential equations in data science'.
Citations: 0
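The temporal building block here is a fractional Brownian motion path. The paper simulates these with circulant embedding or conditionalized random midpoint displacement in O(n log n); the sketch below instead uses the simple O(n³) Cholesky reference construction from the exact covariance, which is easier to verify and samples from the same law. Grid size and Hurst parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def fbm_cholesky(hurst, n, T=1.0):
    """Exact sample of fractional Brownian motion on a uniform grid via a
    Cholesky factor of the covariance
        C(s, t) = 0.5 * (s^(2H) + t^(2H) - |t - s|^(2H)).
    Circulant embedding achieves the same law in O(n log n); this O(n^3)
    reference version trades speed for transparency."""
    t = np.linspace(T / n, T, n)          # start at T/n: C is singular at 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    return t, np.linalg.cholesky(cov) @ rng.normal(size=n)

t, path = fbm_cholesky(hurst=0.75, n=256)
print(path.shape)  # (256,)
```

A quick sanity check of the law: the endpoint B(T) has variance T^(2H), so repeated draws on a coarse grid should have empirical endpoint variance near 1 for T = 1.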
Wasserstein gradient flows of maximum mean discrepancy functionals with distance kernels under Sobolev regularization
Richard Duong, Nicolaj Rux, Viktor Stein, Gabriele Steidl
DOI: 10.1098/rsta.2024.0243 · Phil. Trans. R. Soc. A 383(2298): 20240243 · Published 2025-06-05
Abstract: We consider Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals, [Formula: see text] for positive and negative distance kernels [Formula: see text] and given target measures [Formula: see text] on [Formula: see text]. Since in one dimension, the Wasserstein space can be isometrically embedded into the cone [Formula: see text] of quantile functions, Wasserstein gradient flows can be characterized by the solution of an associated Cauchy problem on [Formula: see text]. While for the negative kernel, the MMD functional is geodesically convex, this is not the case for the positive kernel, which needs to be handled to ensure the existence of the flow. We propose to add a regularizing Sobolev term [Formula: see text] corresponding to the Laplacian with Neumann boundary conditions to the Cauchy problem of quantile functions. Indeed, this ensures the existence of a generalized minimizing movement (GMM) for the positive kernel. Furthermore, for the negative kernel, we demonstrate by numerical examples how the Laplacian rectifies a 'dissipation-of-mass' defect of the MMD gradient flow. This article is part of the theme issue 'Partial differential equations in data science'.
Citations: 0
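The isometric embedding the authors exploit has a concrete computational face: in one dimension, the quantile function of an empirical measure is just its sorted sample vector, so the Wasserstein-2 distance between two equally sized empirical measures is the l2 distance of the sorted samples. The sketch below illustrates only this embedding, not the paper's MMD flow or its Sobolev regularization.

```python
import numpy as np

rng = np.random.default_rng(4)

def w2_empirical(x, y):
    """W2 between two empirical measures with equally many atoms in 1-D.
    The Wasserstein space embeds isometrically into the cone of quantile
    functions, and the empirical quantile function is the sorted sample,
    so W2 reduces to an l2 distance after sorting."""
    xs, ys = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((xs - ys) ** 2))

x = rng.normal(0.0, 1.0, size=20000)
y = rng.normal(2.0, 1.0, size=20000)
dist = w2_empirical(x, y)
print(dist)  # ≈ 2.0: W2 between N(0,1) and N(2,1) equals the mean shift
```

This is what makes the 1-D setting special: a gradient flow on measures becomes an ordinary Cauchy problem for the (sorted) quantile values.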
Closing the ODE-SDE gap in score-based diffusion models through the Fokker-Planck equation
Teo Deveney, Jan Stanczuk, Lisa Kreusser, Chris Budd, Carola-Bibiane Schönlieb
DOI: 10.1098/rsta.2024.0503 · Phil. Trans. R. Soc. A 383(2298): 20240503 · Published 2025-06-05
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12139524/pdf/
Abstract: Score-based diffusion models have emerged as one of the most promising frameworks for deep generative modelling, due to both their mathematical foundations and their state-of-the-art performance in many tasks. Empirically, it has been reported that samplers based on ordinary differential equations (ODEs) are inferior to those based on stochastic differential equations (SDEs). In this article, we systematically analyse the difference between the ODE and SDE dynamics of score-based diffusion models and show how this relates to an associated Fokker-Planck equation. We rigorously describe the full range of dynamics and approximations arising when training score-based diffusion models and derive a theoretical upper bound on the Wasserstein 2-distance between the ODE- and SDE-induced distributions in terms of a Fokker-Planck residual. We also show numerically that conventional score-based diffusion models can exhibit significant differences between ODE- and SDE-induced distributions that we demonstrate using explicit comparisons. Moreover, we show numerically that reducing this Fokker-Planck residual by adding it as an additional regularization term during training closes the gap between ODE- and SDE-induced distributions. Our experiments suggest that this regularization can improve the distribution generated by the ODE; however, this can come at the cost of degraded SDE sample quality. This article is part of the theme issue 'Partial differential equations in data science'.
Citations: 0
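The ODE-SDE comparison can be reproduced in a toy setting with no training involved: for an Ornstein-Uhlenbeck forward process applied to Gaussian data, the score is available in closed form, so the probability-flow ODE and the reverse SDE agree in law and any remaining gap is pure discretization. With a learned, imperfect score the two samplers diverge, which is the gap the paper bounds via the Fokker-Planck residual. Parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Forward noising dX = -X dt + sqrt(2) dW applied to Gaussian data N(0, 0.25):
# the marginal variance v(t) and hence the exact score are known in closed form.
var0, T, dt = 0.25, 4.0, 0.005
v = lambda t: 1.0 - (1.0 - var0) * np.exp(-2.0 * t)
score = lambda x, t: -x / v(t)

n = 20000
x_ode = rng.normal(0.0, np.sqrt(v(T)), size=n)   # start both at the noised marginal
x_sde = x_ode.copy()
t = T
while t > dt / 2:
    # probability-flow ODE: dx/dt = -x - score, integrated backward in time
    x_ode = x_ode + dt * (x_ode + score(x_ode, t))
    # reverse SDE: same marginals in continuous time, but stochastic paths
    x_sde = (x_sde + dt * (x_sde + 2.0 * score(x_sde, t))
             + np.sqrt(2.0 * dt) * rng.normal(size=n))
    t -= dt
print(x_ode.var(), x_sde.var())  # both ≈ 0.25 when the score is exact
```

Replacing `score` with an approximation breaks the agreement: the two samplers respond differently to score error, which is exactly what the Fokker-Planck residual quantifies.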
Gradient flow-based modularity maximization for community detection in multiplex networks
Kai Bergermann, Martin Stoll
DOI: 10.1098/rsta.2024.0244 · Phil. Trans. R. Soc. A 383(2298): 20240244 · Published 2025-06-05
Abstract: We propose two methods for the unsupervised detection of communities in undirected multiplex networks. These networks consist of multiple layers that record different relationships between the same entities or incorporate data from different sources. Both methods are formulated as gradient flows of suitable energy functionals: the first (MPBTV) builds on the minimization of a balanced total variation functional, which we show to be equivalent to multiplex modularity maximization, while the second (DGFM3) directly maximizes multiplex modularity. The resulting nonlinear matrix-valued ordinary differential equations (ODEs) are solved efficiently by a graph Merriman-Bence-Osher (MBO) scheme. Key to the efficiency is the approximate integration of the discrete linear differential operators by truncated eigendecompositions in the matrix exponential function. Numerical experiments on several real-world multiplex networks show that our methods are competitive with the state of the art with respect to various metrics. Their major benefit is a significant reduction of computational complexity leading to runtimes that are orders of magnitude faster for large multiplex networks. This article is part of the theme issue 'Partial differential equations in data science'.
Citations: 0
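The objective both methods maximize is (a multiplex extension of) Newman modularity. As a reference point, the single-layer functional can be evaluated in a few lines; the multiplex version in the paper adds inter-layer coupling terms on top of this. The toy graph below is illustrative.

```python
import numpy as np

def modularity(A, labels):
    """Single-layer Newman modularity
        Q = (1 / 2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j),
    summed over all ordered pairs, for a symmetric adjacency matrix A."""
    k = A.sum(axis=1)
    two_m = k.sum()
    same = labels[:, None] == labels[None, :]
    return (A - np.outer(k, k) / two_m)[same].sum() / two_m

# Two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

good = modularity(A, np.array([0, 0, 0, 1, 1, 1]))  # split at the bridge
bad = modularity(A, np.array([0, 1, 0, 1, 0, 1]))   # alternating labels
print(good, bad)  # the triangle split scores higher (5/14 vs -3/14)
```

A gradient flow of this energy over relaxed label assignments, thresholded by an MBO scheme, is the discrete picture behind the paper's matrix-valued ODEs.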
Equivariant geometric convolutions for dynamical systems on vector and tensor images
Wilson G Gregory, David W Hogg, Ben Blum-Smith, Maria Teresa Arias, Kaze W K Wong, Soledad Villar
DOI: 10.1098/rsta.2024.0247 · Phil. Trans. R. Soc. A 383(2298): 20240247 · Published 2025-06-05
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12139525/pdf/
Abstract: Machine learning methods are increasingly being employed as surrogate models in place of computationally expensive and slow numerical integrators for a bevy of applications in the natural sciences. However, while the laws of physics are relationships between scalars, vectors and tensors that hold regardless of the frame of reference or chosen coordinate system, surrogate machine learning models are not coordinate-free by default. We enforce coordinate freedom by using geometric convolutions in three model architectures: a ResNet, a Dilated ResNet and a UNet. In numerical experiments emulating two-dimensional compressible Navier-Stokes, we see better accuracy and improved stability compared with baseline surrogate models in almost all cases. The ease of enforcing coordinate freedom without making major changes to the model architecture provides an exciting recipe for any convolutional neural network-based method applied to an appropriate class of problems. This article is part of the theme issue 'Partial differential equations in data science'.
Citations: 0
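The simplest scalar-field instance of rotation equivariance can be checked directly: average a convolution kernel over the four 90° rotations, and periodic convolution with the symmetrized kernel then commutes with grid rotations. This NumPy sketch is a minimal illustration of that property, not the paper's geometric convolutions for vector and tensor images.

```python
import numpy as np

rng = np.random.default_rng(6)

def conv_periodic(img, ker):
    """Periodic 2-D cross-correlation with a 3x3 kernel, via shifts."""
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += ker[di + 1, dj + 1] * np.roll(img, (-di, -dj), axis=(0, 1))
    return out

ker = rng.normal(size=(3, 3))
ker = sum(np.rot90(ker, k) for k in range(4)) / 4   # average over C4 rotations

img = rng.normal(size=(16, 16))
lhs = conv_periodic(np.rot90(img), ker)   # rotate, then convolve
rhs = np.rot90(conv_periodic(img, ker))   # convolve, then rotate
print(np.max(np.abs(lhs - rhs)))  # ≈ 0: the symmetrized filter is equivariant
```

Vector and tensor images additionally require rotating the field values, not just the grid, which is what the geometric convolutions in the paper handle.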
Defending against diverse attacks in federated learning through consensus-based bi-level optimization
Nicolás García Trillos, Aditya Kumar Akash, Sixu Li, Konstantin Riedl, Yuhua Zhu
DOI: 10.1098/rsta.2024.0235 · Phil. Trans. R. Soc. A 383(2298): 20240235 · Published 2025-06-05
Abstract: Adversarial attacks pose significant challenges in many machine learning applications, particularly in the setting of distributed training and federated learning, where malicious agents seek to corrupt the training process with the goal of jeopardizing and compromising the performance and reliability of the final models. In this paper, we address the problem of robust federated learning in the presence of such attacks by formulating the training task as a bi-level optimization problem. We conduct a theoretical analysis of the resilience of consensus-based bi-level optimization (CB²O), an interacting multi-particle metaheuristic optimization method, in adversarial settings. Specifically, we provide a global convergence analysis of CB²O in mean-field law in the presence of malicious agents, demonstrating the robustness of CB²O against a diverse range of attacks. Thereby, we offer insights into how specific hyperparameter choices enable mitigation of adversarial effects. On the practical side, we extend CB²O to the clustered federated learning setting by proposing FedCB²O, a novel interacting multi-particle system, and design a practical algorithm that addresses the demands of real-world applications. Extensive experiments demonstrate the robustness of the FedCB²O algorithm against label-flipping attacks in decentralized clustered federated learning scenarios, showcasing its effectiveness in practical contexts. This article is part of the theme issue 'Partial differential equations in data science'.
Citations: 0
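The particle dynamics underneath CB²O is plain consensus-based optimization (CBO): particles drift toward a Gibbs-weighted consensus point and diffuse proportionally to their distance from it. The sketch below implements that single-level dynamics on a toy quadratic; the bi-level CB²O of the paper nests two such systems, and all hyperparameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def cbo(f, dim=2, n=200, steps=500, dt=0.01, lam=1.0, sig=0.7, alpha=50.0):
    """Plain consensus-based optimization with anisotropic noise: each
    particle is pulled toward the Gibbs-weighted consensus point m and
    diffuses with amplitude proportional to its distance from m, so the
    noise vanishes as the ensemble collapses onto a candidate minimizer."""
    x = rng.normal(0.0, 2.0, size=(n, dim))
    for _ in range(steps):
        fx = f(x)
        w = np.exp(-alpha * (fx - fx.min()))      # Gibbs weights (stabilized)
        m = (w[:, None] * x).sum(axis=0) / w.sum()  # consensus point
        x += -lam * (x - m) * dt \
             + sig * (x - m) * np.sqrt(dt) * rng.normal(size=x.shape)
    return m

shift = np.array([1.0, -2.0])
minimizer = cbo(lambda x: ((x - shift) ** 2).sum(axis=1))
print(minimizer)  # close to the global minimizer [1, -2]
```

Robustness in the adversarial setting comes from the Gibbs weighting: particles (or clients) with poor objective values receive exponentially small weight in the consensus point, which is the mechanism the paper's mean-field analysis makes precise.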