arXiv - STAT - Machine Learning: Latest Publications

Think Twice Before You Act: Improving Inverse Problem Solving With MCMC
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08551
Yaxuan Zhu, Zehao Dou, Haoxin Zheng, Yasi Zhang, Ying Nian Wu, Ruiqi Gao
{"title":"Think Twice Before You Act: Improving Inverse Problem Solving With MCMC","authors":"Yaxuan Zhu, Zehao Dou, Haoxin Zheng, Yasi Zhang, Ying Nian Wu, Ruiqi Gao","doi":"arxiv-2409.08551","DOIUrl":"https://doi.org/arxiv-2409.08551","url":null,"abstract":"Recent studies demonstrate that diffusion models can serve as a strong prior\u0000for solving inverse problems. A prominent example is Diffusion Posterior\u0000Sampling (DPS), which approximates the posterior distribution of data given the\u0000measure using Tweedie's formula. Despite the merits of being versatile in\u0000solving various inverse problems without re-training, the performance of DPS is\u0000hindered by the fact that this posterior approximation can be inaccurate\u0000especially for high noise levels. Therefore, we propose textbf{D}iffusion\u0000textbf{P}osterior textbf{MC}MC (textbf{DPMC}), a novel inference algorithm\u0000based on Annealed MCMC to solve inverse problems with pretrained diffusion\u0000models. We define a series of intermediate distributions inspired by the\u0000approximated conditional distributions used by DPS. Through annealed MCMC\u0000sampling, we encourage the samples to follow each intermediate distribution\u0000more closely before moving to the next distribution at a lower noise level, and\u0000therefore reduce the accumulated error along the path. We test our algorithm in\u0000various inverse problems, including super resolution, Gaussian deblurring,\u0000motion deblurring, inpainting, and phase retrieval. Our algorithm outperforms\u0000DPS with less number of evaluations across nearly all tasks, and is competitive\u0000among existing approaches.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
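To make the annealing loop concrete, here is a minimal numpy sketch of the DPMC idea on a toy linear inverse problem. The closed-form `prior_score` stands in for a pretrained diffusion model, and all function names, step sizes, and schedules are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def prior_score(x, sigma):
    # Score of a standard-normal prior convolved with N(0, sigma^2 I); a stand-in
    # for a pretrained diffusion model's score network.
    return -x / (1.0 + sigma**2)

def likelihood_score(x, y, A, sigma_y):
    # Gradient of log p(y | x) for the measurement model y = A x + Gaussian noise.
    return A.T @ (y - A @ x) / sigma_y**2

def dpmc_sample(y, A, noise_levels, n_mcmc=20, step=1e-3, sigma_y=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for sigma in noise_levels:        # anneal from high noise to low noise
        for _ in range(n_mcmc):       # extra Langevin steps per level: "think twice"
            g = prior_score(x, sigma) + likelihood_score(x, y, A, sigma_y)
            x = x + step * g + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
    return x

# Toy usage: recover a 20-dim signal from 10 noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 20))
x_true = rng.standard_normal(20)
y = A @ x_true + 0.1 * rng.standard_normal(10)
x_hat = dpmc_sample(y, A, noise_levels=np.geomspace(10.0, 0.01, 30))
```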
Uncertainty Estimation by Density Aware Evidential Deep Learning
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08754
Taeseong Yoon, Heeyoung Kim
{"title":"Uncertainty Estimation by Density Aware Evidential Deep Learning","authors":"Taeseong Yoon, Heeyoung Kim","doi":"arxiv-2409.08754","DOIUrl":"https://doi.org/arxiv-2409.08754","url":null,"abstract":"Evidential deep learning (EDL) has shown remarkable success in uncertainty\u0000estimation. However, there is still room for improvement, particularly in\u0000out-of-distribution (OOD) detection and classification tasks. The limited OOD\u0000detection performance of EDL arises from its inability to reflect the distance\u0000between the testing example and training data when quantifying uncertainty,\u0000while its limited classification performance stems from its parameterization of\u0000the concentration parameters. To address these limitations, we propose a novel\u0000method called Density Aware Evidential Deep Learning (DAEDL). DAEDL integrates\u0000the feature space density of the testing example with the output of EDL during\u0000the prediction stage, while using a novel parameterization that resolves the\u0000issues in the conventional parameterization. We prove that DAEDL enjoys a\u0000number of favorable theoretical properties. DAEDL demonstrates state-of-the-art\u0000performance across diverse downstream tasks related to uncertainty estimation\u0000and classification","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
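The mechanism described above, feature-space density modulating evidential output, can be illustrated with a generic Dirichlet-based scheme. This is a hedged sketch of the idea only: the `logits` and `log_density` inputs are assumed to come from a trained classifier and a feature-space density estimator, and the scaling rule below is not DAEDL's exact parameterization.

```python
import numpy as np

def density_aware_evidence(logits, log_density):
    # Generic evidential setup: non-negative evidence defines a Dirichlet over
    # class probabilities. Scaling the evidence by the feature-space density makes
    # predictions diffuse (high uncertainty) far from the training data.
    evidence = np.exp(logits) * np.exp(log_density)   # shrinks for OOD-like inputs
    alpha = 1.0 + evidence                            # Dirichlet concentration
    probs = alpha / alpha.sum()                       # predictive mean
    vacuity = len(alpha) / alpha.sum()                # in (0, 1]; 1 = total ignorance
    return probs, vacuity

# An in-distribution point (high density) vs. an OOD point (low density):
print(density_aware_evidence(np.array([3.0, 0.5, 0.1]), log_density=0.0))
print(density_aware_evidence(np.array([3.0, 0.5, 0.1]), log_density=-8.0))
```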
An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08445
Robert Sisneros, Tushar M. Athawale, David Pugmire, Kenneth Moreland
{"title":"An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations","authors":"Robert Sisneros, Tushar M. Athawale, David Pugmire, Kenneth Moreland","doi":"arxiv-2409.08445","DOIUrl":"https://doi.org/arxiv-2409.08445","url":null,"abstract":"We present a simple comparative framework for testing and developing\u0000uncertainty modeling in uncertain marching cubes implementations. The selection\u0000of a model to represent the probability distribution of uncertain values\u0000directly influences the memory use, run time, and accuracy of an uncertainty\u0000visualization algorithm. We use an entropy calculation directly on ensemble\u0000data to establish an expected result and then compare the entropy from various\u0000probability models, including uniform, Gaussian, histogram, and quantile\u0000models. Our results verify that models matching the distribution of the\u0000ensemble indeed match the entropy. We further show that fewer bins in\u0000nonparametric histogram models are more effective whereas large numbers of bins\u0000in quantile models approach data accuracy.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
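The comparison described above can be reproduced in miniature: compute level-crossing probabilities directly from ensemble members as the reference, then from a fitted model, and compare the resulting entropies. A sketch under the simplifying assumption of per-point (rather than per-cell) crossing probabilities; scipy is an assumed dependency.

```python
import numpy as np
from scipy.stats import norm

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def ensemble_entropy(ensemble, isovalue):
    # Reference: empirical probability that each point exceeds the isovalue.
    return binary_entropy((ensemble > isovalue).mean(axis=0))

def gaussian_model_entropy(ensemble, isovalue):
    # Same quantity under a Gaussian model fitted independently at each point.
    mu, sd = ensemble.mean(axis=0), ensemble.std(axis=0) + 1e-12
    return binary_entropy(norm.sf(isovalue, loc=mu, scale=sd))

# For a Gaussian-distributed ensemble, the Gaussian model should match the reference.
ens = np.random.default_rng(0).normal(0.0, 1.0, size=(64, 1000))
print(abs(ensemble_entropy(ens, 0.3).mean() - gaussian_model_entropy(ens, 0.3).mean()))
```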
A Bayesian Approach to Clustering via the Proper Bayesian Bootstrap: the Bayesian Bagged Clustering (BBC) algorithm
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08954
Federico Maria Quetti, Silvia Figini, Elena Ballante
{"title":"A Bayesian Approach to Clustering via the Proper Bayesian Bootstrap: the Bayesian Bagged Clustering (BBC) algorithm","authors":"Federico Maria Quetti, Silvia Figini, Elena ballante","doi":"arxiv-2409.08954","DOIUrl":"https://doi.org/arxiv-2409.08954","url":null,"abstract":"The paper presents a novel approach for unsupervised techniques in the field\u0000of clustering. A new method is proposed to enhance existing literature models\u0000using the proper Bayesian bootstrap to improve results in terms of robustness\u0000and interpretability. Our approach is organized in two steps: k-means\u0000clustering is used for prior elicitation, then proper Bayesian bootstrap is\u0000applied as resampling method in an ensemble clustering approach. Results are\u0000analyzed introducing measures of uncertainty based on Shannon entropy. The\u0000proposal provides clear indication on the optimal number of clusters, as well\u0000as a better representation of the clustered data. Empirical results are\u0000provided on simulated data showing the methodological and empirical advances\u0000obtained.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
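A minimal sketch of the two-step scheme, with two stated simplifications: it uses Rubin's plain Bayesian bootstrap (Dirichlet observation weights) rather than the proper variant that also mixes in prior mass, and it summarizes the ensemble with a co-assignment matrix, whose entropy serves as the Shannon-entropy uncertainty measure.

```python
import numpy as np
from sklearn.cluster import KMeans

def bayesian_bagged_clustering(X, k, n_boot=50, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1 (prior elicitation): plain k-means centroids as the initial guess.
    init = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).cluster_centers_
    # Step 2 (ensemble): refit under Dirichlet(1, ..., 1) observation weights,
    # i.e. Rubin's plain Bayesian bootstrap.
    co = np.zeros((len(X), len(X)))
    for _ in range(n_boot):
        w = rng.dirichlet(np.ones(len(X)))
        labels = KMeans(n_clusters=k, init=init, n_init=1).fit(X, sample_weight=w).labels_
        co += labels[:, None] == labels[None, :]
    co /= n_boot               # co-assignment frequencies, robust to label switching
    # Shannon entropy of co-assignment as a pointwise uncertainty measure.
    p = np.clip(co, 1e-12, 1 - 1e-12)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p)).mean(axis=1)
    return co, entropy

X = np.random.default_rng(0).standard_normal((100, 2))
co, ent = bayesian_bagged_clustering(X, k=3)
print(ent.mean())   # lower mean entropy = more stable clustering
```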
Causal GNNs: A GNN-Driven Instrumental Variable Approach for Causal Inference in Networks
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08544
Xiaojing Du, Feiyu Yang, Wentao Gao, Xiongren Chen
{"title":"Causal GNNs: A GNN-Driven Instrumental Variable Approach for Causal Inference in Networks","authors":"Xiaojing Du, Feiyu Yang, Wentao Gao, Xiongren Chen","doi":"arxiv-2409.08544","DOIUrl":"https://doi.org/arxiv-2409.08544","url":null,"abstract":"As network data applications continue to expand, causal inference within\u0000networks has garnered increasing attention. However, hidden confounders\u0000complicate the estimation of causal effects. Most methods rely on the strong\u0000ignorability assumption, which presumes the absence of hidden confounders-an\u0000assumption that is both difficult to validate and often unrealistic in\u0000practice. To address this issue, we propose CgNN, a novel approach that\u0000leverages network structure as instrumental variables (IVs), combined with\u0000graph neural networks (GNNs) and attention mechanisms, to mitigate hidden\u0000confounder bias and improve causal effect estimation. By utilizing network\u0000structure as IVs, we reduce confounder bias while preserving the correlation\u0000with treatment. Our integration of attention mechanisms enhances robustness and\u0000improves the identification of important nodes. Validated on two real-world\u0000datasets, our results demonstrate that CgNN effectively mitigates hidden\u0000confounder bias and offers a robust GNN-driven IV framework for causal\u0000inference in complex network data.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
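CgNN's core ingredient is instrumental-variable estimation with network structure as the instrument. The linear backbone of that idea, classical two-stage least squares, is sketched below on toy data with a hidden confounder; the paper replaces both stages with GNNs and attention.

```python
import numpy as np

def two_stage_least_squares(Z, T, Y):
    # Stage 1: project the treatment onto the instrument, discarding the component
    # correlated with hidden confounders. Stage 2: regress the outcome on the
    # fitted treatment to recover the causal coefficient.
    Z1 = np.column_stack([np.ones(len(Z)), Z])
    T_hat = Z1 @ np.linalg.lstsq(Z1, T, rcond=None)[0]
    X1 = np.column_stack([np.ones(len(Z)), T_hat])
    return np.linalg.lstsq(X1, Y, rcond=None)[0][1]

# Toy data with a hidden confounder U: naive regression of Y on T is biased
# upward, the IV estimate is not (true effect = 1.0).
rng = np.random.default_rng(0)
U = rng.standard_normal(5000)             # hidden confounder
Z = rng.standard_normal(5000)             # instrument (network-derived, per the paper)
T = Z + U + 0.5 * rng.standard_normal(5000)
Y = 1.0 * T + 2.0 * U + 0.5 * rng.standard_normal(5000)
print(two_stage_least_squares(Z, T, Y))   # close to 1.0
```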
Sub-graph Based Diffusion Model for Link Prediction
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08487
Hang Li, Wei Jin, Geri Skenderi, Harry Shomer, Wenzhuo Tang, Wenqi Fan, Jiliang Tang
{"title":"Sub-graph Based Diffusion Model for Link Prediction","authors":"Hang Li, Wei Jin, Geri Skenderi, Harry Shomer, Wenzhuo Tang, Wenqi Fan, Jiliang Tang","doi":"arxiv-2409.08487","DOIUrl":"https://doi.org/arxiv-2409.08487","url":null,"abstract":"Denoising Diffusion Probabilistic Models (DDPMs) represent a contemporary\u0000class of generative models with exceptional qualities in both synthesis and\u0000maximizing the data likelihood. These models work by traversing a forward\u0000Markov Chain where data is perturbed, followed by a reverse process where a\u0000neural network learns to undo the perturbations and recover the original data.\u0000There have been increasing efforts exploring the applications of DDPMs in the\u0000graph domain. However, most of them have focused on the generative perspective.\u0000In this paper, we aim to build a novel generative model for link prediction. In\u0000particular, we treat link prediction between a pair of nodes as a conditional\u0000likelihood estimation of its enclosing sub-graph. With a dedicated design to\u0000decompose the likelihood estimation process via the Bayesian formula, we are\u0000able to separate the estimation of sub-graph structure and its node features.\u0000Such designs allow our model to simultaneously enjoy the advantages of\u0000inductive learning and the strong generalization capability. Remarkably,\u0000comprehensive experiments across various datasets validate that our proposed\u0000method presents numerous advantages: (1) transferability across datasets\u0000without retraining, (2) promising generalization on limited training data, and\u0000(3) robustness against graph adversarial attacks.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
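The object at the center of this formulation, the enclosing sub-graph of a candidate link, is easy to make concrete. A minimal sketch using networkx (an assumed dependency); the diffusion model that scores the sub-graph is beyond a few lines.

```python
import networkx as nx

def enclosing_subgraph(G, u, v, hops=2):
    # Union of the k-hop neighborhoods of both endpoints: the sub-graph whose
    # conditional likelihood a link-prediction diffusion model would score.
    nodes = {u, v}
    for s in (u, v):
        nodes |= set(nx.single_source_shortest_path_length(G, s, cutoff=hops))
    return G.subgraph(nodes).copy()

G = nx.karate_club_graph()
sg = enclosing_subgraph(G, 0, 33, hops=1)
print(sg.number_of_nodes(), sg.number_of_edges())
```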
Batch Ensemble for Variance Dependent Regret in Stochastic Bandits
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08570
Asaf Cassel (School of Computer Science, Tel Aviv University), Orin Levy (School of Computer Science, Tel Aviv University), Yishay Mansour (School of Computer Science, Tel Aviv University; Google Research, Tel Aviv)
{"title":"Batch Ensemble for Variance Dependent Regret in Stochastic Bandits","authors":"Asaf CasselSchool of Computer Science, Tel Aviv University, Orin LevySchool of Computer Science, Tel Aviv University, Yishay MansourSchool of Computer Science, Tel Aviv UniversityGoogle Research, Tel Aviv","doi":"arxiv-2409.08570","DOIUrl":"https://doi.org/arxiv-2409.08570","url":null,"abstract":"Efficiently trading off exploration and exploitation is one of the key\u0000challenges in online Reinforcement Learning (RL). Most works achieve this by\u0000carefully estimating the model uncertainty and following the so-called\u0000optimistic model. Inspired by practical ensemble methods, in this work we\u0000propose a simple and novel batch ensemble scheme that provably achieves\u0000near-optimal regret for stochastic Multi-Armed Bandits (MAB). Crucially, our\u0000algorithm has just a single parameter, namely the number of batches, and its\u0000value does not depend on distributional properties such as the scale and\u0000variance of the losses. We complement our theoretical results by demonstrating\u0000the effectiveness of our algorithm on synthetic benchmarks.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
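One plausible reading of the batch-ensemble index, offered as a guess rather than the authors' algorithm: split each arm's reward history round-robin into a fixed number of batches and act greedily with respect to the largest batch mean, an ensemble-style source of optimism whose only parameter is the batch count.

```python
import numpy as np

def batch_ensemble_bandit(arms, horizon, n_batches=10, seed=0):
    rng = np.random.default_rng(seed)
    K = len(arms)
    batches = [[[] for _ in range(n_batches)] for _ in range(K)]
    pulls = np.zeros(K, dtype=int)
    history = []
    for t in range(horizon):
        if t < K:
            a = t                                        # initialize: pull each arm once
        else:
            index = [max(np.mean(b) for b in batches[j] if b) for j in range(K)]
            a = int(np.argmax(index))                    # optimistic max-over-batches index
        r = arms[a](rng)
        batches[a][pulls[a] % n_batches].append(r)       # round-robin batch assignment
        pulls[a] += 1
        history.append(r)
    return np.array(history)

# Two Gaussian arms with means 0.4 and 0.6.
arms = [lambda rng: rng.normal(0.4, 1.0), lambda rng: rng.normal(0.6, 1.0)]
print(batch_ensemble_bandit(arms, horizon=5000).mean())  # approaches 0.6
```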
Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08861
Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, Ricky T. Q. Chen
{"title":"Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control","authors":"Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, Ricky T. Q. Chen","doi":"arxiv-2409.08861","DOIUrl":"https://doi.org/arxiv-2409.08861","url":null,"abstract":"Dynamical generative models that produce samples through an iterative\u0000process, such as Flow Matching and denoising diffusion models, have seen\u0000widespread use, but there has not been many theoretically-sound methods for\u0000improving these models with reward fine-tuning. In this work, we cast reward\u0000fine-tuning as stochastic optimal control (SOC). Critically, we prove that a\u0000very specific memoryless noise schedule must be enforced during fine-tuning, in\u0000order to account for the dependency between the noise variable and the\u0000generated samples. We also propose a new algorithm named Adjoint Matching which\u0000outperforms existing SOC algorithms, by casting SOC problems as a regression\u0000problem. We find that our approach significantly improves over existing methods\u0000for reward fine-tuning, achieving better consistency, realism, and\u0000generalization to unseen human preference reward models, while retaining sample\u0000diversity.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
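The SOC framing has a compact form: reward fine-tuning minimizes an expected control cost minus terminal reward, E[ integral of ||u(X_t, t)||^2 / 2 dt - r(X_1) ], over a controlled SDE. The toy Monte-Carlo evaluator below illustrates that objective only; it is not the Adjoint Matching algorithm or its memoryless noise schedule, and the 1D dynamics and names are assumptions.

```python
import numpy as np

def soc_objective(control, reward, n_paths=2000, n_steps=100, seed=0):
    # Estimates E[ sum_t u(X_t, t)^2 / 2 * dt - r(X_1) ] for the controlled
    # 1D SDE dX = u(X, t) dt + dW on [0, 1] via Euler-Maruyama simulation.
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = rng.standard_normal(n_paths)                 # X_0 ~ N(0, 1)
    cost = np.zeros(n_paths)
    for i in range(n_steps):
        u = control(x, i * dt)
        cost += 0.5 * u**2 * dt                      # running control effort
        x += u * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return float(np.mean(cost - reward(x)))          # terminal reward enters negatively

# Steering toward x = 1 beats doing nothing under a reward peaked at 1:
reward = lambda x: -(x - 1.0) ** 2
print(soc_objective(lambda x, t: np.zeros_like(x), reward))   # uncontrolled
print(soc_objective(lambda x, t: 1.0 - x, reward))            # crude steering control
```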
CHARM: Creating Halos with Auto-Regressive Multi-stage networks
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.09124
Shivam Pandey, Chirag Modi, Benjamin D. Wandelt, Deaglan J. Bartlett, Adrian E. Bayer, Greg L. Bryan, Matthew Ho, Guilhem Lavaux, T. Lucas Makinen, Francisco Villaescusa-Navarro
{"title":"CHARM: Creating Halos with Auto-Regressive Multi-stage networks","authors":"Shivam Pandey, Chirag Modi, Benjamin D. Wandelt, Deaglan J. Bartlett, Adrian E. Bayer, Greg L. Bryan, Matthew Ho, Guilhem Lavaux, T. Lucas Makinen, Francisco Villaescusa-Navarro","doi":"arxiv-2409.09124","DOIUrl":"https://doi.org/arxiv-2409.09124","url":null,"abstract":"To maximize the amount of information extracted from cosmological datasets,\u0000simulations that accurately represent these observations are necessary.\u0000However, traditional simulations that evolve particles under gravity by\u0000estimating particle-particle interactions (N-body simulations) are\u0000computationally expensive and prohibitive to scale to the large volumes and\u0000resolutions necessary for the upcoming datasets. Moreover, modeling the\u0000distribution of galaxies typically involves identifying virialized dark matter\u0000halos, which is also a time- and memory-consuming process for large N-body\u0000simulations, further exacerbating the computational cost. In this study, we\u0000introduce CHARM, a novel method for creating mock halo catalogs by matching the\u0000spatial, mass, and velocity statistics of halos directly from the large-scale\u0000distribution of the dark matter density field. We develop multi-stage neural\u0000spline flow-based networks to learn this mapping at redshift z=0.5 directly\u0000with computationally cheaper low-resolution particle mesh simulations instead\u0000of relying on the high-resolution N-body simulations. We show that the mock\u0000halo catalogs and painted galaxy catalogs have the same statistical properties\u0000as obtained from $N$-body simulations in both real space and redshift space.\u0000Finally, we use these mock catalogs for cosmological inference using\u0000redshift-space galaxy power spectrum, bispectrum, and wavelet-based statistics\u0000using simulation-based inference, performing the first inference with\u0000accelerated forward model simulations and finding unbiased cosmological\u0000constraints with well-calibrated posteriors. The code was developed as part of\u0000the Simons Collaboration on Learning the Universe and is publicly available at\u0000url{https://github.com/shivampcosmo/CHARM}.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
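The input-output relationship CHARM learns, local dark-matter density in, stochastic halo counts out, can be caricatured with a simple Poisson bias model. This toy stand-in is purely illustrative and all parameters are invented: the paper instead learns the conditional with multi-stage neural spline flows and also models halo masses and velocities.

```python
import numpy as np

def mock_halo_counts(delta, amp=0.05, bias=2.0, seed=0):
    # delta: 3D overdensity field, e.g. from a cheap low-resolution particle-mesh run.
    # Draw per-voxel halo counts from a Poisson whose rate rises with local density.
    rng = np.random.default_rng(seed)
    rate = amp * np.clip(1.0 + delta, 0.0, None) ** bias
    return rng.poisson(rate)

delta = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.5, size=(32, 32, 32)) - 1.0
halos = mock_halo_counts(delta)
print(halos.sum(), halos.max())   # denser voxels host more mock halos
```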
Fair CoVariance Neural Networks
arXiv - STAT - Machine Learning · Pub Date: 2024-09-13 · arXiv:2409.08558
Andrea Cavallo, Madeline Navarro, Santiago Segarra, Elvin Isufi
{"title":"Fair CoVariance Neural Networks","authors":"Andrea Cavallo, Madeline Navarro, Santiago Segarra, Elvin Isufi","doi":"arxiv-2409.08558","DOIUrl":"https://doi.org/arxiv-2409.08558","url":null,"abstract":"Covariance-based data processing is widespread across signal processing and\u0000machine learning applications due to its ability to model data\u0000interconnectivities and dependencies. However, harmful biases in the data may\u0000become encoded in the sample covariance matrix and cause data-driven methods to\u0000treat different subpopulations unfairly. Existing works such as fair principal\u0000component analysis (PCA) mitigate these effects, but remain unstable in low\u0000sample regimes, which in turn may jeopardize the fairness goal. To address both\u0000biases and instability, we propose Fair coVariance Neural Networks (FVNNs),\u0000which perform graph convolutions on the covariance matrix for both fair and\u0000accurate predictions. Our FVNNs provide a flexible model compatible with\u0000several existing bias mitigation techniques. In particular, FVNNs allow for\u0000mitigating the bias in two ways: first, they operate on fair covariance\u0000estimates that remove biases from their principal components; second, they are\u0000trained in an end-to-end fashion via a fairness regularizer in the loss\u0000function so that the model parameters are tailored to solve the task directly\u0000in a fair manner. We prove that FVNNs are intrinsically fairer than analogous\u0000PCA approaches thanks to their stability in low sample regimes. We validate the\u0000robustness and fairness of our model on synthetic and real-world data,\u0000showcasing the flexibility of FVNNs along with the tradeoff between fair and\u0000accurate performance.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
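The basic operation named in the abstract, a graph convolution that uses the covariance matrix as the graph shift operator, is a short polynomial filter. A minimal sketch; layer widths, the fair covariance estimator, and the fairness regularizer are all omitted, and the tap values are arbitrary.

```python
import numpy as np

def covariance_filter(C, x, taps):
    # Polynomial graph filter sum_k h_k C^k x, with the covariance matrix C playing
    # the role of the graph shift operator.
    out = np.zeros_like(x, dtype=float)
    power = x.astype(float)
    for h in taps:
        out += h * power
        power = C @ power
    return out

def vnn_layer(C, x, taps):
    # One coVariance neural network layer: filter followed by a ReLU nonlinearity.
    return np.maximum(covariance_filter(C, x, taps), 0.0)

X = np.random.default_rng(0).standard_normal((200, 8))   # 200 samples, 8 features
C = np.cov(X, rowvar=False)                              # sample covariance (8 x 8)
print(vnn_layer(C, X[0], taps=[0.5, 0.3, 0.2]))
```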