Journal of Machine Learning Research: Latest Articles

Dynamic Bayesian Learning for Spatiotemporal Mechanistic Models
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Sudipto Banerjee, Xiang Chen, Ian Frankenburg, Daniel Zhou
We develop an approach for Bayesian learning of spatiotemporal dynamical mechanistic models. Such learning consists of statistical emulation of the mechanistic system, which can efficiently interpolate the system's output from arbitrary inputs. The emulator can then be used to train the system from noisy data by melding information from the observed data with the emulated mechanistic system. This joint melding employs hierarchical state-space models with Gaussian process regression. Assuming the dynamical system is controlled by a finite collection of inputs, Gaussian process regression learns the effect of these parameters through a number of training runs, driving the stochastic innovations of the spatiotemporal state-space component. This enables efficient modeling of the dynamics over space and time. This article details exact inference, with analytically accessible posterior distributions, in hierarchical matrix-variate Normal and Wishart models for designing the emulator. This step obviates expensive iterative algorithms such as Markov chain Monte Carlo or variational approximations. We also show how the approach applies to large-scale emulation by designing a dynamic Bayesian transfer learning framework. Inference on η proceeds via Markov chain Monte Carlo as a post-emulation step, using the emulator as a regression component. We demonstrate this framework by solving inverse problems arising in the analysis of ordinary and partial nonlinear differential equations and, in addition, by applying it to a black-box computer model generating spatiotemporal dynamics across a graphical model.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676262/pdf/
Citations: 0
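A minimal illustration of the emulation idea described above (not the authors' implementation): fit a Gaussian process to a handful of training runs of a toy mechanistic system, then interpolate its output at new inputs. The logistic-growth simulator, lengthscale, and training design below are all invented for this sketch.

```python
import numpy as np

def simulator(theta, t=5.0):
    # Toy mechanistic system: logistic growth x' = theta * x * (1 - x),
    # solved analytically from x(0) = 0.1 and evaluated at time t.
    x0 = 0.1
    return x0 * np.exp(theta * t) / (1.0 - x0 + x0 * np.exp(theta * t))

def rbf(a, b, ls=0.5):
    # Squared-exponential covariance between 1-D input arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# A few training runs of the mechanistic system at design points.
theta_train = np.linspace(0.2, 2.0, 8)
y_train = simulator(theta_train)

# GP posterior mean weights (zero prior mean, small jitter for stability).
K = rbf(theta_train, theta_train) + 1e-8 * np.eye(len(theta_train))
alpha = np.linalg.solve(K, y_train)

def emulate(theta_new):
    # Interpolate the system's output at an arbitrary new input.
    k = rbf(np.atleast_1d(np.asarray(theta_new, float)), theta_train)
    return (k @ alpha)[0]

print(emulate(1.0), simulator(1.0))   # emulator vs. true mechanistic output
```

Once trained, the emulator replaces the (possibly expensive) simulator inside downstream inference, which is what makes melding it with noisy field data computationally feasible.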
Deep Generative Models: Complexity, Dimensionality, and Approximation
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Kevin Wang, Hongqian Niu, Yixin Wang, Didong Li
Generative networks have shown remarkable success in learning complex data distributions, particularly in generating high-dimensional data from lower-dimensional inputs. While this capability is well documented empirically, its theoretical underpinning remains unclear. One common theoretical explanation appeals to the widely accepted manifold hypothesis, which suggests that many real-world datasets, such as images and signals, possess intrinsic low-dimensional geometric structures. Under this hypothesis, it is widely believed that to approximate a distribution on a d-dimensional Riemannian manifold, the latent dimension needs to be at least d or d + 1. In this work, we show that this requirement on the latent dimension is not necessary: generative networks can approximate distributions on d-dimensional Riemannian manifolds from inputs of any dimension, even lower than d, drawing inspiration from space-filling curves. This approach, in turn, leads to a super-exponential complexity bound for the deep neural networks through expanded neurons. Our findings thus challenge the conventional belief about the relationship between input dimensionality and the ability of generative networks to model data distributions. This novel insight not only corroborates the practical effectiveness of generative networks in handling complex data structures, but also underscores a critical trade-off between approximation error, dimensionality, and model complexity.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13021250/pdf/
Citations: 0
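The space-filling-curve intuition behind this result can be illustrated numerically (a toy sketch, not the paper's construction): a one-dimensional latent variable pushed through a piecewise-linear "snake" curve covers the two-dimensional unit square, and the coverage becomes arbitrarily dense as the resolution parameter k grows.

```python
import numpy as np

def snake(z, k):
    """Piecewise-linear curve [0,1] -> [0,1]^2 sweeping k horizontal bands."""
    t = z * k
    row = np.minimum(np.floor(t), k - 1)   # which horizontal band we are in
    frac = t - row                          # position within the band
    x = np.where(row % 2 == 0, frac, 1 - frac)  # alternate sweep direction
    y = row / (k - 1)
    return np.column_stack([x, y])

# Push 1-D uniform latents through the curve: 2-D points emerge.
z = np.random.default_rng(0).uniform(size=20000)
pts = snake(z, k=50)

# Every cell of a coarse grid receives samples: the curve "fills" the square.
H, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=10, range=[[0, 1], [0, 1]])
print((H > 0).all())
```

A neural network can approximate such a curve with piecewise-linear (ReLU) units, which is why a latent dimension below d does not, by itself, prevent approximation, although the construction requires rapidly growing network complexity, as the abstract's trade-off indicates.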
Bayesian Multi-Group Gaussian Process Models for Heterogeneous Group-Structured Data
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Didong Li, Andrew Jones, Sudipto Banerjee, Barbara Engelhardt
Gaussian processes are pervasive in functional data analysis, machine learning, and spatial statistics for modeling complex dependencies. Scientific data are often heterogeneous in their inputs and contain multiple known discrete groups of samples; it is therefore desirable to leverage the similarity among groups while accounting for heterogeneity across them. We propose multi-group Gaussian processes (MGGPs), defined over ℝ^p × 𝒞, where 𝒞 is a finite set representing the group label, by developing general classes of valid (positive definite) covariance functions on such domains. MGGPs accurately recover relationships between the groups and efficiently share strength across samples from all groups during inference, while capturing distinct group-specific behaviors in the conditional posterior distributions. We demonstrate inference in MGGPs through simulation experiments, and we apply our proposed MGGP regression framework to gene expression data to illustrate the behavior and enhanced inferential capabilities of multi-group Gaussian processes obtained by jointly modeling continuous and categorical variables.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12463451/pdf/
Citations: 0
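A minimal sketch of one natural covariance on ℝ^p × 𝒞 (illustrative only; the paper develops general classes of such kernels): an RBF kernel over the continuous inputs multiplied elementwise by a positive definite group-similarity matrix over the finite label set. Since the Schur (elementwise) product of two positive semi-definite matrices is positive semi-definite, the result is a valid covariance.

```python
import numpy as np

def mggp_cov(X1, g1, X2, g2, ls=1.0, group_cov=None):
    """Covariance between (input, group) pairs on R^p x C.

    X1, X2: (n, p) input arrays; g1, g2: integer group labels;
    group_cov: positive definite matrix over the group labels."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    k_input = np.exp(-0.5 * sq / ls**2)          # RBF over continuous inputs
    return k_input * group_cov[np.ix_(g1, g2)]   # Schur product with group part

# Two groups that share strength: off-diagonal 0.7 encodes their similarity.
A = np.array([[1.0, 0.7],
              [0.7, 1.0]])
X = np.random.default_rng(0).normal(size=(6, 2))
g = np.array([0, 0, 0, 1, 1, 1])
K = mggp_cov(X, g, X, g, group_cov=A)

# K is symmetric positive semi-definite, hence a valid GP covariance.
print(np.min(np.linalg.eigvalsh(K)))
```

Shrinking the off-diagonal of `group_cov` toward 0 decouples the groups into independent GPs; pushing it toward 1 pools them into a single GP, which is the "sharing strength" dial the abstract refers to.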
Asymptotic Inference for Multi-Stage Stationary Treatment Policy with Variable Selection
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Daiqi Gao, Yufeng Liu, Donglin Zeng
Dynamic treatment regimes or policies are a sequence of decision functions over multiple stages, tailored to individual features. One important class of treatment policies in practice, namely multi-stage stationary treatment policies, prescribes treatment assignment probabilities using the same decision function across stages, where the decision is based on the same set of features consisting of time-evolving variables (e.g., routinely collected disease biomarkers). Although there has been extensive literature on constructing valid inference for the value function associated with dynamic treatment policies, little work has focused on the policies themselves, especially in the presence of high-dimensional features. We aim to fill that gap in this work. Specifically, we first obtain the multi-stage stationary treatment policy by minimizing the negative augmented inverse probability weighted estimator of the value function to increase asymptotic efficiency. An L1 penalty is applied to the policy parameters to select important features. We then construct one-step improvements of the policy parameter estimators for valid inference. Theoretically, we show that the improved estimators are asymptotically normal even if the nuisance parameters are estimated at a slow convergence rate and the dimension of the features increases with the sample size. Our numerical studies demonstrate that the proposed method estimates a sparse policy with a near-optimal value function and conducts valid inference for the policy parameters.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987690/pdf/
Citations: 0
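For intuition, a single-stage version of the augmented inverse probability weighted (AIPW) value estimator can be sketched as follows. This is illustrative only: the paper's estimator is multi-stage and L1-penalized, and the toy data-generating model below is invented.

```python
import numpy as np

def aipw_value(A, Y, pi_policy, prop, mu1, mu0):
    """AIPW estimate of the value of a stochastic policy pi_policy = P(treat | X).

    A: observed binary treatment; Y: outcome;
    prop: fitted propensity P(A=1 | X); mu1, mu0: fitted outcome models."""
    # Plug-in term: expected outcome if treatment followed the policy.
    model = pi_policy * mu1 + (1 - pi_policy) * mu0
    # Augmentation: reweighted residuals correct bias in the outcome models.
    w = np.where(A == 1, pi_policy / prop, (1 - pi_policy) / (1 - prop))
    mu_a = np.where(A == 1, mu1, mu0)
    return np.mean(model + w * (Y - mu_a))

# Toy check: randomized treatment, outcome Y = A + noise, always-treat policy.
rng = np.random.default_rng(1)
n = 4000
A = rng.binomial(1, 0.5, n)
Y = A + rng.normal(0, 1, n)
est = aipw_value(A, Y, pi_policy=np.ones(n), prop=np.full(n, 0.5),
                 mu1=np.ones(n), mu0=np.zeros(n))
# The true value of "always treat" here is E[Y | A=1] = 1.
print(est)
```

The doubly robust structure is visible in the two terms: if the outcome models `mu1`/`mu0` are correct, the augmentation term has mean zero; if the propensity `prop` is correct, the weighting removes the outcome-model bias instead.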
Bayesian Data Sketching for Varying Coefficient Regression Models
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Rajarshi Guhaniyogi, Laura Baracaldo, Sudipto Banerjee
Varying coefficient models are popular for estimating nonlinear regression functions in functional data models. Their Bayesian variants have received limited attention in large data applications, primarily due to prohibitively slow posterior computations using Markov chain Monte Carlo (MCMC) algorithms. We introduce Bayesian data sketching for varying coefficient models to obviate the computational challenges presented by large sample sizes. To address the challenges of analyzing large data, we compress the functional response vector and predictor matrix by a random linear transformation to achieve dimension reduction, and conduct inference on the compressed data. Our approach distinguishes itself from several existing methods for analyzing large functional data in that it requires neither the development of new models nor new algorithms, nor any specialized computational hardware, while delivering fully model-based Bayesian inference. Well-established methods and algorithms for varying coefficient regression models can be applied to the compressed data. We establish posterior contraction rates for estimating the varying coefficients and for predicting the outcome at new locations under the randomly compressed data model. We use simulation experiments and an analysis of remotely sensed vegetation data to empirically illustrate the inferential and computational efficiency of our approach.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12666391/pdf/
Citations: 0
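The compression step can be sketched in a few lines (illustrative; the paper treats functional responses and varying coefficients, whereas this toy uses plain linear regression): multiply the response and design by a random matrix with m ≪ n rows, then fit any standard estimator to the compressed data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 5000, 3, 200               # m << n rows survive the sketch
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(0, 0.1, n)

# Gaussian sketching matrix; rows scaled so Phi approximately preserves
# inner products (a Johnson-Lindenstrauss-style random projection).
Phi = rng.normal(0, 1 / np.sqrt(m), size=(m, n))
X_c, y_c = Phi @ X, Phi @ y          # compressed data: m x p and m

# Any off-the-shelf estimator now runs on m rows instead of n.
beta_hat = np.linalg.lstsq(X_c, y_c, rcond=None)[0]
print(beta_hat)
```

The same idea carries over to the Bayesian setting: the posterior is computed from `(X_c, y_c)`, so MCMC cost scales with the sketch size m rather than the raw sample size n.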
DisC²o-HD: Distributed causal inference with covariates shift for analyzing real-world high-dimensional data
IF 4.3, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Jiayi Tong, Jie Hu, George Hripcsak, Yang Ning, Yong Chen
High-dimensional healthcare data, such as electronic health records (EHR) and claims data, present two primary challenges: the large number of variables and the need to consolidate data from multiple clinical sites. A third key challenge is potential heterogeneity in the form of covariate shift. In this paper, we propose a distributed learning algorithm, named DisC²o-HD, that accounts for covariate shift when estimating the average treatment effect (ATE) from high-dimensional data. Leveraging the surrogate likelihood method, our method calibrates the estimates of the propensity score and outcome models to approximately attain the desired covariate balancing property, while accounting for covariate shift across multiple clinical sites. We show that our distributed covariate balancing propensity score estimator can approximate the pooled estimator obtained by pooling the data from all sites together. The proposed estimator remains consistent if either the propensity score model or the outcome regression model is correctly specified, and it achieves the semiparametric efficiency bound when both are correctly specified. We conduct simulation studies to demonstrate the performance of the proposed algorithm, and we apply it to a real-world data set to demonstrate its readiness for implementation and its validity.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12269483/pdf/
Citations: 0
Efficient and Robust Semi-supervised Estimation of Average Treatment Effect with Partially Annotated Treatment and Response
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Jue Hou, Rajarshi Mukherjee, Tianxi Cai
A notable challenge in leveraging Electronic Health Records (EHR) for treatment effect assessment is the lack of precise information on important clinical variables, including the treatment received and the response. In many studies, neither the treatment nor the response can be accurately captured by readily available EHR features; both require labor-intensive manual chart review to annotate precisely, which limits the number of available gold-standard labels on these key variables. We consider average treatment effect (ATE) estimation when (1) exact treatment and outcome variables are observed together only in a small labeled subset, and (2) noisy surrogates of treatment and outcome, such as relevant prescription and diagnosis codes, along with potential confounders, are observed for all subjects. We derive the efficient influence function for the ATE and use it to construct a semi-supervised multiple machine learning (SMMAL) estimator. We show that our SMMAL ATE estimator is semiparametric efficient with B-spline regression under low-dimensional smooth models, and we develop adaptive sparsity/model doubly robust estimation under high-dimensional logistic propensity score and outcome regression models. Results from simulation studies demonstrate the validity of our SMMAL method and its superiority over supervised and unsupervised benchmarks. We apply SMMAL to the assessment of targeted therapies for metastatic colorectal cancer in comparison to chemotherapy.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12671556/pdf/
Citations: 0
Bayesian Sparse Gaussian Mixture Model for Clustering in High Dimensions
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Dapeng Yao, Fangzheng Xie, Yanxun Xu
We study the sparse high-dimensional Gaussian mixture model when the number of clusters is allowed to grow with the sample size. A minimax lower bound for parameter estimation is established, and we show that a constrained maximum likelihood estimator achieves it. However, this optimization-based estimator is computationally intractable because the objective function is highly nonconvex and the feasible set involves discrete structures. To address the computational challenge, we propose a computationally tractable Bayesian approach to estimating high-dimensional Gaussian mixtures whose cluster centers exhibit sparsity, using a continuous spike-and-slab prior. We further prove that the posterior contraction rate of the proposed Bayesian method is minimax optimal. The mis-clustering rate is obtained as a by-product using tools from matrix perturbation theory. The proposed Bayesian sparse Gaussian mixture model does not require pre-specifying the number of clusters, which can be adaptively estimated. The validity and usefulness of the proposed method are demonstrated through simulation studies and the analysis of a real-world single-cell RNA sequencing data set.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12965251/pdf/
Citations: 0
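For intuition, here is a hedged sketch of the continuous spike-and-slab mechanism on a single coordinate of a cluster center (all hyperparameter values below are invented for illustration): with a tiny spike variance and a diffuse slab, the posterior probability of the slab component cleanly separates near-zero coordinates from truly nonzero ones.

```python
import numpy as np

def slab_prob(xbar, n, sigma2=1.0, v0=1e-4, v1=1.0, w=0.5):
    """Posterior P(slab | data) for the mean of n noisy observations.

    Prior on the center coordinate: (1 - w) N(0, v0) + w N(0, v1),
    with spike variance v0 << slab variance v1 (illustrative values)."""
    def marg(v):
        # Marginal density of the sample mean under one mixture component.
        s = sigma2 / n + v
        return np.exp(-0.5 * xbar**2 / s) / np.sqrt(2 * np.pi * s)
    num = w * marg(v1)
    return num / (num + (1 - w) * marg(v0))

print(slab_prob(0.02, 100))   # near-zero coordinate: spike dominates
print(slab_prob(1.5, 100))    # large coordinate: slab dominates
```

Applied coordinate-wise to the cluster centers inside a Gibbs sampler, this is what lets the model shrink irrelevant dimensions toward zero while leaving the informative ones untouched.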
Efficient and Robust Transfer Learning of Optimal Individualized Treatment Regimes with Right-Censored Survival Data
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Pan Zhao, Julie Josse, Shu Yang
An individualized treatment regime (ITR) is a decision rule that assigns treatments based on patients' characteristics. The value function of an ITR is the expected outcome in a counterfactual world had the ITR been implemented. Recently, there has been increasing interest in combining heterogeneous data sources, for example leveraging the complementary features of randomized controlled trial (RCT) data and a large observational study (OS). Usually, a covariate shift exists between the source and target populations, rendering the source-optimal ITR suboptimal for the target population. We present an efficient and robust transfer learning framework for estimating the optimal ITR with right-censored survival data that generalizes well to the target population. The value function accommodates a broad class of functionals of survival distributions, including survival probabilities and restricted mean survival times (RMSTs). We propose a doubly robust estimator of the value function, and the optimal ITR is learned by maximizing the value function within a pre-specified class of ITRs. We establish the cubic rate of convergence for the estimated parameter indexing the optimal ITR, and show that the proposed optimal value estimator is consistent and asymptotically normal even when flexible machine learning methods are used for nuisance parameter estimation. We evaluate the empirical performance of the proposed method through simulation studies and a real data application of sodium bicarbonate therapy for patients with severe metabolic acidaemia in the intensive care unit (ICU), combining an RCT and an observational study with heterogeneity.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12974684/pdf/
Citations: 0
Directed Cyclic Graphs for Simultaneous Discovery of Time-Lagged and Instantaneous Causality from Longitudinal Data Using Instrumental Variables
IF 5.2, CAS Tier 3, Computer Science
Journal of Machine Learning Research Pub Date: 2025-01-01
Wei Jin, Yang Ni, Amanda B Spence, Leah H Rubin, Yanxun Xu
We consider the problem of causal discovery from longitudinal observational data. We develop a novel framework that simultaneously discovers time-lagged causality and possibly cyclic instantaneous causality. Under common causal discovery assumptions, combined with additional instrumental information typically available in longitudinal data, we prove that the proposed model is generally identifiable. To the best of our knowledge, this is the first causal identification theory for directed graphs with general cyclic patterns that achieves unique causal identifiability. Structural learning is carried out in a fully Bayesian fashion. Through extensive simulations and an application to the Women's Interagency HIV Study, we demonstrate the identifiability, utility, and superiority of the proposed model against state-of-the-art alternative methods.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12700356/pdf/
Citations: 0