arXiv - STAT - Machine Learning: Latest Publications

Latent Space Score-based Diffusion Model for Probabilistic Multivariate Time Series Imputation
arXiv - STAT - Machine Learning | Pub Date: 2024-09-13 | arXiv:2409.08917
Guojun Liang, Najmeh Abiri, Atiye Sadat Hashemi, Jens Lundström, Stefan Byttner, Prayag Tiwari
{"title":"Latent Space Score-based Diffusion Model for Probabilistic Multivariate Time Series Imputation","authors":"Guojun Liang, Najmeh Abiri, Atiye Sadat Hashemi, Jens Lundström, Stefan Byttner, Prayag Tiwari","doi":"arxiv-2409.08917","DOIUrl":"https://doi.org/arxiv-2409.08917","url":null,"abstract":"Accurate imputation is essential for the reliability and success of\u0000downstream tasks. Recently, diffusion models have attracted great attention in\u0000this field. However, these models neglect the latent distribution in a\u0000lower-dimensional space derived from the observed data, which limits the\u0000generative capacity of the diffusion model. Additionally, dealing with the\u0000original missing data without labels becomes particularly problematic. To\u0000address these issues, we propose the Latent Space Score-Based Diffusion Model\u0000(LSSDM) for probabilistic multivariate time series imputation. Observed values\u0000are projected onto low-dimensional latent space and coarse values of the\u0000missing data are reconstructed without knowing their ground truth values by\u0000this unsupervised learning approach. Finally, the reconstructed values are fed\u0000into a conditional diffusion model to obtain the precise imputed values of the\u0000time series. In this way, LSSDM not only possesses the power to identify the\u0000latent distribution but also seamlessly integrates the diffusion model to\u0000obtain the high-fidelity imputed values and assess the uncertainty of the\u0000dataset. Experimental results demonstrate that LSSDM achieves superior\u0000imputation performance while also providing a better explanation and\u0000uncertainty analysis of the imputation mechanism. The website of the code is\u0000textit{https://github.com/gorgen2020/LSSDM_imputation}.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
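A minimal sketch of LSSDM's two-stage idea, assuming a toy autoencoder as the latent reconstruction model and a simple splice standing in for the conditional diffusion refinement (all names are illustrative; this is not the released LSSDM code):

```python
# Illustrative two-stage imputation sketch (NOT the paper's implementation):
# stage 1 reconstructs coarse values via an autoencoder trained on observed
# entries only; stage 2 stands in for the conditional diffusion refinement.
import torch
import torch.nn as nn

class CoarseImputer(nn.Module):
    def __init__(self, n_features, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, latent_dim), nn.ReLU())
        self.dec = nn.Linear(latent_dim, n_features)

    def forward(self, x):
        return self.dec(self.enc(x))

def train_coarse(model, x, mask, epochs=200, lr=1e-2):
    # mask: 1 = observed, 0 = missing; the loss is computed on observed entries
    # only, so ground truth for the missing values is never needed (unsupervised).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon = model(x * mask)
        loss = ((recon - x).pow(2) * mask).sum() / mask.sum()
        loss.backward()
        opt.step()

torch.manual_seed(0)
x = torch.randn(256, 5)                    # toy multivariate series (flattened windows)
mask = (torch.rand_like(x) > 0.2).float()  # ~20% missing at random
model = CoarseImputer(n_features=5)
train_coarse(model, x, mask)
coarse = model(x * mask).detach()
# Stage 2 (stand-in): a conditional diffusion model would refine `coarse`
# conditioned on the observed entries; here we simply splice them together.
imputed = mask * x + (1 - mask) * coarse
```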
Introducing CausalBench: A Flexible Benchmark Framework for Causal Analysis and Machine Learning
arXiv - STAT - Machine Learning | Pub Date: 2024-09-12 | arXiv:2409.08419
Ahmet Kapkiç, Pratanu Mandal, Shu Wan, Paras Sheth, Abhinav Gorantla, Yoonhyuk Choi, Huan Liu, K. Selçuk Candan
{"title":"Introducing CausalBench: A Flexible Benchmark Framework for Causal Analysis and Machine Learning","authors":"Ahmet Kapkiç, Pratanu Mandal, Shu Wan, Paras Sheth, Abhinav Gorantla, Yoonhyuk Choi, Huan Liu, K. Selçuk Candan","doi":"arxiv-2409.08419","DOIUrl":"https://doi.org/arxiv-2409.08419","url":null,"abstract":"While witnessing the exceptional success of machine learning (ML)\u0000technologies in many applications, users are starting to notice a critical\u0000shortcoming of ML: correlation is a poor substitute for causation. The\u0000conventional way to discover causal relationships is to use randomized\u0000controlled experiments (RCT); in many situations, however, these are\u0000impractical or sometimes unethical. Causal learning from observational data\u0000offers a promising alternative. While being relatively recent, causal learning\u0000aims to go far beyond conventional machine learning, yet several major\u0000challenges remain. Unfortunately, advances are hampered due to the lack of\u0000unified benchmark datasets, algorithms, metrics, and evaluation service\u0000interfaces for causal learning. In this paper, we introduce {em CausalBench},\u0000a transparent, fair, and easy-to-use evaluation platform, aiming to (a) enable\u0000the advancement of research in causal learning by facilitating scientific\u0000collaboration in novel algorithms, datasets, and metrics and (b) promote\u0000scientific objectivity, reproducibility, fairness, and awareness of bias in\u0000causal learning research. CausalBench provides services for benchmarking data,\u0000algorithms, models, and metrics, impacting the needs of a broad of scientific\u0000and engineering disciplines.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Theoretical guarantees in KL for Diffusion Flow Matching
arXiv - STAT - Machine Learning | Pub Date: 2024-09-12 | arXiv:2409.08311
Marta Gentiloni Silveri, Giovanni Conforti, Alain Durmus
{"title":"Theoretical guarantees in KL for Diffusion Flow Matching","authors":"Marta Gentiloni Silveri, Giovanni Conforti, Alain Durmus","doi":"arxiv-2409.08311","DOIUrl":"https://doi.org/arxiv-2409.08311","url":null,"abstract":"Flow Matching (FM) (also referred to as stochastic interpolants or rectified\u0000flows) stands out as a class of generative models that aims to bridge in finite\u0000time the target distribution $nu^star$ with an auxiliary distribution $mu$,\u0000leveraging a fixed coupling $pi$ and a bridge which can either be\u0000deterministic or stochastic. These two ingredients define a path measure which\u0000can then be approximated by learning the drift of its Markovian projection. The\u0000main contribution of this paper is to provide relatively mild assumptions on\u0000$nu^star$, $mu$ and $pi$ to obtain non-asymptotics guarantees for Diffusion\u0000Flow Matching (DFM) models using as bridge the conditional distribution\u0000associated with the Brownian motion. More precisely, we establish bounds on the\u0000Kullback-Leibler divergence between the target distribution and the one\u0000generated by such DFM models under moment conditions on the score of\u0000$nu^star$, $mu$ and $pi$, and a standard $L^2$-drift-approximation error\u0000assumption.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
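To convey the flavor of the result, here is a schematic Girsanov-type bound (an editor's paraphrase under simplified assumptions, not the paper's exact theorem statement):

```latex
% Schematic only: if the learned drift \hat{b} approximates the true drift b of
% the Markovian projection in L^2 along the path measure, a Girsanov-type
% argument converts the drift error into a KL bound on the generated law \hat{\nu}:
\mathbb{E}\left[ \int_0^T \bigl\| \hat{b}_t(X_t) - b_t(X_t) \bigr\|^2 \,\mathrm{d}t \right] \le \varepsilon^2
\quad \Longrightarrow \quad
\operatorname{KL}\bigl( \nu^\star \,\big\|\, \hat{\nu} \bigr) \lesssim \varepsilon^2 + \text{(discretization terms)} .
```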
Federated One-Shot Ensemble Clustering
arXiv - STAT - Machine Learning | Pub Date: 2024-09-12 | arXiv:2409.08396
Rui Duan, Xin Xiong, Jueyi Liu, Katherine P. Liao, Tianxi Cai
{"title":"Federated One-Shot Ensemble Clustering","authors":"Rui Duan, Xin Xiong, Jueyi Liu, Katherine P. Liao, Tianxi Cai","doi":"arxiv-2409.08396","DOIUrl":"https://doi.org/arxiv-2409.08396","url":null,"abstract":"Cluster analysis across multiple institutions poses significant challenges\u0000due to data-sharing restrictions. To overcome these limitations, we introduce\u0000the Federated One-shot Ensemble Clustering (FONT) algorithm, a novel solution\u0000tailored for multi-site analyses under such constraints. FONT requires only a\u0000single round of communication between sites and ensures privacy by exchanging\u0000only fitted model parameters and class labels. The algorithm combines locally\u0000fitted clustering models into a data-adaptive ensemble, making it broadly\u0000applicable to various clustering techniques and robust to differences in\u0000cluster proportions across sites. Our theoretical analysis validates the\u0000effectiveness of the data-adaptive weights learned by FONT, and simulation\u0000studies demonstrate its superior performance compared to existing benchmark\u0000methods. We applied FONT to identify subgroups of patients with rheumatoid\u0000arthritis across two health systems, revealing improved consistency of patient\u0000clusters across sites, while locally fitted clusters proved less transferable.\u0000FONT is particularly well-suited for real-world applications with stringent\u0000communication and privacy constraints, offering a scalable and practical\u0000solution for multi-site clustering.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
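A rough sketch of the one-shot ensemble idea under a strong simplification: sites share only fitted KMeans centroids, and a site combines the induced partitions through an unweighted co-association matrix (FONT instead learns data-adaptive weights; `site_data` and all names here are illustrative, not the authors' code):

```python
# One-shot ensemble sketch (illustrative, not the FONT algorithm itself).
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

rng = np.random.default_rng(0)
site_data = [rng.normal(loc=c, size=(100, 2)) for c in (0.0, 3.0)]  # two sites

# Round 1 (the only communication round): each site shares fitted centroids.
shared_centroids = [KMeans(n_clusters=2, n_init=10).fit(X).cluster_centers_
                    for X in site_data]

# A given site scores every shared model on its own local data.
X_local = site_data[0]
partitions = [np.argmin(((X_local[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
              for C in shared_centroids]

# Co-association matrix: fraction of member models putting points i and j in
# the same cluster (FONT would weight the members data-adaptively instead).
coassoc = np.mean([np.equal.outer(p, p) for p in partitions], axis=0)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(coassoc)
```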
Wasserstein Distributionally Robust Multiclass Support Vector Machine
arXiv - STAT - Machine Learning | Pub Date: 2024-09-12 | arXiv:2409.08409
Michael Ibrahim, Heraldo Rozas, Nagi Gebraeel
{"title":"Wasserstein Distributionally Robust Multiclass Support Vector Machine","authors":"Michael Ibrahim, Heraldo Rozas, Nagi Gebraeel","doi":"arxiv-2409.08409","DOIUrl":"https://doi.org/arxiv-2409.08409","url":null,"abstract":"We study the problem of multiclass classification for settings where data\u0000features $mathbf{x}$ and their labels $mathbf{y}$ are uncertain. We identify\u0000that distributionally robust one-vs-all (OVA) classifiers often struggle in\u0000settings with imbalanced data. To address this issue, we use Wasserstein\u0000distributionally robust optimization to develop a robust version of the\u0000multiclass support vector machine (SVM) characterized by the Crammer-Singer\u0000(CS) loss. First, we prove that the CS loss is bounded from above by a\u0000Lipschitz continuous function for all $mathbf{x} in mathcal{X}$ and\u0000$mathbf{y} in mathcal{Y}$, then we exploit strong duality results to express\u0000the dual of the worst-case risk problem, and we show that the worst-case risk\u0000minimization problem admits a tractable convex reformulation due to the\u0000regularity of the CS loss. Moreover, we develop a kernel version of our\u0000proposed model to account for nonlinear class separation, and we show that it\u0000admits a tractable convex upper bound. We also propose a projected subgradient\u0000method algorithm for a special case of our proposed linear model to improve\u0000scalability. Our numerical experiments demonstrate that our model outperforms\u0000state-of-the art OVA models in settings where the training data is highly\u0000imbalanced. We also show through experiments on popular real-world datasets\u0000that our proposed model often outperforms its regularized counterpart as the\u0000first accounts for uncertain labels unlike the latter.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
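For reference, the Crammer-Singer loss that the robust model is built around is the standard multiclass hinge loss below; the paper's Wasserstein DRO reformulation around it is not reproduced here:

```python
# Standard Crammer-Singer multiclass hinge loss:
#   max(0, max_{j != y} (1 + w_j . x - w_y . x))
import numpy as np

def crammer_singer_loss(W, x, y):
    """W: (K, d) class weight matrix; x: (d,) features; y: true class index."""
    scores = W @ x
    margins = 1.0 + scores - scores[y]  # margin violation against each class
    margins[y] = 0.0                    # no penalty for the true class
    return max(0.0, margins.max())

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
print(crammer_singer_loss(W, x, y=1))
```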
Localized Schrödinger Bridge Sampler
arXiv - STAT - Machine Learning | Pub Date: 2024-09-12 | arXiv:2409.07968
Georg A. Gottwald, Sebastian Reich
{"title":"Localized Schrödinger Bridge Sampler","authors":"Georg A. Gottwald, Sebastian Reich","doi":"arxiv-2409.07968","DOIUrl":"https://doi.org/arxiv-2409.07968","url":null,"abstract":"We consider the generative problem of sampling from an unknown distribution\u0000for which only a sufficiently large number of training samples are available.\u0000In this paper, we build on previous work combining Schr\"odinger bridges and\u0000Langevin dynamics. A key bottleneck of this approach is the exponential\u0000dependence of the required training samples on the dimension, $d$, of the\u0000ambient state space. We propose a localization strategy which exploits\u0000conditional independence of conditional expectation values. Localization thus\u0000replaces a single high-dimensional Schr\"odinger bridge problem by $d$\u0000low-dimensional Schr\"odinger bridge problems over the available training\u0000samples. As for the original approach, the localized sampler is stable and\u0000geometric ergodic. The sampler also naturally extends to conditional sampling\u0000and to Bayesian inference. We demonstrate the performance of our proposed\u0000scheme through experiments on a Gaussian problem with increasing dimensions and\u0000on a stochastic subgrid-scale parametrization conditional sampling problem.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
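Schematically, the localization step can be written as follows (an illustrative paraphrase; $B_i$ stands for the quantity whose conditional expectation defines the $i$-th drift component, and $N(i)$ for a small neighborhood of coordinates):

```latex
% Localization: conditional independence lets the i-th drift component depend
% only on the coordinates in a neighborhood N(i), so one d-dimensional
% Schrödinger bridge problem splits into d low-dimensional ones over the same
% training samples:
b_i(x) = \mathbb{E}\bigl[ B_i \mid X = x \bigr]
\approx \mathbb{E}\bigl[ B_i \mid X_{N(i)} = x_{N(i)} \bigr],
\qquad i = 1, \dots, d .
```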
Dataset-Free Weight-Initialization on Restricted Boltzmann Machine
arXiv - STAT - Machine Learning | Pub Date: 2024-09-12 | arXiv:2409.07708
Muneki Yasuda, Ryosuke Maeno, Chako Takahashi
{"title":"Dataset-Free Weight-Initialization on Restricted Boltzmann Machine","authors":"Muneki Yasuda, Ryosuke Maeno, Chako Takahashi","doi":"arxiv-2409.07708","DOIUrl":"https://doi.org/arxiv-2409.07708","url":null,"abstract":"In feed-forward neural networks, dataset-free weight-initialization method\u0000such as LeCun, Xavier (or Glorot), and He initializations have been developed.\u0000These methods randomly determine the initial values of weight parameters based\u0000on specific distributions (e.g., Gaussian or uniform distributions) without\u0000using training datasets. To the best of the authors' knowledge, such a\u0000dataset-free weight-initialization method is yet to be developed for restricted\u0000Boltzmann machines (RBMs), which are probabilistic neural networks consisting\u0000of two layers, In this study, we derive a dataset-free weight-initialization\u0000method for Bernoulli--Bernoulli RBMs based on a statistical mechanical\u0000analysis. In the proposed weight-initialization method, the weight parameters\u0000are drawn from a Gaussian distribution with zero mean. The standard deviation\u0000of the Gaussian distribution is optimized based on our hypothesis which is that\u0000a standard deviation providing a larger layer correlation (LC) between the two\u0000layers improves the learning efficiency. The expression of the LC is derived\u0000based on a statistical mechanical analysis. The optimal value of the standard\u0000deviation corresponds to the maximum point of the LC. The proposed\u0000weight-initialization method is identical to Xavier initialization in a\u0000specific case (i.e., in the case the sizes of the two layers are the same, the\u0000random variables of the layers are ${-1,1}$-binary, and all bias parameters\u0000are zero).","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
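A sketch of what the initialization amounts to in the Xavier-coinciding special case the abstract mentions; the paper's actual rule optimizes the standard deviation by maximizing the derived layer-correlation expression, which is not reproduced here:

```python
# Dataset-free RBM initialization sketch (illustrative). In the cited special
# case (equal layer sizes, {-1,1}-binary units, zero biases) the optimal rule
# coincides with Xavier initialization, used here as a stand-in.
import numpy as np

def init_rbm_weights(n_visible, n_hidden, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 / (n_visible + n_hidden))  # Xavier std (the special case)
    W = rng.normal(0.0, sigma, size=(n_visible, n_hidden))  # zero-mean Gaussian
    b_v = np.zeros(n_visible)  # bias parameters zero, as in the special case
    b_h = np.zeros(n_hidden)
    return W, b_v, b_h

W, b_v, b_h = init_rbm_weights(784, 256)
```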
Synthetic continued pretraining
arXiv - STAT - Machine Learning | Pub Date: 2024-09-11 | arXiv:2409.07431
Zitong Yang, Neil Band, Shuangping Li, Emmanuel Candès, Tatsunori Hashimoto
{"title":"Synthetic continued pretraining","authors":"Zitong Yang, Neil Band, Shuangping Li, Emmanuel Candès, Tatsunori Hashimoto","doi":"arxiv-2409.07431","DOIUrl":"https://doi.org/arxiv-2409.07431","url":null,"abstract":"Pretraining on large-scale, unstructured internet text has enabled language\u0000models to acquire a significant amount of world knowledge. However, this\u0000knowledge acquisition is data-inefficient -- to learn a given fact, models must\u0000be trained on hundreds to thousands of diverse representations of it. This\u0000poses a challenge when adapting a pretrained model to a small corpus of\u0000domain-specific documents, where each fact may appear rarely or only once. We\u0000propose to bridge this gap with synthetic continued pretraining: using the\u0000small domain-specific corpus to synthesize a large corpus more amenable to\u0000learning, and then performing continued pretraining on the synthesized corpus.\u0000We instantiate this proposal with EntiGraph, a synthetic data augmentation\u0000algorithm that extracts salient entities from the source documents and then\u0000generates diverse text by drawing connections between the sampled entities.\u0000Synthetic continued pretraining using EntiGraph enables a language model to\u0000answer questions and follow generic instructions related to the source\u0000documents without access to them. If instead, the source documents are\u0000available at inference time, we show that the knowledge acquired through our\u0000approach compounds with retrieval-augmented generation. To better understand\u0000these results, we build a simple mathematical model of EntiGraph, and show how\u0000synthetic data augmentation can \"rearrange\" knowledge to enable more\u0000data-efficient learning.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
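A skeleton of the EntiGraph-style synthesis loop, with `extract_entities` and `llm_generate` as hypothetical stand-ins for an entity extractor and a language-model call (illustrative only, not the authors' code):

```python
# EntiGraph-style synthesis skeleton: extract salient entities from the small
# corpus, then synthesize diverse text about sampled entity pairs to enlarge
# the corpus used for continued pretraining.
import itertools
import random

def extract_entities(document: str) -> list[str]:
    # Hypothetical stand-in: a real pipeline would use an LLM or NER model.
    return [tok for tok in document.split() if tok.istitle()]

def llm_generate(prompt: str) -> str:
    # Hypothetical stand-in for a language-model call.
    return f"[synthetic passage for: {prompt}]"

def entigraph_synthesize(documents, n_samples=5, seed=0):
    random.seed(seed)
    entities = sorted({e for d in documents for e in extract_entities(d)})
    pairs = list(itertools.combinations(entities, 2))
    corpus = []
    for e1, e2 in random.sample(pairs, min(n_samples, len(pairs))):
        prompt = f"Analyze the relation between {e1} and {e2} in the source documents."
        corpus.append(llm_generate(prompt))
    return corpus  # continued pretraining then runs on this synthetic corpus

print(entigraph_synthesize(["Alice sold Bob the Acme patent in Geneva."]))
```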
Toward Model-Agnostic Detection of New Physics Using Data-Driven Signal Regions
arXiv - STAT - Machine Learning | Pub Date: 2024-09-11 | arXiv:2409.06960
Soheun Yi, John Alison, Mikael Kuusela
{"title":"Toward Model-Agnostic Detection of New Physics Using Data-Driven Signal Regions","authors":"Soheun Yi, John Alison, Mikael Kuusela","doi":"arxiv-2409.06960","DOIUrl":"https://doi.org/arxiv-2409.06960","url":null,"abstract":"In the search for new particles in high-energy physics, it is crucial to\u0000select the Signal Region (SR) in such a way that it is enriched with signal\u0000events if they are present. While most existing search methods set the region\u0000relying on prior domain knowledge, it may be unavailable for a completely novel\u0000particle that falls outside the current scope of understanding. We address this\u0000issue by proposing a method built upon a model-agnostic but often realistic\u0000assumption about the localized topology of the signal events, in which they are\u0000concentrated in a certain area of the feature space. Considering the signal\u0000component as a localized high-frequency feature, our approach employs the\u0000notion of a low-pass filter. We define the SR as an area which is most affected\u0000when the observed events are smeared with additive random noise. We overcome\u0000challenges in density estimation in the high-dimensional feature space by\u0000learning the density ratio of events that potentially include a signal to the\u0000complementary observation of events that closely resemble the target events but\u0000are free of any signals. By applying our method to simulated $mathrm{HH}\u0000rightarrow 4b$ events, we demonstrate that the method can efficiently identify\u0000a data-driven SR in a high-dimensional feature space in which a high portion of\u0000signal events concentrate.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
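The low-pass-filter intuition can be made concrete with a classifier-based density-ratio sketch (an illustrative rendition on toy data, not the paper's estimator): smear the events with additive noise, learn to distinguish originals from smeared copies, and flag the region where the estimated ratio is largest:

```python
# Smearing flattens a localized, high-frequency signal bump much more than the
# smooth background, so the ratio p(x) / p_smeared(x) peaks inside a candidate SR.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
background = rng.normal(0.0, 3.0, size=(5000, 2))
signal = rng.normal(1.5, 0.15, size=(250, 2))            # localized bump
events = np.vstack([background, signal])

smeared = events + rng.normal(0.0, 0.5, size=events.shape)  # additive noise

# Probabilistic classifier between originals (1) and smeared copies (0)
# approximates the density ratio via p / (1 - p).
X = np.vstack([events, smeared])
y = np.concatenate([np.ones(len(events)), np.zeros(len(smeared))])
clf = HistGradientBoostingClassifier(random_state=0).fit(X, y)

p = clf.predict_proba(events)[:, 1]
ratio = p / (1.0 - p)                        # ~ p_events(x) / p_smeared(x)
sr_mask = ratio > np.quantile(ratio, 0.95)   # data-driven signal-region candidate
print(events[sr_mask].mean(axis=0))          # should sit near the bump at (1.5, 1.5)
```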
A Practical Theory of Generalization in Selectivity Learning
arXiv - STAT - Machine Learning | Pub Date: 2024-09-11 | arXiv:2409.07014
Peizhi Wu, Haoshu Xu, Ryan Marcus, Zachary G. Ives
{"title":"A Practical Theory of Generalization in Selectivity Learning","authors":"Peizhi Wu, Haoshu Xu, Ryan Marcus, Zachary G. Ives","doi":"arxiv-2409.07014","DOIUrl":"https://doi.org/arxiv-2409.07014","url":null,"abstract":"Query-driven machine learning models have emerged as a promising estimation\u0000technique for query selectivities. Yet, surprisingly little is known about the\u0000efficacy of these techniques from a theoretical perspective, as there exist\u0000substantial gaps between practical solutions and state-of-the-art (SOTA) theory\u0000based on the Probably Approximately Correct (PAC) learning framework. In this\u0000paper, we aim to bridge the gaps between theory and practice. First, we\u0000demonstrate that selectivity predictors induced by signed measures are\u0000learnable, which relaxes the reliance on probability measures in SOTA theory.\u0000More importantly, beyond the PAC learning framework (which only allows us to\u0000characterize how the model behaves when both training and test workloads are\u0000drawn from the same distribution), we establish, under mild assumptions, that\u0000selectivity predictors from this class exhibit favorable out-of-distribution\u0000(OOD) generalization error bounds. These theoretical advances provide us with a better understanding of both the\u0000in-distribution and OOD generalization capabilities of query-driven selectivity\u0000learning, and facilitate the design of two general strategies to improve OOD\u0000generalization for existing query-driven selectivity models. We empirically\u0000verify that our techniques help query-driven selectivity models generalize\u0000significantly better to OOD queries both in terms of prediction accuracy and\u0000query latency performance, while maintaining their superior in-distribution\u0000generalization performance.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0