arXiv - STAT - Computation: Latest Publications

Model-Embedded Gaussian Process Regression for Parameter Estimation in Dynamical System
arXiv - STAT - Computation Pub Date: 2024-09-18 DOI: arxiv-2409.11745
Ying Zhou, Jinglai Li, Xiang Zhou, Hongqiao Wang
{"title":"Model-Embedded Gaussian Process Regression for Parameter Estimation in Dynamical System","authors":"Ying Zhou, Jinglai Li, Xiang Zhou, Hongqiao Wang","doi":"arxiv-2409.11745","DOIUrl":"https://doi.org/arxiv-2409.11745","url":null,"abstract":"Identifying dynamical system (DS) is a vital task in science and engineering.\u0000Traditional methods require numerous calls to the DS solver, rendering\u0000likelihood-based or least-squares inference frameworks impractical. For\u0000efficient parameter inference, two state-of-the-art techniques are the kernel\u0000method for modeling and the \"one-step framework\" for jointly inferring unknown\u0000parameters and hyperparameters. The kernel method is a quick and\u0000straightforward technique, but it cannot estimate solutions and their\u0000derivatives, which must strictly adhere to physical laws. We propose a\u0000model-embedded \"one-step\" Bayesian framework for joint inference of unknown\u0000parameters and hyperparameters by maximizing the marginal likelihood. This\u0000approach models the solution and its derivatives using Gaussian process\u0000regression (GPR), taking into account smoothness and continuity properties, and\u0000treats differential equations as constraints that can be naturally integrated\u0000into the Bayesian framework in the linear case. Additionally, we prove the\u0000convergence of the model-embedded Gaussian process regression (ME-GPR) for\u0000theoretical development. Motivated by Taylor expansion, we introduce a\u0000piecewise first-order linearization strategy to handle nonlinear dynamic\u0000systems. We derive estimates and confidence intervals, demonstrating that they\u0000exhibit low bias and good coverage properties for both simulated models and\u0000real data.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
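As a point of reference for the entry above, the sketch below shows only the generic GPR building block that ME-GPR extends: hyperparameters chosen by maximizing the marginal likelihood over a smooth kernel. The use of scikit-learn and the toy data are assumptions of convenience; the model-embedding ODE constraint and the piecewise linearization from the paper are not reproduced here.

```python
# Minimal sketch: plain GP regression with hyperparameters set by maximizing
# the log marginal likelihood -- the building block ME-GPR adds constraints to.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D trajectory observations (illustrative data only).
t_obs = np.linspace(0.0, 10.0, 25).reshape(-1, 1)
x_obs = np.sin(t_obs).ravel() + 0.05 * np.random.default_rng(0).normal(size=25)

# RBF kernel encodes the smoothness/continuity assumptions mentioned in the
# abstract; WhiteKernel absorbs observation noise.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(t_obs, x_obs)  # hyperparameters fitted via the marginal likelihood

t_new = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
mean, std = gpr.predict(t_new, return_std=True)  # posterior mean and uncertainty
```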
A Robust Approach to Gaussian Processes Implementation
arXiv - STAT - Computation Pub Date: 2024-09-17 DOI: arxiv-2409.11577
Juliette Mukangango, Amanda Muyskens, Benjamin W. Priest
{"title":"A Robust Approach to Gaussian Processes Implementation","authors":"Juliette Mukangango, Amanda Muyskens, Benjamin W. Priest","doi":"arxiv-2409.11577","DOIUrl":"https://doi.org/arxiv-2409.11577","url":null,"abstract":"Gaussian Process (GP) regression is a flexible modeling technique used to\u0000predict outputs and to capture uncertainty in the predictions. However, the GP\u0000regression process becomes computationally intensive when the training spatial\u0000dataset has a large number of observations. To address this challenge, we\u0000introduce a scalable GP algorithm, termed MuyGPs, which incorporates nearest\u0000neighbor and leave-one-out cross-validation during training. This approach\u0000enables the evaluation of large spatial datasets with state-of-the-art accuracy\u0000and speed in certain spatial problems. Despite these advantages, conventional\u0000quadratic loss functions used in the MuyGPs optimization such as Root Mean\u0000Squared Error(RMSE), are highly influenced by outliers. We explore the behavior\u0000of MuyGPs in cases involving outlying observations, and subsequently, develop a\u0000robust approach to handle and mitigate their impact. Specifically, we introduce\u0000a novel leave-one-out loss function based on the pseudo-Huber function (LOOPH)\u0000that effectively accounts for outliers in large spatial datasets within the\u0000MuyGPs framework. Our simulation study shows that the \"LOOPH\" loss method\u0000maintains accuracy despite outlying observations, establishing MuyGPs as a\u0000powerful tool for mitigating unusual observation impacts in the large data\u0000regime. In the analysis of U.S. ozone data, MuyGPs provides accurate\u0000predictions and uncertainty quantification, demonstrating its utility in\u0000managing data anomalies. Through these efforts, we advance the understanding of\u0000GP regression in spatial contexts.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142251169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
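The exact LOOPH objective is defined in the paper; the sketch below only illustrates the generic idea of scoring leave-one-out residuals with a pseudo-Huber penalty instead of a squared error, so that outlying residuals grow roughly linearly rather than quadratically. The function names and the variance standardization are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pseudo_huber(residual, delta=1.0):
    """Pseudo-Huber penalty: ~quadratic for small residuals, ~linear for large ones."""
    return delta**2 * (np.sqrt(1.0 + (residual / delta) ** 2) - 1.0)

def loo_pseudo_huber_loss(y, loo_mean, loo_var, delta=1.0):
    """Score leave-one-out predictions with a pseudo-Huber penalty.

    y        : observed responses
    loo_mean : leave-one-out predictive means (e.g. from a GP / MuyGPs model)
    loo_var  : leave-one-out predictive variances, used here to standardize the
               residuals (a modelling choice, not necessarily the paper's form)
    """
    standardized = (y - loo_mean) / np.sqrt(loo_var)
    return np.mean(pseudo_huber(standardized, delta))

# Usage with made-up numbers, including one outlier in y:
y        = np.array([1.0, 2.0, 3.0, 50.0])
loo_mean = np.array([1.1, 1.9, 3.2, 2.8])
loo_var  = np.array([0.2, 0.2, 0.3, 0.3])
print(loo_pseudo_huber_loss(y, loo_mean, loo_var, delta=1.5))
```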
Effects of the entropy source on Monte Carlo simulations
arXiv - STAT - Computation Pub Date: 2024-09-17 DOI: arxiv-2409.11539
Anton Lebedev, Annika Möslein, Olha I. Yaman, Del Rajan, Philip Intallura
{"title":"Effects of the entropy source on Monte Carlo simulations","authors":"Anton Lebedev, Annika Möslein, Olha I. Yaman, Del Rajan, Philip Intallura","doi":"arxiv-2409.11539","DOIUrl":"https://doi.org/arxiv-2409.11539","url":null,"abstract":"In this paper we show how different sources of random numbers influence the\u0000outcomes of Monte Carlo simulations. We compare industry-standard pseudo-random\u0000number generators (PRNGs) to a quantum random number generator (QRNG) and show,\u0000using examples of Monte Carlo simulations with exact solutions, that the QRNG\u0000yields statistically significantly better approximations than the PRNGs. Our\u0000results demonstrate that higher accuracy can be achieved in the commonly known\u0000Monte Carlo method for approximating $pi$. For Buffon's needle experiment, we\u0000further quantify a potential reduction in approximation errors by up to\u0000$1.89times$ for optimal parameter choices when using a QRNG and a reduction of\u0000the sample size by $sim 8times$ for sub-optimal parameter choices. We\u0000attribute the observed higher accuracy to the underlying differences in the\u0000random sampling, where a uniformity analysis reveals a tendency of the QRNG to\u0000sample the solution space more homogeneously. Additionally, we compare the\u0000results obtained with the QRNG and PRNG in solving the non-linear stochastic\u0000Schr\"odinger equation, benchmarked against the analytical solution. We observe\u0000higher accuracy of the approximations of the QRNG and demonstrate that\u0000equivalent results can be achieved at 1/3 to 1/10-th of the costs.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142251168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
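A minimal sketch of the $\pi$-estimation experiment referred to above, with the entropy source abstracted behind a callable so that a PRNG (here NumPy's default generator) or a stream of hardware/quantum-generated uniforms could be swapped in. The QRNG interface shown in the comment is hypothetical, not a real API.

```python
import numpy as np

def estimate_pi(n_samples, uniform_sampler):
    """Estimate pi from the fraction of uniform points in [0,1]^2 that fall
    inside the unit quarter-circle. `uniform_sampler(n)` must return n U(0,1) draws."""
    x = uniform_sampler(n_samples)
    y = uniform_sampler(n_samples)
    inside = (x**2 + y**2) <= 1.0
    return 4.0 * inside.mean()

# PRNG entropy source (an industry-standard generator shipped with NumPy).
prng = np.random.default_rng(seed=42)
prng_sampler = lambda n: prng.random(n)

# A QRNG source would plug in the same way, e.g. by wrapping a buffer of
# hardware-generated uniforms (hypothetical interface):
# qrng_sampler = lambda n: qrng_buffer.next_uniforms(n)

print(estimate_pi(1_000_000, prng_sampler))  # ~3.14, up to Monte Carlo error
```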
HJ-sampler: A Bayesian sampler for inverse problems of a stochastic process by leveraging Hamilton-Jacobi PDEs and score-based generative models
arXiv - STAT - Computation Pub Date: 2024-09-15 DOI: arxiv-2409.09614
Tingwei Meng, Zongren Zou, Jérôme Darbon, George Em Karniadakis
{"title":"HJ-sampler: A Bayesian sampler for inverse problems of a stochastic process by leveraging Hamilton-Jacobi PDEs and score-based generative models","authors":"Tingwei Meng, Zongren Zou, Jérôme Darbon, George Em Karniadakis","doi":"arxiv-2409.09614","DOIUrl":"https://doi.org/arxiv-2409.09614","url":null,"abstract":"The interplay between stochastic processes and optimal control has been\u0000extensively explored in the literature. With the recent surge in the use of\u0000diffusion models, stochastic processes have increasingly been applied to sample\u0000generation. This paper builds on the log transform, known as the Cole-Hopf\u0000transform in Brownian motion contexts, and extends it within a more abstract\u0000framework that includes a linear operator. Within this framework, we found that\u0000the well-known relationship between the Cole-Hopf transform and optimal\u0000transport is a particular instance where the linear operator acts as the\u0000infinitesimal generator of a stochastic process. We also introduce a novel\u0000scenario where the linear operator is the adjoint of the generator, linking to\u0000Bayesian inference under specific initial and terminal conditions. Leveraging\u0000this theoretical foundation, we develop a new algorithm, named the HJ-sampler,\u0000for Bayesian inference for the inverse problem of a stochastic differential\u0000equation with given terminal observations. The HJ-sampler involves two stages:\u0000(1) solving the viscous Hamilton-Jacobi partial differential equations, and (2)\u0000sampling from the associated stochastic optimal control problem. Our proposed\u0000algorithm naturally allows for flexibility in selecting the numerical solver\u0000for viscous HJ PDEs. We introduce two variants of the solver: the\u0000Riccati-HJ-sampler, based on the Riccati method, and the SGM-HJ-sampler, which\u0000utilizes diffusion models. We demonstrate the effectiveness and flexibility of\u0000the proposed methods by applying them to solve Bayesian inverse problems\u0000involving various stochastic processes and prior distributions, including\u0000applications that address model misspecifications and quantifying model\u0000uncertainty.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142251171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
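For reference, the classical Cole-Hopf transform that the abstract builds on is standard textbook material (not taken from the paper): if $S$ solves the viscous Hamilton-Jacobi equation $\partial_t S + \tfrac{1}{2}|\nabla S|^2 = \nu \Delta S$, then $u = \exp\!\big(-S/(2\nu)\big)$ solves the heat equation $\partial_t u = \nu \Delta u$; conversely $S = -2\nu \log u$, which is the log transform referred to above. The paper extends this type of relationship to a more abstract linear-operator framework.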
Reducing Shape-Graph Complexity with Application to Classification of Retinal Blood Vessels and Neurons
arXiv - STAT - Computation Pub Date: 2024-09-13 DOI: arxiv-2409.09168
Benjamin Beaudett, Anuj Srivastava
{"title":"Reducing Shape-Graph Complexity with Application to Classification of Retinal Blood Vessels and Neurons","authors":"Benjamin Beaudett, Anuj Srivastava","doi":"arxiv-2409.09168","DOIUrl":"https://doi.org/arxiv-2409.09168","url":null,"abstract":"Shape graphs are complex geometrical structures commonly found in biological\u0000and anatomical systems. A shape graph is a collection of nodes, some connected\u0000by curvilinear edges with arbitrary shapes. Their high complexity stems from\u0000the large number of nodes and edges and the complex shapes of edges. With an\u0000eye for statistical analysis, one seeks low-complexity representations that\u0000retain as much of the global structures of the original shape graphs as\u0000possible. This paper develops a framework for reducing graph complexity using\u0000hierarchical clustering procedures that replace groups of nodes and edges with\u0000their simpler representatives. It demonstrates this framework using graphs of\u0000retinal blood vessels in two dimensions and neurons in three dimensions. The\u0000paper also presents experiments on classifications of shape graphs using\u0000progressively reduced levels of graph complexity. The accuracy of disease\u0000detection in retinal blood vessels drops quickly when the complexity is\u0000reduced, with accuracy loss particularly associated with discarding terminal\u0000edges. Accuracy in identifying neural cell types remains stable with complexity\u0000reduction.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142251170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
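The paper's framework is built on shape metrics for curvilinear edges; the sketch below illustrates only the broad idea of coarsening a spatially embedded graph by hierarchically clustering nearby nodes and collapsing each cluster into a single representative node. The clustering criterion and the libraries used (SciPy, NetworkX) are assumptions for illustration, not the authors' method.

```python
import numpy as np
import networkx as nx
from scipy.cluster.hierarchy import linkage, fcluster

def coarsen_graph(G, pos, n_clusters):
    """Collapse a spatial graph to at most `n_clusters` super-nodes.

    G   : networkx.Graph with nodes 0..n-1
    pos : (n, d) array of node coordinates
    """
    # Hierarchically cluster node coordinates and cut the dendrogram.
    Z = linkage(pos, method="ward")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")

    # Reduced graph: one node per cluster, an edge whenever any original
    # edge connects two different clusters.
    H = nx.Graph()
    H.add_nodes_from(np.unique(labels))
    for u, v in G.edges():
        cu, cv = labels[u], labels[v]
        if cu != cv:
            H.add_edge(cu, cv)
    return H, labels

# Usage on a small random geometric graph (illustrative only).
rng = np.random.default_rng(0)
pos = rng.random((50, 2))
G = nx.random_geometric_graph(50, radius=0.25, pos={i: tuple(p) for i, p in enumerate(pos)})
H, labels = coarsen_graph(G, pos, n_clusters=8)
print(H.number_of_nodes(), H.number_of_edges())
```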
Statistical Finite Elements via Interacting Particle Langevin Dynamics
arXiv - STAT - Computation Pub Date: 2024-09-11 DOI: arxiv-2409.07101
Alex Glyn-Davies, Connor Duffin, Ieva Kazlauskaite, Mark Girolami, Ö. Deniz Akyildiz
{"title":"Statistical Finite Elements via Interacting Particle Langevin Dynamics","authors":"Alex Glyn-Davies, Connor Duffin, Ieva Kazlauskaite, Mark Girolami, Ö. Deniz Akyildiz","doi":"arxiv-2409.07101","DOIUrl":"https://doi.org/arxiv-2409.07101","url":null,"abstract":"In this paper, we develop a class of interacting particle Langevin algorithms\u0000to solve inverse problems for partial differential equations (PDEs). In\u0000particular, we leverage the statistical finite elements (statFEM) formulation\u0000to obtain a finite-dimensional latent variable statistical model where the\u0000parameter is that of the (discretised) forward map and the latent variable is\u0000the statFEM solution of the PDE which is assumed to be partially observed. We\u0000then adapt a recently proposed expectation-maximisation like scheme,\u0000interacting particle Langevin algorithm (IPLA), for this problem and obtain a\u0000joint estimation procedure for the parameters and the latent variables. We\u0000consider three main examples: (i) estimating the forcing for linear Poisson\u0000PDE, (ii) estimating the forcing for nonlinear Poisson PDE, and (iii)\u0000estimating diffusivity for linear Poisson PDE. We provide computational\u0000complexity estimates for forcing estimation in the linear case. We also provide\u0000comprehensive numerical experiments and preconditioning strategies that\u0000significantly improve the performance, showing that the proposed class of\u0000methods can be the choice for parameter inference in PDE models.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142189454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
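A rough sketch of a generic interacting particle Langevin (IPLA-style) update on a toy latent-variable model, not the statFEM discretization used in the paper. The toy model, step size, and the exact noise scaling on the parameter update are assumptions following the commonly stated IPLA recipe (Langevin moves on the particles plus a particle-averaged gradient step on the parameter); treat it as an illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent-variable model (illustrative, not statFEM):
#   x ~ N(0, 1),   y | x, theta ~ N(theta + x, sigma2)
sigma2 = 0.5
theta_true = 2.0
x_true = rng.normal()
y = theta_true + x_true + np.sqrt(sigma2) * rng.normal()

def grad_x(x, theta):      # d/dx log p(x, y; theta)
    return -x + (y - theta - x) / sigma2

def grad_theta(x, theta):  # d/dtheta log p(x, y; theta)
    return (y - theta - x) / sigma2

N, gamma, n_steps = 100, 1e-2, 5000
X = rng.normal(size=N)     # interacting particle cloud for the latent variable
theta = 0.0

for _ in range(n_steps):
    # Langevin move for every particle, given the current parameter.
    X = X + gamma * grad_x(X, theta) + np.sqrt(2 * gamma) * rng.normal(size=N)
    # Parameter move driven by the particle-averaged gradient, with noise
    # scaled down by the number of particles (the IPLA-style coupling).
    theta = (theta + gamma * grad_theta(X, theta).mean()
             + np.sqrt(2 * gamma / N) * rng.normal())

print("estimated theta:", theta)  # should settle near the marginal MLE (~y here)
```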
Graph sub-sampling for divide-and-conquer algorithms in large networks
arXiv - STAT - Computation Pub Date: 2024-09-11 DOI: arxiv-2409.06994
Eric Yanchenko
{"title":"Graph sub-sampling for divide-and-conquer algorithms in large networks","authors":"Eric Yanchenko","doi":"arxiv-2409.06994","DOIUrl":"https://doi.org/arxiv-2409.06994","url":null,"abstract":"As networks continue to increase in size, current methods must be capable of\u0000handling large numbers of nodes and edges in order to be practically relevant.\u0000Instead of working directly with the entire (large) network, analyzing\u0000sub-networks has become a popular approach. Due to a network's inherent\u0000inter-connectedness, sub-sampling is not a trivial task. While this problem has\u0000gained attention in recent years, it has not received sufficient attention from\u0000the statistics community. In this work, we provide a thorough comparison of\u0000seven graph sub-sampling algorithms by applying them to divide-and-conquer\u0000algorithms for community structure and core-periphery (CP) structure. After\u0000discussing the various algorithms and sub-sampling routines, we derive\u0000theoretical results for the mis-classification rate of the divide-and-conquer\u0000algorithm for CP structure under various sub-sampling schemes. We then perform\u0000extensive experiments on both simulated and real-world data to compare the\u0000various methods. For the community detection task, we found that sampling nodes\u0000uniformly at random yields the best performance. For CP structure on the other\u0000hand, there was no single winner, but algorithms which sampled core nodes at a\u0000higher rate consistently outperformed other sampling routines, e.g., random\u0000edge sampling and random walk sampling. The varying performance of the sampling\u0000algorithms on different tasks demonstrates the importance of carefully\u0000selecting a sub-sampling routine for the specific application.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142189455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
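The abstract reports that sampling nodes uniformly at random worked best for the community-detection task; below is a minimal sketch of that sub-sampling routine using NetworkX. The downstream divide-and-conquer and community-detection steps are outside the scope of this sketch, and the graph used is synthetic.

```python
import random
import networkx as nx

def uniform_node_subsample(G, n_nodes, seed=None):
    """Return the subgraph induced by `n_nodes` nodes drawn uniformly at random."""
    rng = random.Random(seed)
    sampled = rng.sample(list(G.nodes()), n_nodes)
    return G.subgraph(sampled).copy()

# Usage: draw several random sub-networks that a divide-and-conquer method
# would then process independently.
G = nx.erdos_renyi_graph(n=2000, p=0.01, seed=0)
subgraphs = [uniform_node_subsample(G, n_nodes=400, seed=k) for k in range(5)]
print([H.number_of_nodes() for H in subgraphs])
print([H.number_of_edges() for H in subgraphs])
```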
Optimizing VarLiNGAM for Scalable and Efficient Time Series Causal Discovery
arXiv - STAT - Computation Pub Date: 2024-09-09 DOI: arxiv-2409.05500
Ziyang Jiao, Ce Guo, Wayne Luk
{"title":"Optimizing VarLiNGAM for Scalable and Efficient Time Series Causal Discovery","authors":"Ziyang Jiao, Ce Guo, Wayne Luk","doi":"arxiv-2409.05500","DOIUrl":"https://doi.org/arxiv-2409.05500","url":null,"abstract":"Causal discovery is designed to identify causal relationships in data, a task\u0000that has become increasingly complex due to the computational demands of\u0000traditional methods such as VarLiNGAM, which combines Vector Autoregressive\u0000Model with Linear Non-Gaussian Acyclic Model for time series data. This study is dedicated to optimising causal discovery specifically for time\u0000series data, which is common in practical applications. Time series causal\u0000discovery is particularly challenging due to the need to account for temporal\u0000dependencies and potential time lag effects. By designing a specialised dataset\u0000generator and reducing the computational complexity of the VarLiNGAM model from\u0000( O(m^3 cdot n) ) to ( O(m^3 + m^2 cdot n) ), this study significantly\u0000improves the feasibility of processing large datasets. The proposed methods\u0000have been validated on advanced computational platforms and tested across\u0000simulated, real-world, and large-scale datasets, showcasing enhanced efficiency\u0000and performance. The optimised algorithm achieved 7 to 13 times speedup\u0000compared with the original algorithm and around 4.5 times speedup compared with\u0000the GPU-accelerated version on large-scale datasets with feature sizes between\u0000200 and 400. Our methods aim to push the boundaries of current causal discovery\u0000capabilities, making them more robust, scalable, and applicable to real-world\u0000scenarios, thus facilitating breakthroughs in various fields such as healthcare\u0000and finance.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
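VarLiNGAM proceeds in two stages: fit a vector autoregression, then apply LiNGAM to the VAR residuals; the optimization described above targets the cost of that pipeline. The plain-NumPy sketch below covers only the first stage (least-squares VAR fit and residual computation) to make the pipeline's structure concrete; it is not the authors' optimized implementation, and the LiNGAM stage is omitted.

```python
import numpy as np

def fit_var_residuals(X, lags=1):
    """Least-squares VAR(lags) fit; returns coefficient matrices and residuals.

    X : (T, m) array of T time points and m variables.
    The residuals are what the LiNGAM stage of VarLiNGAM operates on.
    """
    T, m = X.shape
    # Lagged design matrix: rows t = lags..T-1, columns = stacked lagged values.
    Y = X[lags:]                                                     # (T - lags, m)
    Z = np.hstack([X[lags - k - 1:T - k - 1] for k in range(lags)])  # (T - lags, m*lags)
    # Solve Y ~ Z @ B by least squares (one regression per target variable).
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    residuals = Y - Z @ B
    coefs = [B[k * m:(k + 1) * m].T for k in range(lags)]  # one (m, m) matrix per lag
    return coefs, residuals

# Usage on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)).cumsum(axis=0) * 0.01 + rng.normal(size=(500, 5))
coefs, resid = fit_var_residuals(X, lags=2)
print(coefs[0].shape, resid.shape)
```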
Best Linear Unbiased Estimate from Privatized Histograms
arXiv - STAT - Computation Pub Date: 2024-09-06 DOI: arxiv-2409.04387
Jordan Awan, Adam Edwards, Paul Bartholomew, Andrew Sillers
{"title":"Best Linear Unbiased Estimate from Privatized Histograms","authors":"Jordan Awan, Adam Edwards, Paul Bartholomew, Andrew Sillers","doi":"arxiv-2409.04387","DOIUrl":"https://doi.org/arxiv-2409.04387","url":null,"abstract":"In differential privacy (DP) mechanisms, it can be beneficial to release\u0000\"redundant\" outputs, in the sense that a quantity can be estimated by combining\u0000different combinations of privatized values. Indeed, this structure is present\u0000in the DP 2020 Decennial Census products published by the U.S. Census Bureau.\u0000With this structure, the DP output can be improved by enforcing\u0000self-consistency (i.e., estimators obtained by combining different values\u0000result in the same estimate) and we show that the minimum variance processing\u0000is a linear projection. However, standard projection algorithms are too\u0000computationally expensive in terms of both memory and execution time for\u0000applications such as the Decennial Census. We propose the Scalable Efficient\u0000Algorithm for Best Linear Unbiased Estimate (SEA BLUE), based on a two step\u0000process of aggregation and differencing that 1) enforces self-consistency\u0000through a linear and unbiased procedure, 2) is computationally and memory\u0000efficient, 3) achieves the minimum variance solution under certain structural\u0000assumptions, and 4) is empirically shown to be robust to violations of these\u0000structural assumptions. We propose three methods of calculating confidence\u0000intervals from our estimates, under various assumptions. We apply SEA BLUE to\u0000two 2010 Census demonstration products, illustrating its scalability and\u0000validity.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142189456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
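A toy numerical illustration of the self-consistency idea described above: two privatized cell counts and a separately privatized total are redundant, and a least-squares (BLUE, for equal noise variances) projection onto the constraint "total = sum of cells" yields self-consistent, lower-variance estimates. This generic projection is shown only for intuition; SEA BLUE itself replaces the explicit projection with a scalable aggregation-and-differencing procedure that is not reproduced here, and the numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(7)
true_cells = np.array([120.0, 80.0])   # two histogram cells
sigma = 5.0                            # scale of the privacy noise (illustrative)

# Redundant privatized releases: each cell and the total, all noised independently.
y = np.array([true_cells[0], true_cells[1], true_cells.sum()]) \
    + sigma * rng.normal(size=3)

# Design matrix mapping the underlying cells to the released quantities.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# With equal noise variances the BLUE reduces to ordinary least squares; the
# resulting cell estimates are self-consistent (their sum IS the implied total)
# and have lower variance than using the noisy cell releases alone.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("cells:", beta_hat, "implied total:", beta_hat.sum())
```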
A Bayesian Optimization through Sequential Monte Carlo and Statistical Physics-Inspired Techniques
arXiv - STAT - Computation Pub Date: 2024-09-04 DOI: arxiv-2409.03094
Anton Lebedev, Thomas Warford, M. Emre Şahin
{"title":"A Bayesian Optimization through Sequential Monte Carlo and Statistical Physics-Inspired Techniques","authors":"Anton Lebedev, Thomas Warford, M. Emre Şahin","doi":"arxiv-2409.03094","DOIUrl":"https://doi.org/arxiv-2409.03094","url":null,"abstract":"In this paper, we propose an approach for an application of Bayesian\u0000optimization using Sequential Monte Carlo (SMC) and concepts from the\u0000statistical physics of classical systems. Our method leverages the power of\u0000modern machine learning libraries such as NumPyro and JAX, allowing us to\u0000perform Bayesian optimization on multiple platforms, including CPUs, GPUs,\u0000TPUs, and in parallel. Our approach enables a low entry level for exploration\u0000of the methods while maintaining high performance. We present a promising\u0000direction for developing more efficient and effective techniques for a wide\u0000range of optimization problems in diverse fields.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142189457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
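The entry above combines SMC with statistical-physics ideas on top of NumPyro/JAX tooling; the sketch below is a library-free, NumPy-only caricature of the underlying tempering idea. Particles targeting a Boltzmann-like distribution $\propto \exp(-\beta f(x))$ are reweighted as the inverse temperature $\beta$ increases, resampled, and jittered, so that the population concentrates near minima of $f$. It is not the authors' algorithm (in particular, the jitter stands in for a proper MCMC move step) and omits the probabilistic-programming layer entirely.

```python
import numpy as np

def f(x):
    """Objective to minimize (illustrative): a shifted quadratic with ripples."""
    return (x - 2.0) ** 2 + 0.3 * np.sin(5.0 * x)

rng = np.random.default_rng(3)
n_particles = 500
particles = rng.uniform(-10.0, 10.0, size=n_particles)
betas = np.linspace(0.1, 20.0, 60)      # increasing inverse temperatures
prev_beta = 0.0

for beta in betas:
    # Reweight by the change in the Boltzmann-like target exp(-beta * f(x)).
    log_w = -(beta - prev_beta) * f(particles)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Multinomial resampling, then a small Gaussian jitter as the move step
    # (a full SMC sampler would use an MCMC kernel here instead).
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx] + rng.normal(scale=0.1, size=n_particles)
    prev_beta = beta

best = particles[np.argmin(f(particles))]
print("approximate minimizer:", best)   # should land near x ~ 2
```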