Journal of Machine Learning Research: Latest Articles

Generalized Matrix Factorization: efficient algorithms for fitting generalized linear latent variable models to large data arrays.
IF 6.0 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-11-01
Łukasz Kidziński, Francis K C Hui, David I Warton, Trevor Hastie
Abstract: Unmeasured or latent variables are often the cause of correlations between multivariate measurements, which are studied in a variety of fields such as psychology, ecology, and medicine. For Gaussian measurements, there are classical tools such as factor analysis or principal component analysis with a well-established theory and fast algorithms. Generalized Linear Latent Variable Models (GLLVMs) generalize such factor models to non-Gaussian responses. However, current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets with thousands of observational units or responses. In this article, we propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood and then using a Newton method and Fisher scoring to learn the model parameters. Computationally, our method is noticeably faster and more stable, enabling GLLVM fits to much larger matrices than previously possible. We apply our method to a dataset of 48,000 observational units with over 2,000 observed species in each unit and find that most of the variability can be explained with a handful of factors. We publish an easy-to-use implementation of our proposed fitting algorithm.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10129058/pdf/nihms-1843577.pdf
Citations: 0
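The paper's actual fitting scheme (penalized quasi-likelihood with Newton/Fisher scoring) is in the authors' published implementation; the sketch below only illustrates the underlying idea of a latent-factor model for non-Gaussian data, fitting a Poisson factor model Y ~ Poisson(exp(UVᵀ)) by alternating ridge-penalized gradient steps. All sizes, step sizes, and penalty values are illustrative, and plain gradient steps stand in for the paper's Fisher scoring.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Poisson count matrix with a rank-2 latent structure.
n, m, d = 60, 30, 2
U_true = rng.normal(scale=0.5, size=(n, d))
V_true = rng.normal(scale=0.5, size=(m, d))
Y = rng.poisson(np.exp(U_true @ V_true.T))

def penalized_nll(U, V, lam):
    """Poisson negative log-likelihood (up to constants) plus ridge penalty."""
    eta = U @ V.T
    return (np.exp(eta) - Y * eta).sum() + 0.5 * lam * ((U**2).sum() + (V**2).sum())

lam, lr = 1.0, 1e-3
U = rng.normal(scale=0.1, size=(n, d))
V = rng.normal(scale=0.1, size=(m, d))

losses = [penalized_nll(U, V, lam)]
for _ in range(200):
    R = np.exp(U @ V.T) - Y          # residual on the natural-parameter scale
    U -= lr * (R @ V + lam * U)      # gradient step in U with V fixed
    R = np.exp(U @ V.T) - Y
    V -= lr * (R.T @ U + lam * V)    # gradient step in V with U fixed
    losses.append(penalized_nll(U, V, lam))

print(losses[0], losses[-1])
```

The alternating structure is what lets such methods scale: each half-update touches only one factor matrix, so the per-iteration cost is linear in the number of matrix entries.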
Tree-based Node Aggregation in Sparse Graphical Models.
IF 4.3 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-09-01
Ines Wilms, Jacob Bien
Abstract: High-dimensional graphical models are often estimated using regularization that is aimed at reducing the number of edges in a network. In this work, we show how even simpler networks can be produced by aggregating the nodes of the graphical model. We develop a new convex regularized method, called the tree-aggregated graphical lasso or tag-lasso, that estimates graphical models that are both edge-sparse and node-aggregated. The aggregation is performed in a data-driven fashion by leveraging side information in the form of a tree that encodes node similarity and facilitates the interpretation of the resulting aggregated nodes. We provide an efficient implementation of the tag-lasso using the locally adaptive alternating direction method of multipliers and illustrate our proposal's practical advantages in simulations and in applications in finance and biology.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10805464/pdf/
Citations: 0
Reinforcement Learning Algorithm for Mixed Mean Field Control Games
IF 6.0 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-05-04 · DOI: 10.4208/jml.220915
Andrea Angiuli, Nils Detering, J. Fouque, M. Laurière, Jimin Lin
Abstract: We present a new combined mean field control game (MFCG) problem, which can be interpreted as a competitive game between collaborating groups, with its solution a Nash equilibrium between groups. Players coordinate their strategies within each group. An example is a modification of the classical trader's problem: groups of traders maximize their wealth while facing costs for their transactions, for their own terminal positions, and for the average holding within their group, and the asset price is impacted by the trades of all agents. We propose a three-timescale reinforcement learning algorithm to approximate the solution of such MFCG problems, and we test the algorithm on benchmark linear-quadratic specifications for which we provide analytic solutions.
Citations: 6
Beyond the Quadratic Approximation: The Multiscale Structure of Neural Network Loss Landscapes
IF 6.0 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-04-24 · DOI: 10.4208/jml.220404
Chao Ma, D. Kunin, Lei Wu, Lexing Ying
Abstract: A quadratic approximation of neural network loss landscapes has been extensively used to study the optimization process of these networks. However, it usually holds only in a very small neighborhood of the minimum, and it cannot explain many phenomena observed during optimization. In this work, we study the structure of neural network loss functions and its implications for optimization in a region beyond the reach of a good quadratic approximation. Numerically, we observe that neural network loss functions possess a multiscale structure, manifested in two ways: (1) in a neighborhood of minima, the loss mixes a continuum of scales and grows subquadratically, and (2) in a larger region, the loss clearly shows several separate scales. Using the subquadratic growth, we are able to explain the Edge of Stability phenomenon [5] observed for the gradient descent (GD) method. Using the separate scales, we explain the working mechanism of learning rate decay through simple examples. Finally, we study the origin of the multiscale structure and propose that the non-convexity of the models and the non-uniformity of training data are among the causes. By constructing a two-layer neural network problem, we show that training data with different magnitudes give rise to different scales of the loss function, producing subquadratic growth and multiple separate scales.
Citations: 13
Tree-Values: Selective Inference for Regression Trees.
IF 4.3 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-01-01
Anna C Neufeld, Lucy L Gao, Daniela M Witten
Abstract: We consider conducting inference on the output of the Classification and Regression Tree (CART) (Breiman et al., 1984) algorithm. A naive approach to inference that does not account for the fact that the tree was estimated from the data will not achieve standard guarantees, such as Type 1 error rate control and nominal coverage. Thus, we propose a selective inference framework for conducting inference on a fitted CART tree; in a nutshell, we condition on the fact that the tree was estimated from the data. We propose a test for the difference in mean response between a pair of terminal nodes that controls the selective Type 1 error rate, and a confidence interval for the mean response within a single terminal node that attains the nominal selective coverage. Efficient algorithms for computing the necessary conditioning sets are provided. We apply these methods in simulation and to a dataset involving the association between portion control interventions and caloric intake.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10933572/pdf/
Citations: 0
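The tree-values method conditions on the fitted tree, which is beyond a short sketch. As a loose, hedged illustration of why a naive test fails, the simulation below reproduces the simpler version of the phenomenon: under a pure-noise null, choosing the split that maximizes the group difference and then running an ordinary two-sample z-test inflates the Type 1 error, while sample splitting (a cruder remedy than the paper's conditioning approach, shown only for contrast) stays near the nominal level. All sample sizes and cut grids are illustrative.

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(1)

def t_stat(a, b):
    # Welch-style statistic; roughly N(0, 1) under H0 at these sizes.
    return (a.mean() - b.mean()) / sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))

def reject(a, b):
    if len(a) < 2 or len(b) < 2:
        return False
    return abs(t_stat(a, b)) > 1.96

def one_rep(n=60):
    x = rng.uniform(size=n)
    y = rng.normal(size=n)          # pure noise: the null is true
    # Naive: pick the cut maximizing the mean difference, then test on the same data.
    cuts = np.quantile(x, np.linspace(0.2, 0.8, 13))
    best = max(cuts, key=lambda c: abs(y[x <= c].mean() - y[x > c].mean()))
    naive = reject(y[x <= best], y[x > best])
    # Sample splitting: select the cut on one half, test on the other half.
    half = n // 2
    xa, ya = x[:half], y[:half]
    cuts2 = np.quantile(xa, np.linspace(0.2, 0.8, 13))
    best2 = max(cuts2, key=lambda c: abs(ya[xa <= c].mean() - ya[xa > c].mean()))
    xt, yt = x[half:], y[half:]
    valid = reject(yt[xt <= best2], yt[xt > best2])
    return naive, valid

reps = [one_rep() for _ in range(400)]
naive_rate = np.mean([r[0] for r in reps])
valid_rate = np.mean([r[1] for r in reps])
print(naive_rate, valid_rate)
```

The naive rejection rate lands well above the nominal 5%, which is exactly the failure mode the abstract's selective framework is designed to repair without sacrificing half the data.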
Extensions to the Proximal Distance Method of Constrained Optimization.
IF 6.0 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-01-01
Alfonso Landeros, Oscar Hernan Madrid Padilla, Hua Zhou, Kenneth Lange
Abstract: This paper studies the problem of minimizing a loss f(x) subject to constraints of the form Dx ∈ S, where S is a closed set, convex or not, and D is a matrix that fuses parameters. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives f(x) + (ρ/2) dist(Dx, S)², involving large tuning constants ρ and the squared Euclidean distance of Dx from S. The next iterate x_{n+1} of the corresponding proximal distance algorithm is constructed from the current iterate x_n by minimizing the majorizing surrogate function f(x) + (ρ/2)‖Dx − P_S(Dx_n)‖². For fixed ρ, a subanalytic loss f(x), and a subanalytic constraint set S, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare their results to those delivered by the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest descent variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191389/pdf/
Citations: 0
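The full algorithms live in the Julia repository linked above; the sketch below shows only the skeleton of the proximal distance iteration in the simplest possible setting, assuming D = I, f(x) = ½‖x − b‖², and S the nonnegative orthant. There the surrogate f(x) + (ρ/2)‖x − P_S(x_n)‖² is quadratic with the closed-form minimizer x_{n+1} = (b + ρ·P_S(x_n)) / (1 + ρ), and annealing ρ upward drives the iterates to the constrained solution, which for this toy problem is just the projection of b onto S.

```python
import numpy as np

def prox_distance_nonneg(b, rho0=1.0, growth=1.5, iters=60):
    """Proximal distance iteration for min ½‖x − b‖² s.t. x ≥ 0 (with D = I).

    Each step exactly minimizes the majorizing surrogate
    f(x) + ρ/2 ‖x − P_S(x_n)‖², then ρ is grown so the distance
    penalty gradually hardens into the constraint.
    """
    x, rho = b.copy(), rho0
    for _ in range(iters):
        proj = np.maximum(x, 0.0)            # P_S(x_n): projection onto S
        x = (b + rho * proj) / (1.0 + rho)   # closed-form surrogate minimizer
        rho *= growth                        # annealing schedule for ρ
    return x

b = np.array([1.0, -2.0, 0.5, -0.1])
x = prox_distance_nonneg(b)
print(x)
```

For this problem the iterates converge to the projection np.maximum(b, 0): positive coordinates are fixed points, and negative coordinates are shrunk toward zero by a factor 1/(1+ρ) per step, which is the mechanism the paper exploits for far less trivial constraint sets.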
The Importance of Being Correlated: Implications of Dependence in Joint Spectral Inference across Multiple Networks.
IF 6.0 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-01-01
Konstantinos Pantazis, Avanti Athreya, Jesús Arroyo, William N Frost, Evan S Hill, Vince Lyzinski
Abstract: Spectral inference on multiple networks is a rapidly developing subfield of graph statistics. Recent work has demonstrated that joint, or simultaneous, spectral embedding of multiple independent networks can deliver more accurate estimation than individual spectral decompositions of those same networks. Such inference procedures typically rely heavily on independence assumptions across the multiple network realizations, and even in this case, little attention has been paid to the induced network correlation that can be a consequence of such joint embeddings. In this paper, we present a generalized omnibus embedding methodology and provide a detailed analysis of this embedding across both independent and correlated networks, the latter of which significantly extends the reach of such procedures, and we describe how this omnibus embedding can itself induce correlation. This leads us to distinguish between inherent correlation, the correlation that arises naturally in multisample network data, and induced correlation, which is an artifice of the joint embedding methodology. We show that the generalized omnibus embedding procedure is flexible and robust, and we prove both consistency and a central limit theorem for the embedded points. We examine how induced and inherent correlation can impact inference for network time series data, and we provide network analogues of classical questions such as the effective sample size for more generally correlated data. Further, we show how an appropriately calibrated generalized omnibus embedding can detect changes in real biological networks that previous embedding procedures could not discern, confirming that the effect of inherent and induced correlation can be subtle and transformative. By allowing for and deconstructing both forms of correlation, our methodology widens the scope of spectral techniques for network inference, with import in theory and practice.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10465120/pdf/
Citations: 0
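The generalized omnibus methodology is the paper's contribution; the sketch below implements only the classical two-graph omnibus embedding it builds on, where the adjacency matrices are stacked into a block matrix with averaged off-diagonal blocks and embedded by a scaled spectral decomposition. The random-dot-product-graph test data and dimension d = 1 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def omnibus_embed(A1, A2, d):
    """Classical two-graph omnibus embedding:
    M = [[A1, (A1+A2)/2], [(A1+A2)/2, A2]], then top-d scaled eigenvectors."""
    n = A1.shape[0]
    avg = (A1 + A2) / 2.0
    M = np.block([[A1, avg], [avg, A2]])
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(vals)[::-1][:d]                  # top-d eigenvalues
    Z = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
    return Z[:n], Z[n:]                               # per-graph latent positions

def sample(P):
    """One symmetric, hollow adjacency matrix with edge probabilities P."""
    U = rng.uniform(size=P.shape)
    A = np.triu((U < P).astype(float), 1)
    return A + A.T

# Two independent samples from the same rank-1 random dot product graph.
n = 80
p = rng.uniform(0.3, 0.7, size=n)
P = np.clip(np.outer(p, p), 0, 1)
X1, X2 = omnibus_embed(sample(P), sample(P), d=1)

# The joint embedding places both graphs' estimates of each node close together.
print(np.corrcoef(X1.ravel(), X2.ravel())[0, 1])
```

Because both graphs share the eigenvectors of the single omnibus matrix, the two sets of latent positions come out strongly aligned, which is precisely the induced correlation the abstract warns must be accounted for in downstream inference.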
Generalized Sparse Additive Models.
IF 6.0 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-01-01
Asad Haris, Noah Simon, Ali Shojaie
Abstract: We present a unified framework for estimation and analysis of generalized additive models in high dimensions. The framework defines a large class of penalized regression estimators, encompassing many existing methods. An efficient computational algorithm for this class is presented that easily scales to thousands of observations and features. We prove minimax optimal convergence bounds for this class under a weak compatibility condition. In addition, we characterize the rate of convergence when this compatibility condition is not met. Finally, we also show that the optimal penalty parameters for structure and sparsity penalties in our framework are linked, allowing cross-validation to be conducted over only a single tuning parameter. We complement our theoretical results with empirical studies comparing some existing methods within this framework.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10593424/pdf/
Citations: 0
Prior Adaptive Semi-supervised Learning with Application to EHR Phenotyping.
IF 6.0 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-01-01
Yichi Zhang, Molei Liu, Matey Neykov, Tianxi Cai
Abstract: Electronic Health Record (EHR) data, a rich source for biomedical research, have been successfully used to gain novel insight into a wide range of diseases. Despite this potential, EHR data are currently underutilized for discovery research due to a major limitation: the lack of precise phenotype information. To overcome this difficulty, recent efforts have been devoted to developing supervised algorithms that accurately predict phenotypes based on relatively small training datasets with gold-standard labels extracted via chart review. However, supervised methods typically require a sizable training set to yield generalizable algorithms, especially when the number of candidate features, p, is large. In this paper, we propose a semi-supervised (SS) EHR phenotyping method that borrows information from both a small labeled dataset (where both the label Y and the feature set X are observed) and a much larger, weakly labeled dataset in which the feature set X is accompanied only by a surrogate label S that is available for all patients. Under a working prior assumption that S is related to X only through Y, and allowing this assumption to hold only approximately, we propose a prior adaptive semi-supervised (PASS) estimator that incorporates the prior knowledge by shrinking the estimator towards a direction derived under the prior. We derive asymptotic theory for the proposed estimator and justify its efficiency and robustness to prior information of poor quality. We also demonstrate its superiority over existing estimators under various scenarios via simulation studies and on three real-world EHR phenotyping studies at a large tertiary hospital.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10653017/pdf/
Citations: 0
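The PASS estimator itself involves surrogate labels and asymptotic calibration; the sketch below shows only the generic ingredient of "shrinking toward a prior-derived direction" in its simplest form, the closed-form solution of argmin ‖y − Xβ‖² + λ‖β − β_prior‖², namely β̂ = (XᵀX + λI)⁻¹(Xᵀy + λβ_prior). The data, the informative prior, and the penalty value are all illustrative; λ = 0 recovers ordinary least squares, which is the sense in which tuning λ hedges against a poor-quality prior.

```python
import numpy as np

rng = np.random.default_rng(3)

def ridge_toward_prior(X, y, beta_prior, lam):
    """Closed form of argmin ‖y − Xβ‖² + λ‖β − β_prior‖²."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p),
                           X.T @ y + lam * beta_prior)

# Noisy low-sample regression where an informative prior should help.
p, n = 10, 30
beta_true = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=2.0, size=n)

beta_prior = beta_true + rng.normal(scale=0.1, size=p)   # nearly correct prior
ols = ridge_toward_prior(X, y, beta_prior, lam=0.0)      # λ = 0: plain OLS
shrunk = ridge_toward_prior(X, y, beta_prior, lam=50.0)  # shrink toward prior

err_ols = np.linalg.norm(ols - beta_true)
err_shrunk = np.linalg.norm(shrunk - beta_true)
print(err_ols, err_shrunk)
```

With a good prior and few labeled observations, the shrunken estimate beats OLS; the PASS machinery makes that trade-off adaptive rather than fixed by a hand-picked λ.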
Estimation and inference on high-dimensional individualized treatment rule in observational data using split-and-pooled de-correlated score.
IF 6.0 · CAS Tier 3 · Computer Science
Journal of Machine Learning Research · Pub Date: 2022-01-01
Muxuan Liang, Young-Geun Choi, Yang Ning, Maureen A Smith, Ying-Qi Zhao
Abstract: With the increasing adoption of electronic health records, there is growing interest in developing individualized treatment rules, which recommend treatments according to patients' characteristics, from large observational data. However, there is a lack of valid inference procedures for such rules developed from this type of data in the presence of high-dimensional covariates. In this work, we develop a penalized doubly robust method to estimate the optimal individualized treatment rule from high-dimensional data. We propose a split-and-pooled de-correlated score to construct hypothesis tests and confidence intervals. Our proposal adopts data splitting to conquer the slow convergence rate of nuisance parameter estimations, such as non-parametric methods for outcome regression or propensity models. We establish the limiting distributions of the split-and-pooled de-correlated score test and the corresponding one-step estimator in the high-dimensional setting. Simulation and real data analyses are conducted to demonstrate the superiority of the proposed method.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10720606/pdf/
Citations: 0
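The de-correlated score construction is more involved than a listing-page sketch allows; the hedged example below shows only its split-and-pool backbone: cross-fitted doubly robust (AIPW) estimation of an average treatment effect, where nuisance models (here simple linear outcome regressions and a Newton-fitted logistic propensity, standing in for the paper's penalized high-dimensional nuisances) are fit on one fold and evaluated on the other, then pooled. The data-generating process and the true effect of 2.0 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_logistic(X, t, steps=25):
    """Logistic regression by Newton's method (intercept included)."""
    Z = np.c_[np.ones(len(X)), X]
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Z @ w))
        H = Z.T @ (Z * (p * (1 - p))[:, None]) + 1e-6 * np.eye(Z.shape[1])
        w += np.linalg.solve(H, Z.T @ (t - p))
    return lambda Xn: 1 / (1 + np.exp(-np.c_[np.ones(len(Xn)), Xn] @ w))

def fit_linear(X, y):
    Z = np.c_[np.ones(len(X)), X]
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return lambda Xn: np.c_[np.ones(len(Xn)), Xn] @ w

def crossfit_aipw(X, t, y):
    """Split: fit nuisances on one fold, score the other; then pool."""
    n = len(y)
    first = np.arange(n) < n // 2
    psi = np.empty(n)
    for tr, te in [(first, ~first), (~first, first)]:
        e = fit_logistic(X[tr], t[tr])                    # propensity model
        m1 = fit_linear(X[tr][t[tr] == 1], y[tr][t[tr] == 1])
        m0 = fit_linear(X[tr][t[tr] == 0], y[tr][t[tr] == 0])
        eh = np.clip(e(X[te]), 0.05, 0.95)
        psi[te] = (m1(X[te]) - m0(X[te])
                   + t[te] * (y[te] - m1(X[te])) / eh
                   - (1 - t[te]) * (y[te] - m0(X[te])) / (1 - eh))
    return psi.mean()                                     # pooled estimate

# Synthetic observational data with confounding and a true effect of 2.0.
n, p = 2000, 3
X = rng.normal(size=(n, p))
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)
y = 2.0 * t + X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

est = crossfit_aipw(X, t, y)
print(est)
```

Because no observation is scored by a nuisance model trained on itself, slow nuisance convergence does not bias the pooled score, which is the property the paper's inference procedure is built on.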