Latest publications in Information and Inference: A Journal of the IMA

Robust and resource efficient identification of shallow neural networks by fewest samples
IF 1.6 · Mathematics (CAS Q4)
Information and Inference: A Journal of the IMA · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa036
Massimo Fornasier, Jan Vybíral, Ingrid Daubechies
Abstract: We address the structure identification and the uniform approximation of sums of ridge functions $f(x)=\sum_{i=1}^m g_i(\langle a_i,x\rangle)$ on $\mathbb{R}^d$, representing a general form of a shallow feed-forward neural network, from a small number of query samples. Higher-order differentiation of sums of ridge functions, or of their compositions as in deeper neural networks, as used in our constructive approximations, yields a natural connection between neural network weight identification and tensor product decomposition identification. In the case of the shallowest feed-forward neural network, second-order differentiation and tensors of order two (i.e., matrices) suffice, as we prove in this paper. We use two sampling schemes to perform approximate differentiation: active sampling, where the sampling points are universal, actively and randomly designed, and passive sampling, where the sampling points are preselected at random from a distribution with known density. Based on multiple gathered approximate first- and second-order differentials, our general approximation strategy is developed as a sequence of algorithms performing individual sub-tasks. We first perform an active subspace search by approximating the span of the weight vectors $a_1,\dots,a_m$. Then we use a straightforward substitution, which reduces the dimensionality of the problem from $d$ to $m$. The core of the construction is the stable and efficient approximation of weights expressed in terms of the rank-$1$ matrices $a_i \otimes a_i$, realized by formulating their individual identification as a suitable nonlinear program. We prove that this program successfully identifies weight vectors that are close to orthonormal, and we also show how to constructively reduce to this case by a whitening procedure, without loss of generality. We finally discuss the implementation and the performance of the proposed algorithmic pipeline with extensive numerical experiments, which illustrate and confirm the theoretical results.
Citations: 12

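The link between second-order differentiation and the rank-$1$ matrices $a_i \otimes a_i$ can be seen numerically: every Hessian of a sum of ridge functions is a combination of the $a_i \otimes a_i$, so Hessians estimated at a few query points span an $m$-dimensional matrix subspace. A minimal sketch, not the paper's algorithm; the tanh profiles, finite-difference step and rank tolerance are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 3
A = np.linalg.qr(rng.standard_normal((d, m)))[0].T    # m orthonormal weights a_i

def f(x):
    # ridge profiles g_i(t) = tanh(t), chosen purely for illustration
    return np.sum(np.tanh(A @ x))

def hessian_fd(x, h=1e-4):
    """Central finite-difference Hessian of f at x."""
    H, I = np.zeros((d, d)), np.eye(d)
    for i in range(d):
        for j in range(d):
            H[i, j] = (f(x + h*I[i] + h*I[j]) - f(x + h*I[i] - h*I[j])
                       - f(x - h*I[i] + h*I[j]) + f(x - h*I[i] - h*I[j])) / (4*h*h)
    return H

# Hessians at a few query points all lie in span{a_i ⊗ a_i}, so the numerical
# rank of their stack reveals m ("active sampling" at random points)
X = rng.standard_normal((10, d))
stack = np.array([hessian_fd(x).ravel() for x in X])
sv = np.linalg.svd(stack, compute_uv=False)
rank = int(np.sum(sv > 1e-5 * sv[0]))
print(rank)   # equals m = 3
```

The singular-vector side of the same decomposition would give the subspace used in the paper's active subspace search.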
Sensitivity of ℓ1 minimization to parameter choice
Information and Inference: A Journal of the IMA · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa014
Aaron Berk, Yaniv Plan, Özgür Yilmaz
Abstract: The use of the generalized Lasso is a common technique for recovery of structured high-dimensional signals. There are three common formulations of the generalized Lasso; each program has a governing parameter whose optimal value depends on properties of the data. At this optimal value, compressed sensing theory explains why Lasso programs recover structured high-dimensional signals with minimax order-optimal error. Unfortunately, in practice the optimal choice is generally unknown and must be estimated. Thus, we investigate the stability of each of the three Lasso programs with respect to its governing parameter. Our goal is to aid the practitioner in answering the following question: given real data, which Lasso program should be used? We take a step towards answering this by analysing the case where the measurement matrix is the identity (the so-called proximal denoising setup) and we use $\ell_{1}$ regularization. For each Lasso program, we specify settings in which that program is provably unstable with respect to its governing parameter. We support our analysis with detailed numerical simulations. For example, there are settings where a 0.1% underestimate of a Lasso parameter can increase the error significantly, and a 50% underestimate can cause the error to increase by a factor of $10^{9}$.
Citations: 19

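The flavour of this instability can be reproduced with the constrained program in the proximal-denoising setup: projecting $y = x_0 + \text{noise}$ onto the $\ell_1$-ball of radius $\tau$ is near-exact when $\tau = \|x_0\|_1$, but even a 1% underestimate of $\tau$ forces a deterministic shrinkage whose error dwarfs the noise level once the noise is small. A sketch under our own signal and noise scales (not the paper's experiments), using the standard sort-based $\ell_1$-ball projection:

```python
import numpy as np

rng = np.random.default_rng(1)

def project_l1(v, tau):
    """Euclidean projection of v onto the l1-ball of radius tau (sort-based)."""
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = max((css[rho] - tau) / (rho + 1.0), 0.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

d, s, sigma = 1000, 10, 1e-6
x0 = np.zeros(d)
x0[:s] = 1.0                       # sparse signal, well above the noise level
y = x0 + sigma * rng.standard_normal(d)

tau_star = np.linalg.norm(x0, 1)   # oracle parameter choice
err_star = np.linalg.norm(project_l1(y, tau_star) - x0)
err_under = np.linalg.norm(project_l1(y, 0.99 * tau_star) - x0)  # 1% underestimate
print(err_star, err_under)   # the underestimate inflates the error by orders of magnitude
```

As $\sigma \to 0$ the ratio between the two errors grows without bound, which is the shape of the instability the abstract describes.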
Super-resolution of near-colliding point sources
Information and Inference: A Journal of the IMA · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa005
Dmitry Batenkov, Gil Goldman, Yosef Yomdin
Abstract: We consider the problem of stable recovery of sparse signals of the form
$$F(x)=\sum_{j=1}^d a_j\delta(x-x_j),\quad x_j\in\mathbb{R},\; a_j\in\mathbb{C},$$
from their spectral measurements, known in a bandwidth $\varOmega$ with absolute error not exceeding $\epsilon>0$. We consider the case when at most $p\leqslant d$ nodes $\{x_j\}$ of $F$ form a cluster whose extent is smaller than the Rayleigh limit $\frac{1}{\varOmega}$, while the rest of the nodes are well separated. Provided that $\epsilon \lessapprox \operatorname{SRF}^{-2p+1}$, where $\operatorname{SRF}=(\varOmega\varDelta)^{-1}$ and $\varDelta$ is the minimal separation between the nodes, we show that the minimax error rate for reconstruction of the cluster nodes is of order $\frac{1}{\varOmega}\operatorname{SRF}^{2p-1}\epsilon$, while for recovering the corresponding amplitudes $\{a_j\}$ the rate is of order $\operatorname{SRF}^{2p-1}\epsilon$. Moreover, the corresponding minimax rates for the recovery of the non-clustered nodes and amplitudes are $\frac{\epsilon}{\varOmega}$ and $\epsilon$, respectively. These results suggest that stable super-resolution is possible in much more general situations than previously thought. Our numerical experiments show that the well-known matrix pencil method achieves the above accuracy bounds.
Citations: 48

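The matrix pencil method used in the experiments can be sketched in a few lines: from spectral samples $f(k)=\sum_j a_j e^{\mathrm{i}kx_j}$ one builds two shifted Hankel matrices whose pencil has eigenvalues $e^{\mathrm{i}x_j}$. The nodes, amplitudes and pencil size below are our own illustrative choices, and the samples are noiseless:

```python
import numpy as np

x = np.array([0.5, 0.52, 2.0])        # two near-colliding nodes plus a separated one
a = np.array([1.0, -1.0, 0.7])
N, L = 40, 20                          # bandwidth (samples f(0..N)) / pencil width
k = np.arange(N + 1)
f = np.exp(1j * np.outer(k, x)) @ a   # f(k) = sum_j a_j e^{i k x_j}

# shifted Hankel matrices; eigenvalues of pinv(H0) @ H1 recover e^{i x_j}
M = N - L
H0 = np.array([[f[i + j] for j in range(L)] for i in range(M)])
H1 = np.array([[f[i + j + 1] for j in range(L)] for i in range(M)])
vals = np.linalg.eigvals(np.linalg.pinv(H0, rcond=1e-10) @ H1)
top = vals[np.argsort(-np.abs(vals))[:3]]       # unit-modulus eigenvalues
nodes = np.sort(np.mod(np.angle(top), 2 * np.pi))
print(nodes)   # ≈ [0.5, 0.52, 2.0]
```

With noise of size $\epsilon$ added to `f`, the recovered cluster nodes degrade at the $\operatorname{SRF}^{2p-1}\epsilon$ rate quantified in the abstract.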
Low-rank matrix completion and denoising under Poisson noise
Information and Inference: A Journal of the IMA · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa020
Andrew D McRae, Mark A Davenport
Abstract: This paper considers the problem of estimating a low-rank matrix from the observation of all or a subset of its entries in the presence of Poisson noise. When we observe all entries, this is a problem of matrix denoising; when we observe only a subset of the entries, this is a problem of matrix completion. In both cases, we exploit an assumption that the underlying matrix is low-rank. Specifically, we analyse several estimators, including a constrained nuclear-norm minimization program, nuclear-norm regularized least squares and a non-convex constrained low-rank optimization problem. We show that for all three estimators, with high probability, we have an upper error bound (in the Frobenius norm error metric) that depends on the matrix rank, the fraction of the elements observed and the maximal row and column sums of the true matrix. We furthermore show that the above results are minimax optimal (within a universal constant) in classes of matrices with low rank and bounded row and column sums. We also extend these results to handle the case of matrix multinomial denoising and completion.
Citations: 12

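As a toy illustration of the fully observed (denoising) setting, a rank-$r$ truncated SVD, a simple spectral stand-in for the nuclear-norm estimators analysed in the paper, already removes most of the Poisson noise; the matrix sizes and entry scales below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 60, 2
U = rng.uniform(1, 3, (n, r))
V = rng.uniform(1, 3, (n, r))
M = U @ V.T                              # low-rank non-negative mean matrix
Y = rng.poisson(M).astype(float)         # one Poisson draw per entry

# rank-r truncated SVD of Y: a simple spectral stand-in for the
# nuclear-norm estimators analysed in the paper
u, s, vh = np.linalg.svd(Y, full_matrices=False)
M_hat = (u[:, :r] * s[:r]) @ vh[:r]

err_hat = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
err_raw = np.linalg.norm(Y - M) / np.linalg.norm(M)
print(err_hat, err_raw)   # the low-rank projection removes most of the noise
```

The completion case would additionally mask a random subset of entries before estimating, which is where the observed-fraction term in the paper's bounds enters.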
Mutual information for low-rank even-order symmetric tensor estimation
Information and Inference: A Journal of the IMA · Pub Date: 2020-09-24 · DOI: 10.1093/imaiai/iaaa022
Clément Luneau, Jean Barbier, N. Macris
Abstract: We consider a statistical model for finite-rank symmetric tensor factorization and prove a single-letter variational expression for its asymptotic mutual information when the tensor is of even order. The proof applies the adaptive interpolation method originally invented for rank-one factorization. Here we show how to extend the adaptive interpolation to finite-rank and even-order tensors. This requires new non-trivial ideas beyond the current analyses in the literature. We also underline where the proof falls short when dealing with odd-order tensors.
Citations: 14

Two-sample statistics based on anisotropic kernels
Information and Inference: A Journal of the IMA · Pub Date: 2020-09-01 · Epub Date: 2019-12-10 · DOI: 10.1093/imaiai/iaz018
Xiuyuan Cheng, Alexander Cloninger, Ronald R Coifman
Abstract: The paper introduces a new kernel-based maximum mean discrepancy (MMD) statistic for measuring the distance between two distributions given finitely many multivariate samples. When the distributions are locally low-dimensional, the proposed test can be made more powerful to distinguish certain alternatives by incorporating local covariance matrices and constructing an anisotropic kernel. The kernel matrix is asymmetric; it computes the affinity between [Formula: see text] data points and a set of [Formula: see text] reference points, where [Formula: see text] can be drastically smaller than [Formula: see text]. While the proposed statistic can be viewed as a special class of reproducing kernel Hilbert space MMD, the consistency of the test is proved, under mild assumptions on the kernel, as long as [Formula: see text], and a finite-sample lower bound on the testing power is obtained. Applications to flow cytometry and diffusion MRI datasets are demonstrated, which motivate the proposed approach to compare distributions.
Citations: 16

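The reference-set construction behind the asymmetric kernel matrix can be sketched as follows; the isotropic Gaussian affinity, bandwidth and random reference points here are simplifying placeholders for the paper's anisotropic, covariance-adapted kernel:

```python
import numpy as np

rng = np.random.default_rng(4)

def ref_mmd(X, Y, R, eps=0.5):
    """Two-sample statistic comparing kernel means of X and Y against a small
    reference set R (asymmetric kernel matrix: samples x reference points)."""
    def affinity(Z):
        d2 = ((Z[:, None, :] - R[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * eps))
    return np.linalg.norm(affinity(X).mean(axis=0) - affinity(Y).mean(axis=0))

n, n_ref = 2000, 20
R = rng.standard_normal((n_ref, 2))                       # n_ref << n references
X = rng.standard_normal((n, 2))
Y0 = rng.standard_normal((n, 2))                          # null: same law as X
Y1 = rng.standard_normal((n, 2)) + np.array([1.0, 0.0])   # alternative: mean shift
s_null = ref_mmd(X, Y0, R)
s_alt = ref_mmd(X, Y1, R)
print(s_null, s_alt)   # the statistic separates null from alternative
```

Because each affinity matrix is only $n \times n_{\text{ref}}$, the statistic stays cheap even when the sample size is large, which is the computational point of the reference-set design.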
Sparse confidence sets for normal mean models
Information and Inference: A Journal of the IMA · Pub Date: 2020-08-17 · DOI: 10.1093/imaiai/iaad003
Y. Ning, Guang Cheng
Abstract: In this paper, we propose a new framework to construct confidence sets for a $d$-dimensional unknown sparse parameter $\boldsymbol{\theta}$ under the normal mean model $\boldsymbol{X}\sim N(\boldsymbol{\theta},\sigma^{2}\mathbf{I})$. A key feature of the proposed confidence set is its capability to account for the sparsity of $\boldsymbol{\theta}$; it is therefore named a sparse confidence set. This is in sharp contrast with classical methods, such as Bonferroni confidence intervals and other resampling-based procedures, where the sparsity of $\boldsymbol{\theta}$ is often ignored. Specifically, we require the desired sparse confidence set to satisfy the following two conditions: (i) uniformly over the parameter space, the coverage probability for $\boldsymbol{\theta}$ is above a pre-specified level; (ii) there exists a random subset $S$ of $\{1,\dots,d\}$ such that $S$ guarantees the pre-specified true negative rate for detecting non-zero $\theta_{j}$'s. To exploit the sparsity of $\boldsymbol{\theta}$, we allow the confidence interval for $\theta_{j}$ to degenerate to the single point 0 for any $j\notin S$. Under this new framework, we first consider whether there exist sparse confidence sets satisfying the above two conditions. To address this question, we establish a non-asymptotic minimax lower bound for the non-coverage probability over a suitable class of sparse confidence sets. The lower bound deciphers the role of sparsity and the minimum signal-to-noise ratio (SNR) in the construction of sparse confidence sets. Furthermore, under suitable conditions on the SNR, a two-stage procedure is proposed to construct a sparse confidence set. To evaluate its optimality, the proposed sparse confidence set is shown to attain a minimax lower bound of a properly defined risk function up to a constant factor. Finally, we develop a procedure adaptive to the unknown sparsity. Numerical studies are conducted to verify the theoretical results.
Citations: 2

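The degenerate-interval idea can be illustrated with a deliberately naive two-stage construction (select by thresholding, then conservative Bonferroni intervals on the selected set and the point set $\{0\}$ elsewhere); this is our own simplification, not the paper's optimal procedure, and it needs a strong SNR to work:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
d, sigma, alpha = 1000, 1.0, 0.05
theta = np.zeros(d)
theta[:5] = 10.0                               # sparse mean with strong SNR
t = sigma * np.sqrt(2 * np.log(d))             # stage 1: selection threshold
z = NormalDist().inv_cdf(1 - alpha / (2 * d))  # stage 2: conservative CI radius

def covers(X):
    S = np.abs(X) > t
    # on S: X_j ± z*sigma must cover theta_j; off S: the point {0} must be theta_j
    ok_S = np.all(np.abs(X[S] - theta[S]) <= z * sigma)
    ok_Sc = np.all(theta[~S] == 0)
    return ok_S and ok_Sc

trials = 500
cov = np.mean([covers(theta + sigma * rng.standard_normal(d))
               for _ in range(trials)])
print(cov)   # empirical coverage near 1 - alpha; most intervals are the point 0
```

The interesting regime in the paper is precisely when the SNR is too weak for such naive selection, which is what its lower bound and two-stage procedure address.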
Strong uniform consistency with rates for kernel density estimators with general kernels on manifolds
Information and Inference: A Journal of the IMA · Pub Date: 2020-07-13 · DOI: 10.1093/imaiai/iaab014
Hau‐Tieng Wu, Nan Wu
Abstract: When analyzing modern machine learning algorithms, we may need to handle kernel density estimation (KDE) with intricate kernels that are not designed by the user and might even be irregular and asymmetric. To handle this emerging challenge, we provide a strong uniform consistency result with the $L^{\infty}$ convergence rate for KDE on Riemannian manifolds with Riemann integrable kernels (in the ambient Euclidean space). We also provide an $L^{1}$ consistency result for KDE on Riemannian manifolds with Lebesgue integrable kernels. The isotropic kernels considered in this paper are different from the kernels in the Vapnik–Chervonenkis class frequently considered in the statistics community. We illustrate the difference when we apply them to estimate the probability density function. Moreover, we elaborate on the delicate difference between a kernel designed on the intrinsic manifold and one designed in the ambient Euclidean space, both of which might be encountered in practice. Finally, we prove the necessary and sufficient condition for an isotropic kernel to be Riemann integrable on a submanifold of Euclidean space.
Citations: 6

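The ambient-versus-intrinsic distinction is easy to see on the simplest manifold, the unit circle in $\mathbb{R}^2$: a kernel defined through the Euclidean (chordal) distance is Riemann integrable but not smooth, yet with the right normalization it still recovers the density. A sketch with our own bandwidth and sample size:

```python
import numpy as np

rng = np.random.default_rng(5)
n, h = 20_000, 0.1
theta = rng.uniform(0, 2 * np.pi, n)                   # uniform law on the circle
pts = np.column_stack([np.cos(theta), np.sin(theta)])  # samples on the manifold S^1

def kde_ambient(x, pts, h):
    """KDE at x using an ambient-space indicator kernel (Riemann integrable but
    not smooth). For a 1-d manifold the kernel mass along the manifold is about
    2h for small h, which gives the normalization below."""
    dist = np.linalg.norm(pts - x, axis=1)             # Euclidean, not geodesic
    return np.mean(dist < h) / (2 * h)

est = kde_ambient(np.array([1.0, 0.0]), pts, h)
print(est)   # close to the true density 1 / (2*pi) ≈ 0.159
```

For small $h$ the chordal and geodesic distances agree to second order, which is why the ambient kernel's normalization only needs the manifold dimension, not its embedding.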
A dimensionality reduction technique for unconstrained global optimization of functions with low effective dimensionality
Information and Inference: A Journal of the IMA · Pub Date: 2020-03-21 · DOI: 10.1093/imaiai/iaab011
C. Cartis, Adilet Otemissov
Abstract: We investigate the unconstrained global optimization of functions with low effective dimensionality, which are constant along certain (unknown) linear subspaces. Extending the technique of random subspace embeddings in Wang et al. (2016, J. Artificial Intelligence Res., 55, 361–387), we study a generic Random Embeddings for Global Optimization (REGO) framework that is compatible with any global minimization algorithm. Instead of the original, potentially large-scale optimization problem, within REGO, a Gaussian random, low-dimensional problem with bound constraints is formulated and solved in a reduced space. We provide novel probabilistic bounds for the success of REGO in solving the original, low effective-dimensionality problem, which show its independence of the (potentially large) ambient dimension and its precise dependence on the dimensions of the effective and embedding subspaces. These results significantly improve existing theoretical analyses by providing the exact distribution of a reduced minimizer and its Euclidean norm and by the generality of the assumptions required on the problem. We validate our theoretical findings by extensive numerical testing of REGO with three types of global optimization solvers, illustrating the improved scalability of REGO compared with the full-dimensional application of the respective solvers.
Citations: 12

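The reduction itself fits in a few lines: replace $f$ on $\mathbb{R}^D$ by $y \mapsto f(Qy)$ for a Gaussian random matrix $Q$, and solve the low-dimensional bound-constrained problem with any global solver. In this sketch the objective, embedding dimension and box are our own choices, and plain random search stands in for the solver:

```python
import numpy as np

rng = np.random.default_rng(6)
D, d_eff, d_low = 100, 2, 4
B = np.linalg.qr(rng.standard_normal((D, d_eff)))[0]   # hidden active subspace

def f(x):
    """Objective on R^100 with effective dimensionality 2: it depends on x
    only through B^T x; the global minimum value is 0."""
    z = B.T @ x
    return (z[0] - 1.0) ** 2 + (z[1] + 0.5) ** 2

# REGO-style reduction: minimize y -> f(Q y) over a low-dimensional box,
# with Q a Gaussian random embedding; random search stands in for the solver
Q = rng.standard_normal((D, d_low))
best = min(f(Q @ y) for y in rng.uniform(-10, 10, (50_000, d_low)))
print(best)   # near the true minimum 0 despite the ambient dimension 100
```

The point of the framework is that the reduced problem's difficulty depends on $d_{\text{eff}}$ and $d_{\text{low}}$, not on $D$, matching the ambient-dimension independence claimed in the abstract.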
The information complexity of learning tasks, their structure and their distance
Information and Inference: A Journal of the IMA · Pub Date: 2020-03-01 · DOI: 10.1093/imaiai/iaaa033
Alessandro Achille, Giovanni Paolini, Glen Mbeng, Stefano Soatto
Abstract: We introduce an asymmetric distance in the space of learning tasks and a framework to compute their complexity. These concepts are foundational for the practice of transfer learning, whereby a parametric model is pre-trained for a task and then fine-tuned for another. The framework we develop is non-asymptotic, captures the finite nature of the training dataset and allows distinguishing learning from memorization. It encompasses, as special cases, classical notions from Kolmogorov complexity and Shannon and Fisher information. However, unlike some of those frameworks, it can be applied to large-scale models and real-world datasets. Our framework is the first to measure complexity in a way that accounts for the effect of the optimization scheme, which is critical in deep learning.
Citations: 40