SIAM Journal on Optimization: Latest Articles

Reducing Nonnegativity over General Semialgebraic Sets to Nonnegativity over Simple Sets
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-06-06 DOI: 10.1137/22m1501027
Olga Kuryatnikova, Juan C. Vera, Luis F. Zuluaga
{"title":"Reducing Nonnegativity over General Semialgebraic Sets to Nonnegativity over Simple Sets","authors":"Olga Kuryatnikova, Juan C. Vera, Luis F. Zuluaga","doi":"10.1137/22m1501027","DOIUrl":"https://doi.org/10.1137/22m1501027","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1970-2006, June 2024. <br/> Abstract. A nonnegativity certificate (NNC) is a way to write a polynomial so that its nonnegativity on a semialgebraic set becomes evident. Positivstellensätze (Psätze) guarantee the existence of NNCs. Both NNCs and Psätze underlie powerful algorithmic techniques for optimization. This paper proposes a universal approach to derive new Psätze for general semialgebraic sets from ones developed for simpler sets, such as a box, a simplex, or the nonnegative orthant. We provide several results illustrating the approach. First, by considering Handelman’s Positivstellensatz (Psatz) over a box, we construct non-SOS Schmüdgen-type Psätze over any compact semialgebraic set, that is, a family of Psätze that follow the structure of the fundamental Schmüdgen’s Psatz but where instead of SOS polynomials, any class of polynomials containing the nonnegative constants can be used, such as SONC, DSOS/SDSOS, hyperbolic, or sums of AM/GM polynomials. Second, by considering the simplex as the simple set, we derive a sparse Psatz over general compact sets which does not rely on any structural assumptions of the set. Finally, by considering Pólya’s Psatz over the nonnegative orthant, we derive a new non-SOS Psatz over unbounded sets which satisfy some generic conditions. All these results contribute to the literature regarding the use of non-SOS polynomials and sparse NNCs to derive Psätze over compact and unbounded sets. Throughout the article, we illustrate our results with relevant examples and numerical experiments.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
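To make the central object concrete, here is a minimal sketch (not the authors' construction) of a Handelman-type nonnegativity certificate over the box [0, 1]: a polynomial that is positive on the box is written as a nonnegative combination of the products x^a(1-x)^b, so finding the combination reduces to a linear feasibility problem. The polynomial p(x) = 2x^2 - 2x + 1 and the degree bound are illustrative choices.

```python
# Minimal sketch: Handelman-type nonnegativity certificate on the box [0, 1].
# We try to write p(x) = 2x^2 - 2x + 1 as a nonnegative combination of the
# degree-<=2 products x^a * (1-x)^b, which is a linear feasibility problem.
import numpy as np
from scipy.optimize import linprog

# Columns: coefficients of 1, x, 1-x, x^2, x(1-x), (1-x)^2 in the monomial basis {1, x, x^2}.
basis = np.array([
    [1, 0, 0],   # 1
    [0, 1, 0],   # x
    [1, -1, 0],  # 1 - x
    [0, 0, 1],   # x^2
    [0, 1, -1],  # x(1 - x)
    [1, -2, 1],  # (1 - x)^2
]).T

p = np.array([1, -2, 2])  # p(x) = 1 - 2x + 2x^2, positive on [0, 1]

# Feasibility LP: find lambda >= 0 with basis @ lambda = p.
res = linprog(c=np.zeros(basis.shape[1]), A_eq=basis, b_eq=p,
              bounds=[(0, None)] * basis.shape[1], method="highs")
print("certificate found:", res.success)
print("coefficients:", np.round(res.x, 6))  # one valid certificate: p = x^2 + (1-x)^2
```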
First-Order Penalty Methods for Bilevel Optimization
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-06-05 DOI: 10.1137/23m1566753
Zhaosong Lu, Sanyou Mei
{"title":"First-Order Penalty Methods for Bilevel Optimization","authors":"Zhaosong Lu, Sanyou Mei","doi":"10.1137/23m1566753","DOIUrl":"https://doi.org/10.1137/23m1566753","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1937-1969, June 2024. <br/> Abstract. In this paper, we study a class of unconstrained and constrained bilevel optimization problems in which the lower level is a possibly nonsmooth convex optimization problem, while the upper level is a possibly nonconvex optimization problem. We introduce a notion of [math]-KKT solution for them and show that an [math]-KKT solution leads to an [math]- or [math]-hypergradient–based stationary point under suitable assumptions. We also propose first-order penalty methods for finding an [math]-KKT solution of them, whose subproblems turn out to be a structured minimax problem and can be suitably solved by a first-order method recently developed by the authors. Under suitable assumptions, an operation complexity of [math] and [math], measured by their fundamental operations, is established for the proposed penalty methods for finding an [math]-KKT solution of the unconstrained and constrained bilevel optimization problems, respectively. Preliminary numerical results are presented to illustrate the performance of our proposed methods. To the best of our knowledge, this paper is the first work to demonstrate that bilevel optimization can be approximately solved as minimax optimization, and moreover, it provides the first implementable method with complexity guarantees for such sophisticated bilevel optimization.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"23 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
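As a rough illustration of how a bilevel problem can be attacked with a first-order penalty scheme (a generic value-function-penalty sketch, not the authors' algorithm, which handles nonsmooth lower levels and comes with complexity guarantees): for a toy problem whose lower-level minimum is available in closed form, one can run gradient descent on the upper-level objective plus rho times the lower-level optimality gap, increasing rho across rounds. All problem data below are illustrative.

```python
# Generic sketch of a value-function penalty scheme for a toy bilevel problem:
#   min_{x,y} f(x, y) = x^2 + (y - 1)^2   s.t.  y in argmin_v g(x, v),
# with g(x, v) = 0.5 * (v - x)^2, so the lower-level optimal value is 0 at v = x.
import numpy as np

def f(x, y):
    return x ** 2 + (y - 1.0) ** 2

def g(x, v):
    return 0.5 * (v - x) ** 2

def penalized_grad(x, y, rho):
    # Gradient of f(x, y) + rho * (g(x, y) - min_v g(x, v)); here min_v g(x, v) = 0.
    dx = 2.0 * x + rho * (x - y)
    dy = 2.0 * (y - 1.0) + rho * (y - x)
    return np.array([dx, dy])

z = np.array([0.0, 0.0])
for rho in [1.0, 10.0, 100.0]:          # increase the penalty across rounds
    step = 1.0 / (2.0 + rho)            # stepsize matched to the penalized objective's smoothness
    for _ in range(2000):               # plain gradient descent on the penalized objective
        z = z - step * penalized_grad(z[0], z[1], rho)
    print(f"rho={rho:6.1f}  x={z[0]:.4f}  y={z[1]:.4f}  "
          f"f={f(z[0], z[1]):.4f}  lower-level gap={g(z[0], z[1]):.2e}")
# As rho grows, (x, y) approaches the bilevel solution x = y = 0.5.
```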
Time Consistency for Multistage Stochastic Optimization Problems under Constraints in Expectation
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-06-04 DOI: 10.1137/22m151830x
Pierre Carpentier, Jean-Philippe Chancelier, Michel De Lara
{"title":"Time Consistency for Multistage Stochastic Optimization Problems under Constraints in Expectation","authors":"Pierre Carpentier, Jean-Philippe Chancelier, Michel De Lara","doi":"10.1137/22m151830x","DOIUrl":"https://doi.org/10.1137/22m151830x","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1909-1936, June 2024. <br/> Abstract. We consider sequences—indexed by time (discrete stages)—of parametric families of multistage stochastic optimization problems; thus, at each time, the optimization problems in a family are parameterized by some quantities (initial states, constraint levels, and so on). In this framework, we introduce an adapted notion of parametric time-consistent optimal solutions: They are solutions that remain optimal after truncation of the past and that are optimal for any values of the parameters. We link this time consistency notion with the concept of state variable in Markov decision processes for a class of multistage stochastic optimization problems incorporating state constraints at the final time, formulated in expectation. For such problems, when the primitive noise random process is stagewise independent and takes a finite number of values, we show that time-consistent solutions can be obtained by considering a finite-dimensional state variable. We illustrate our results on a simple dam management problem.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"7 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
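For context, below is a minimal sketch of backward dynamic programming with an explicit state variable on a toy dam-management model with stagewise independent, finitely valued noise. The model, costs, and grid are illustrative; the paper's actual contribution, augmenting the state so that time consistency survives expectation constraints, is not reproduced here.

```python
# Minimal sketch: backward dynamic programming for a toy dam-management problem
# with stagewise independent noise taking finitely many values.
# State: water stock s on a discrete grid; control: release u; noise: inflow w.
import numpy as np

T = 4                                   # number of stages
stocks = np.arange(0, 11)               # admissible stock levels 0..10
controls = np.arange(0, 5)              # admissible releases 0..4
inflows, probs = np.array([0, 1, 2]), np.array([0.3, 0.4, 0.3])
price = np.array([3.0, 1.0, 2.0, 4.0])  # stage-dependent value of released water

V = np.zeros((T + 1, stocks.size))      # terminal value is zero
policy = np.zeros((T, stocks.size), dtype=int)

for t in range(T - 1, -1, -1):
    for i, s in enumerate(stocks):
        best, best_u = -np.inf, 0
        for u in controls:
            if u > s:                   # cannot release more than the current stock
                continue
            # Expected value over the finitely many inflow realizations.
            nxt = np.clip(s - u + inflows, 0, stocks[-1])
            val = price[t] * u + probs @ V[t + 1, nxt]
            if val > best:
                best, best_u = val, u
        V[t, i] = best
        policy[t, i] = best_u

print("value of starting stock 5:", V[0, 5])
print("first-stage policy by stock:", policy[0])
```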
Derivative-Free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-05-30 DOI: 10.1137/23m1568168
Zi Xu, Ziqi Wang, Jingjing Shen, Yuhong Dai
{"title":"Derivative-Free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems","authors":"Zi Xu, Ziqi Wang, Jingjing Shen, Yuhong Dai","doi":"10.1137/23m1568168","DOIUrl":"https://doi.org/10.1137/23m1568168","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1879-1908, June 2024. <br/> Abstract. In this paper, we study zeroth-order algorithms for nonconvex-concave minimax problems, which have attracted much attention in machine learning, signal processing, and many other fields in recent years. We propose a zeroth-order alternating randomized gradient projection (ZO-AGP) algorithm for smooth nonconvex-concave minimax problems; its iteration complexity to obtain an [math]-stationary point is bounded by [math], and the number of function value estimates is bounded by [math] per iteration. Moreover, we propose a zeroth-order block alternating randomized proximal gradient algorithm (ZO-BAPG) for solving blockwise nonsmooth nonconvex-concave minimax optimization problems; its iteration complexity to obtain an [math]-stationary point is bounded by [math], and the number of function value estimates per iteration is bounded by [math]. To the best of our knowledge, this is the first time zeroth-order algorithms with iteration complexity guarantee are developed for solving both general smooth and blockwise nonsmooth nonconvex-concave minimax problems. Numerical results on the data poisoning attack problem and the distributed nonconvex sparse principal component analysis problem validate the efficiency of the proposed algorithms.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"64 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141168332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
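A generic sketch of the two ingredients such methods combine, namely a randomized two-point zeroth-order gradient estimator and alternating projected descent/ascent steps, on a toy nonconvex-concave problem. This is not the paper's ZO-AGP or ZO-BAPG; the objective, stepsizes, and smoothing radius are illustrative.

```python
# Generic sketch: zeroth-order alternating projected descent/ascent on a toy
# nonconvex-concave problem  min_x max_{y in [-1,1]} f(x, y),
# using a two-point randomized gradient estimator (no derivatives of f are evaluated).
import numpy as np

rng = np.random.default_rng(0)

def f(x, y):
    # Nonconvex in x (cosine term), linear (hence concave) in y.
    return 0.5 * x ** 2 + np.cos(3.0 * x) + (2.0 * x - 1.0) * y

def zo_grad(fun, z, mu=1e-4, samples=10):
    # Two-point estimator: average of (fun(z + mu*u) - fun(z)) / mu * u over random directions u.
    g = 0.0
    for _ in range(samples):
        u = rng.standard_normal()
        g += (fun(z + mu * u) - fun(z)) / mu * u
    return g / samples

x, y = 2.0, 0.0
for k in range(3000):
    gx = zo_grad(lambda t: f(t, y), x)          # estimate df/dx at the current y
    gy = zo_grad(lambda t: f(x, t), y)          # estimate df/dy at the current x
    x = x - 0.01 * gx                           # descent step in x
    y = np.clip(y + 0.05 * gy, -1.0, 1.0)       # projected ascent step in y

print(f"x = {x:.4f}, y = {y:.4f}, max_y f(x, y) = {max(f(x, -1.0), f(x, 1.0)):.4f}")
```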
On Difference-of-SOS and Difference-of-Convex-SOS Decompositions for Polynomials
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-05-24 DOI: 10.1137/22m1495524
Yi-Shuai Niu, Hoai An Le Thi, Dinh Tao Pham
{"title":"On Difference-of-SOS and Difference-of-Convex-SOS Decompositions for Polynomials","authors":"Yi-Shuai Niu, Hoai An Le Thi, Dinh Tao Pham","doi":"10.1137/22m1495524","DOIUrl":"https://doi.org/10.1137/22m1495524","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1852-1878, June 2024. <br/> Abstract. In this article, we are interested in developing polynomial decomposition techniques based on sums-of-squares (SOS), namely the difference-of-sums-of-squares (D-SOS) and the difference-of-convex-sums-of-squares (DC-SOS). In particular, the DC-SOS decomposition is very useful for difference-of-convex (DC) programming formulation of polynomial optimization problems. First, we introduce the cone of convex-sums-of-squares (CSOS) polynomials and discuss its relationship to the sums-of-squares (SOS) polynomials, the non-negative polynomials, and the SOS-convex polynomials. Then we propose the set of D-SOS and DC-SOS polynomials and prove that any polynomial can be formulated as D-SOS and DC-SOS. The problem of finding D-SOS and DC-SOS decompositions can be formulated as a semi-definite program and solved for any desired precision in polynomial time using interior point methods. Some algebraic properties of CSOS, D-SOS, and DC-SOS are established. Second, we focus on establishing several practical algorithms for exact D-SOS and DC-SOS polynomial decompositions without solving any SDP. The numerical performance of the proposed D-SOS and DC-SOS decomposition algorithms and their parallel versions, tested on a dataset of 1750 randomly generated polynomials, is reported.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"54 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141147262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
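A minimal sketch of a D-SOS decomposition computed as a semidefinite program, assuming cvxpy with an SDP-capable solver is available: the univariate cubic p(x) = x^3 is written as a difference of two SOS polynomials in the monomial basis [1, x, x^2]. This only illustrates the D-SOS idea; the paper's SDP-free decomposition algorithms are not reproduced here.

```python
# Minimal sketch: write p(x) = x^3 as a difference of two SOS polynomials,
# p = z^T Q1 z - z^T Q2 z with z = [1, x, x^2] and Q1, Q2 positive semidefinite.
# Assumes cvxpy with an SDP-capable solver is installed.
import cvxpy as cp
import numpy as np

Q1 = cp.Variable((3, 3), PSD=True)
Q2 = cp.Variable((3, 3), PSD=True)
D = Q1 - Q2

# Coefficients of z^T D z collected by powers of x: constant, x, x^2, x^3, x^4.
coeffs = [
    D[0, 0],                       # x^0
    2 * D[0, 1],                   # x^1
    2 * D[0, 2] + D[1, 1],         # x^2
    2 * D[1, 2],                   # x^3
    D[2, 2],                       # x^4
]
target = [0, 0, 0, 1, 0]           # p(x) = x^3

constraints = [c == t for c, t in zip(coeffs, target)]
prob = cp.Problem(cp.Minimize(cp.trace(Q1) + cp.trace(Q2)), constraints)
prob.solve()

print("status:", prob.status)
print("Q1 =\n", np.round(Q1.value, 4))
print("Q2 =\n", np.round(Q2.value, 4))
# One valid decomposition: x^3 = 0.5*(x^2 + x/2)^2 - 0.5*(x^2 - x/2)^2.
```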
Constraint Qualifications and Strong Global Convergence Properties of an Augmented Lagrangian Method on Riemannian Manifolds
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-05-20 DOI: 10.1137/23m1582382
Roberto Andreani, Kelvin R. Couto, Orizon P. Ferreira, Gabriel Haeser
{"title":"Constraint Qualifications and Strong Global Convergence Properties of an Augmented Lagrangian Method on Riemannian Manifolds","authors":"Roberto Andreani, Kelvin R. Couto, Orizon P. Ferreira, Gabriel Haeser","doi":"10.1137/23m1582382","DOIUrl":"https://doi.org/10.1137/23m1582382","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1799-1825, June 2024. <br/> Abstract. In the past several years, augmented Lagrangian methods have been successfully applied to several classes of nonconvex optimization problems, inspiring new developments in both theory and practice. In this paper we bring most of these recent developments from nonlinear programming to the context of optimization on Riemannian manifolds, including equality and inequality constraints. Many research have been conducted on optimization problems on manifolds, however only recently the treatment of the constrained case has been considered. In this paper we propose to bridge this gap with respect to the most recent developments in nonlinear programming. In particular, we formulate several well-known constraint qualifications from the Euclidean context which are sufficient for guaranteeing global convergence of augmented Lagrangian methods, without requiring boundedness of the set of Lagrange multipliers. Convergence of the dual sequence can also be assured under a weak constraint qualification. The theory presented is based on so-called sequential optimality conditions, which is a powerful tool used in this context. The paper can also be read with the Euclidean context in mind, serving as a review of the most relevant constraint qualifications and global convergence theory of state-of-the-art augmented Lagrangian methods for nonlinear programming.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"35 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141147261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
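For intuition, a minimal sketch of an augmented Lagrangian loop on the unit sphere, a simple Riemannian manifold, with a single equality constraint; the inner subproblems are solved by Riemannian gradient descent with tangent-space projection and a normalization retraction. The objective, constraint, and parameter schedule are illustrative, and this is not the paper's algorithm.

```python
# Minimal sketch: augmented Lagrangian method on the unit sphere (a Riemannian
# manifold) for  min c^T x  s.t.  h(x) = x[0] - x[1] = 0,  x on the sphere.
# Inner subproblems are solved by Riemannian gradient descent with retraction.
import numpy as np

c = np.array([1.0, -2.0, 0.5])

def h(x):
    return x[0] - x[1]

def aug_lag_grad(x, lam, rho):
    # Euclidean gradient of c^T x + lam*h(x) + 0.5*rho*h(x)^2.
    g = c.copy()
    g[0] += lam + rho * h(x)
    g[1] -= lam + rho * h(x)
    return g

x = np.array([1.0, 0.0, 0.0])
lam, rho = 0.0, 10.0
for outer in range(30):
    for _ in range(500):                       # inner Riemannian gradient descent
        g = aug_lag_grad(x, lam, rho)
        rg = g - np.dot(g, x) * x              # project onto the tangent space at x
        x = x - 0.01 * rg
        x /= np.linalg.norm(x)                 # retraction: renormalize onto the sphere
    lam += rho * h(x)                          # multiplier update
print("x =", np.round(x, 4), " h(x) =", f"{h(x):.2e}", " c^T x =", f"{c @ x:.4f}")
```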
Pragmatic Distributionally Robust Optimization for Simple Integer Recourse Models
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-05-14 DOI: 10.1137/22m1523509
E. Ruben van Beesten, Ward Romeijnders, David P. Morton
{"title":"Pragmatic Distributionally Robust Optimization for Simple Integer Recourse Models","authors":"E. Ruben van Beesten, Ward Romeijnders, David P. Morton","doi":"10.1137/22m1523509","DOIUrl":"https://doi.org/10.1137/22m1523509","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1755-1783, June 2024. <br/> Abstract. Inspired by its success for their continuous counterparts, the standard approach to deal with mixed-integer recourse (MIR) models under distributional uncertainty is to use distributionally robust optimization (DRO). We argue, however, that this modeling choice is not always justified since DRO techniques are generally computationally challenging when integer decision variables are involved. That is why we propose an alternative approach for dealing with distributional uncertainty for the special case of simple integer recourse (SIR) models, which is aimed at obtaining models with improved computational tractability. We show that such models can be obtained by pragmatically selecting the uncertainty set. Here, we consider uncertainty sets based on the Wasserstein distance and also on generalized moment conditions. We compare our approach with standard DRO both numerically and theoretically. An important side result of our analysis is the derivation of performance guarantees for convex approximations of SIR models. In contrast to the literature, these error bounds are not only valid for a continuous distribution but hold for any distribution.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"43 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141061877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
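To see why simple integer recourse is harder than continuous recourse, the sketch below evaluates the expected SIR second-stage cost E[q · max(⌈ω − x⌉, 0)] on an empirical sample and compares it with the continuous-recourse cost E[q · (ω − x)⁺], which is a convex lower bound. The data are illustrative; the paper's DRO construction and error bounds are not reproduced.

```python
# Sketch: expected simple integer recourse cost vs. its continuous (LP) relaxation.
# Second stage: v(x, w) = min { q*y : y >= w - x, y integer, y >= 0 } = q * max(ceil(w - x), 0).
import numpy as np

rng = np.random.default_rng(1)
q = 2.0
omega = rng.normal(loc=5.0, scale=1.5, size=200)     # empirical demand sample (illustrative)

def sir_cost(x):
    return q * np.mean(np.maximum(np.ceil(omega - x), 0.0))

def lp_cost(x):
    return q * np.mean(np.maximum(omega - x, 0.0))   # convex lower bound

for x in [3.0, 4.0, 4.5, 5.0, 6.0]:
    print(f"x = {x:4.1f}   integer recourse = {sir_cost(x):6.3f}   "
          f"continuous relaxation = {lp_cost(x):6.3f}")
# Per scenario the integer recourse cost is piecewise constant in x (hence the sample
# average is nonconvex), while the relaxation is convex in x.
```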
Clarke’s Tangent Cones, Subgradients, Optimality Conditions, and the Lipschitzness at Infinity
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-05-08 DOI: 10.1137/23m1545367
Minh Tùng Nguyễn, Tiến-Sơn Phạm
{"title":"Clarke’s Tangent Cones, Subgradients, Optimality Conditions, and the Lipschitzness at Infinity","authors":"Minh Tùng Nguyễn, Tiến-Sơn Phạm","doi":"10.1137/23m1545367","DOIUrl":"https://doi.org/10.1137/23m1545367","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1732-1754, June 2024. <br/> Abstract. We first study Clarke’s tangent cones at infinity to unbounded subsets of [math]. We prove that these cones are closed convex and show a characterization of their interiors. We then study subgradients at infinity for extended real value functions on [math] and derive necessary optimality conditions at infinity for optimization problems. We also give a number of rules for the computing of subgradients at infinity and provide some characterizations of the Lipschitz continuity at infinity for lower semicontinuous functions.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"59 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140929986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
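For readers new to the objects involved, the classical (finite-point) Clarke tangent cone is recalled below; the paper's cones "at infinity" are the analogous objects for unbounded sets, and their precise definition is given in the article rather than reproduced here.

```latex
% Classical Clarke tangent cone of a closed set $C \subseteq \mathbb{R}^n$ at $\bar{x} \in C$:
% $v$ is a Clarke tangent direction if every sequence in $C$ converging to $\bar{x}$, paired
% with any positive scalars tending to zero, can be followed in a direction converging to $v$.
\[
T_C(\bar{x}) \;=\; \Bigl\{ v \in \mathbb{R}^n :\ \forall\, x_k \xrightarrow{C} \bar{x},\ \forall\, t_k \downarrow 0,\
\exists\, v_k \to v \ \text{such that}\ x_k + t_k v_k \in C \ \text{for all } k \Bigr\}.
\]
```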
Occupation Measure Relaxations in Variational Problems: The Role of Convexity
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-05-07 DOI: 10.1137/23m1557088
Didier Henrion, Milan Korda, Martin Kruzik, Rodolfo Rios-Zertuche
{"title":"Occupation Measure Relaxations in Variational Problems: The Role of Convexity","authors":"Didier Henrion, Milan Korda, Martin Kruzik, Rodolfo Rios-Zertuche","doi":"10.1137/23m1557088","DOIUrl":"https://doi.org/10.1137/23m1557088","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1708-1731, June 2024. <br/> Abstract. This work addresses the occupation measure relaxation of calculus of variations problems, which is an infinite-dimensional linear programming reformulation amenable to numerical approximation by a hierarchy of semidefinite optimization problems. We address the problem of equivalence of this relaxation to the original problem. Our main result provides sufficient conditions for this equivalence. These conditions, revolving around the convexity of the data, are simple and apply in very general settings that may be of arbitrary dimensions and may include pointwise and integral constraints, thereby considerably strengthening the existing results. Our conditions are also extended to optimal control problems. In addition, we demonstrate how these results can be applied in nonconvex settings, showing that the occupation measure relaxation is at least as strong as the convexification using the convex envelope; in doing so, we prove that a certain weakening of the occupation measure relaxation is equivalent to the convex envelope. This opens the way to application of the occupation measure relaxation in situations where the convex envelope relaxation is known to be equivalent to the original problem, which includes problems in magnetism and elasticity.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"81 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140887343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
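For orientation, the generic shape of the occupation-measure relaxation of a fixed-endpoint calculus of variations problem is sketched below as an infinite-dimensional linear program; this is the standard weak formulation the abstract refers to, stated informally and without the paper's precise assumptions.

```latex
% Generic occupation-measure relaxation of a fixed-endpoint problem
%   minimize  \int_0^T L(t, x(t), \dot{x}(t)) \, dt,  with  x(0) = x_0,  x(T) = x_T,
% as a linear program over a nonnegative measure \mu on [0,T] \times X \times U
% (the occupation measure of the graph (t, x(t), \dot{x}(t))):
\[
\min_{\mu \ge 0}\ \int L(t,x,u)\, d\mu(t,x,u)
\quad \text{s.t.} \quad
\int \Bigl( \partial_t v(t,x) + \nabla_x v(t,x) \cdot u \Bigr)\, d\mu(t,x,u)
\;=\; v(T, x_T) - v(0, x_0)
\]
\[
\text{for all test functions } v \in C^1([0,T] \times X).
\]
```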
Dual Descent Augmented Lagrangian Method and Alternating Direction Method of Multipliers
IF 3.1 · Tier 1 · Mathematics
SIAM Journal on Optimization Pub Date : 2024-05-07 DOI: 10.1137/21m1449099
Kaizhao Sun, Xu Andy Sun
{"title":"Dual Descent Augmented Lagrangian Method and Alternating Direction Method of Multipliers","authors":"Kaizhao Sun, Xu Andy Sun","doi":"10.1137/21m1449099","DOIUrl":"https://doi.org/10.1137/21m1449099","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1679-1707, June 2024. <br/> Abstract. Classical primal-dual algorithms attempt to solve [math] by alternately minimizing over the primal variable [math] through primal descent and maximizing the dual variable [math] through dual ascent. However, when [math] is highly nonconvex with complex constraints in [math], the minimization over [math] may not achieve global optimality and, hence, the dual ascent step loses its valid intuition. This observation motivates us to propose a new class of primal-dual algorithms for nonconvex constrained optimization with the key feature to reverse dual ascent to a conceptually new dual descent, in a sense, elevating the dual variable to the same status as the primal variable. Surprisingly, this new dual scheme achieves some best iteration complexities for solving nonconvex optimization problems. In particular, when the dual descent step is scaled by a fractional constant, we name it scaled dual descent (SDD), otherwise, unscaled dual descent (UDD). For nonconvex multiblock optimization with nonlinear equality constraints, we propose SDD-alternating direction method of multipliers (SDD-ADMM) and show that it finds an [math]-stationary solution in [math] iterations. The complexity is further improved to [math] and [math] under proper conditions. We also propose UDD-augmented Lagrangian method (UDD-ALM), combining UDD with ALM, for weakly convex minimization over affine constraints. We show that UDD-ALM finds an [math]-stationary solution in [math] iterations. These complexity bounds for both algorithms either achieve or improve the best-known results in the ADMM and ALM literature. Moreover, SDD-ADMM addresses a long-standing limitation of existing ADMM frameworks.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":"63 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
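For contrast with the paper's dual descent idea, here is the classical augmented Lagrangian loop with the usual dual ascent update λ ← λ + ρ h(x) on a toy equality-constrained problem; the SDD and UDD schemes studied in the paper modify this dual update and are not reproduced here. Problem data are illustrative.

```python
# Baseline for comparison: classical augmented Lagrangian method (dual *ascent*)
# for  min 0.5*||x||^2 + cos(x[0])  s.t.  h(x) = x[0] + x[1] - 1 = 0  (a toy problem).
import numpy as np

def grad_f(x):
    # Gradient of 0.5*||x||^2 + cos(x[0]).
    return x + np.array([-np.sin(x[0]), 0.0])

def h(x):
    return x[0] + x[1] - 1.0

x = np.zeros(2)
lam, rho = 0.0, 5.0
for outer in range(30):
    for _ in range(500):                         # inner loop: gradient descent on the AL
        g = grad_f(x) + (lam + rho * h(x)) * np.array([1.0, 1.0])
        x = x - 0.05 * g
    lam = lam + rho * h(x)                       # classical dual ascent step
print("x =", np.round(x, 4), " h(x) =", f"{h(x):.2e}", " lambda =", f"{lam:.4f}")
```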