INFORMS journal on optimization — Latest Publications

Comparison-Based Algorithms for One-Dimensional Stochastic Convex Optimization
INFORMS journal on optimization. Pub Date: 2018-09-09. DOI: 10.1287/ijoo.2019.0022
Xi Chen, Qihang Lin, Zizhuo Wang
Abstract: Stochastic optimization finds a wide range of applications in operations research and management science. However, existing stochastic optimization techniques usually require access to the random samples themselves (e.g., demands in the newsvendor problem) or to the objective values at the sampled points (e.g., the lost-sales cost), which may not be available in practice. In this paper, we consider a new setup for stochastic optimization in which the decision maker has access only to comparative information between a random sample and two chosen decision points in each iteration. We propose a comparison-based algorithm (CBA) to solve such problems in one dimension with convex objective functions. In particular, the CBA properly chooses the two points in each iteration and constructs an unbiased gradient estimate for the original problem. We show that the CBA achieves the same convergence rate as the optimal stochastic gradient methods (which observe the samples). We also consider extensions of our approach to multidimensional quadratic problems as well as problems with nonconvex objective functions. Numerical experiments show that the CBA performs well on test problems.
Citations: 1
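As a concrete illustration of the comparison-only setting, here is a minimal sketch for the newsvendor special case: the oracle reveals only whether the hidden demand sample fell below the chosen decision point, yet that indicator already yields an unbiased estimate of the derivative of the expected cost. This simplified version uses a single comparison per iteration (the paper's CBA chooses two points per iteration), and all costs and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

h, b = 1.0, 3.0                                 # holding / backorder costs (invented)
sample_demand = lambda: rng.exponential(10.0)   # hidden demand distribution

def compare(x):
    """Comparison oracle: did the hidden demand sample fall below x?"""
    return sample_demand() <= x

# g = h*1{D <= x} - b*1{D > x} is an unbiased estimate of
# F'(x) = h*P(D <= x) - b*P(D > x), the derivative of the expected cost.
x = 1.0
for t in range(1, 20001):
    g = h if compare(x) else -b
    x = max(0.0, x - g / np.sqrt(t))    # projected stochastic gradient step

# Optimal order quantity is the b/(b+h) = 0.75 quantile of demand.
print(f"estimate {x:.2f} vs true optimum {10.0 * np.log(4):.2f}")
```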
An Ensemble Learning Framework for Model Fitting and Evaluation in Inverse Linear Optimization
INFORMS journal on optimization. Pub Date: 2018-04-12. DOI: 10.1287/IJOO.2019.0045
A. Babier, T. Chan, Taewoo Lee, Rafid Mahmood, Daria Terekhov
Abstract: We develop a generalized inverse optimization framework for fitting the cost vector of a single linear optimization problem given multiple observed decisions. This setting is motivated by ensemble learning, where building consensus from base learners can yield better predictions. We unify several models in the inverse optimization literature under a single framework and derive assumption-free and exact solution methods for each one. We extend a goodness-of-fit metric previously introduced for the problem with a single observed decision to this new setting and demonstrate several important properties. Finally, we demonstrate our framework in a novel inverse-optimization-driven procedure for automated radiation therapy treatment planning. Here, the inverse optimization model leverages an ensemble of dose predictions from different machine learning models to construct a consensus treatment plan that outperforms baseline methods. The consensus plan yields better trade-offs between the competing clinical criteria used for plan evaluation.
Citations: 22
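For intuition, the sketch below fits a cost vector to several observed decisions using one classical inverse-LP formulation: minimize the total duality gap subject to dual feasibility and the normalization that the entries of c sum to one. This is a single-model baseline, not the paper's generalized ensemble framework, and the polyhedron and observed decisions are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Forward problem: min c^T x s.t. A x >= b (invented toy polyhedron).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
# Observed decisions, all lying on the face x1 + x2 = 3 (invented data).
X = np.array([[1.8, 1.2], [1.5, 1.5], [2.0, 1.0]])

m, n = A.shape
K = len(X)
# Variables z = [c, y]: minimize the total duality gap
#   sum_k (c^T x_k - b^T y)  s.t.  A^T y = c,  y >= 0,  sum(c) = 1.
obj = np.concatenate([X.sum(axis=0), -K * b])
A_eq = np.block([[-np.eye(n), A.T],
                 [np.ones((1, n)), np.zeros((1, m))]])
b_eq = np.concatenate([np.zeros(n), [1.0]])
bounds = [(None, None)] * n + [(0, None)] * m   # c free, y nonnegative
res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("fitted cost vector:", np.round(res.x[:n], 3))  # ~ [0.5, 0.5]
print("total duality gap:", round(res.fun, 6))        # ~ 0
```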
Improved Linear Programs for Discrete Barycenters
INFORMS journal on optimization. Pub Date: 2018-03-30. DOI: 10.1287/ijoo.2019.0020
S. Borgwardt, Stephan Patterson
Abstract: Discrete barycenters are the optimal solutions to mass transport problems for a set of discrete measures. They arise in applications in operations research and statistics. The best-known algorithms are based on linear programming, but these programs scale exponentially in the number of measures, making them prohibitive for practical purposes. In this paper, we improve on these algorithms. First, by using the optimality conditions to restrict the search space, we provide a better linear program that reduces the number of variables dramatically. Second, we recall a proof method from the literature that lends itself to a linear program that has not previously been considered for computations. We show that this second formulation is a viable, and arguably the go-to, approach for data in general position. Third, we combine the two programs into a single hybrid model that retains the best properties of both formulations for partially structured data. We then study the models through both theoretical analysis and computational experiments, considering both the cost of constructing the models and the cost of solving them. In doing so, we show that each of the improved linear programs becomes the best, go-to approach for data with a different underlying structure.
Citations: 16
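To make the scaling issue concrete, here is the textbook exponential-size barycenter LP for two small measures: one variable per tuple of support points (s^P variables for P measures with s points each), with one marginal constraint per support point. This is the baseline formulation whose size the paper's improved programs reduce; the measures below are invented.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Two discrete measures in the plane with equal weights (invented data).
support = [np.array([[0., 0.], [1., 0.], [0., 1.]]),
           np.array([[2., 2.], [3., 2.], [2., 3.]])]
mass = [np.array([0.5, 0.25, 0.25]), np.array([1/3, 1/3, 1/3])]
lam = np.array([0.5, 0.5])

# One LP variable per tuple of support points: s^P variables in general.
combos = list(itertools.product(*[range(len(s)) for s in support]))
cost = []
for c in combos:
    pts = np.array([support[i][j] for i, j in enumerate(c)])
    mean = lam @ pts       # optimal barycenter point for this tuple
    cost.append(sum(l * np.sum((mean - p) ** 2) for l, p in zip(lam, pts)))

# Marginal constraints: weight on tuples through point (i, j) = mass[i][j].
rows, b_eq = [], []
for i, s in enumerate(support):
    for j in range(len(s)):
        rows.append([1.0 if c[i] == j else 0.0 for c in combos])
        b_eq.append(mass[i][j])

res = linprog(cost, A_eq=rows, b_eq=b_eq, bounds=[(0, None)] * len(combos))
print("barycenter transport cost:", round(res.fun, 4))
```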
Inexact Nonconvex Newton-Type Methods
INFORMS journal on optimization. Pub Date: 2018-02-20. DOI: 10.1287/IJOO.2019.0043
Z. Yao, Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney
Abstract: This paper extends the theory and application of nonconvex Newton-type methods, namely trust region and cubic regularization, to settings in which, in addition to the solution of subproblems, the gradient and the Hessian of the objective function are approximated. Under certain conditions on such approximations, the paper establishes worst-case iteration complexities that match those of the exact counterparts. This paper is part of a broader research program on designing, analyzing, and implementing efficient second-order optimization methods for large-scale machine learning applications. The authors were based at UC Berkeley when the idea of the project was conceived; the first two authors were PhD students and the third author was a postdoc, all supervised by the fourth author.
Citations: 3
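The flavor of the inexactness can be conveyed with a short sketch: a trust-region loop in which the gradient, the Hessian, and even the acceptance test each use independent subsamples of a finite sum, and the subproblem is solved only crudely (here, just the Cauchy step along the negative sampled gradient). This is an illustration on invented data with a convex logistic loss, not the paper's methods, whose theory targets nonconvex objectives and also covers cubic regularization.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 2000, 5
X = rng.normal(size=(N, d))
y = np.sign(X @ rng.normal(size=d) + 0.3 * rng.normal(size=N))

def f(w, idx):          # subsampled logistic loss
    z = y[idx] * (X[idx] @ w)
    return np.mean(np.log1p(np.exp(-z)))

def grad(w, idx):
    z = y[idx] * (X[idx] @ w)
    return X[idx].T @ (-y[idx] / (1 + np.exp(z))) / len(idx)

def hess(w, idx):
    p = 1 / (1 + np.exp(-X[idx] @ w))
    return (X[idx] * (p * (1 - p))[:, None]).T @ X[idx] / len(idx)

w, radius = np.zeros(d), 1.0
for k in range(100):
    Sg = rng.choice(N, 200, replace=False)   # gradient subsample
    SH = rng.choice(N, 100, replace=False)   # coarser Hessian subsample
    g, H = grad(w, Sg), hess(w, SH)
    # Cauchy step: minimize the quadratic model along -g within the radius.
    gHg = g @ H @ g
    alpha = min(radius / np.linalg.norm(g),
                g @ g / gHg if gHg > 0 else np.inf)
    p = -alpha * g
    pred = -(g @ p + 0.5 * p @ H @ p)        # predicted model decrease
    Sf = rng.choice(N, 400, replace=False)   # subsampled acceptance test
    rho = (f(w, Sf) - f(w + p, Sf)) / max(pred, 1e-12)
    if rho > 0.1:
        w, radius = w + p, min(2 * radius, 10.0)
    else:
        radius *= 0.5
print("final full-sample loss:", f(w, np.arange(N)))
```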
A Stochastic Trust Region Algorithm Based on Careful Step Normalization
INFORMS journal on optimization. Pub Date: 2017-12-29. DOI: 10.1287/IJOO.2018.0010
Frank E. Curtis, K. Scheinberg, R. Shi
Abstract: An algorithm is proposed for solving stochastic and finite-sum minimization problems. Based on a trust region methodology, the algorithm employs normalized steps, at least as long as the norms of t...
Citations: 32
“Relative Continuity” for Non-Lipschitz Nonsmooth Convex Optimization Using Stochastic (or Deterministic) Mirror Descent
INFORMS journal on optimization. Pub Date: 2017-10-12. DOI: 10.1287/IJOO.2018.0008
Haihao Lu
Abstract: The usual approach to developing and analyzing first-order methods for nonsmooth (stochastic or deterministic) convex optimization assumes that the objective function is uniformly Lipschitz continuous with parameter $M_f$. However, in many settings the nondifferentiable convex function $f(\cdot)$ is not uniformly Lipschitz continuous; examples include (i) the classical support vector machine (SVM) problem, (ii) the problem of minimizing the maximum of convex quadratic functions, and even (iii) the univariate setting with $f(x) := \max\{0, x\} + x^2$. Herein we develop a notion of "relative continuity" that is determined relative to a user-specified "reference function" $h(\cdot)$ (which should be computationally tractable for algorithms), and we show that many nondifferentiable convex functions are relatively continuous with respect to a correspondingly fairly simple reference function $h(\cdot)$. We similarly develop a notion of "relative stochastic continuity" for the stochastic setting. We analyze two standard algorithms, the (deterministic) mirror descent algorithm and the stochastic mirror descent algorithm, for solving optimization problems in these two new settings, and we develop, for the first time, computational guarantees for instances where the objective function is not uniformly Lipschitz continuous. This paper is a companion, for nondifferentiable convex optimization, to the recent paper by Lu, Freund, and Nesterov, which developed similar results for differentiable convex optimization.
Citations: 56
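The univariate example from the abstract, f(x) = max{0, x} + x^2, makes a compact demonstration: its subgradients grow linearly, so f is not uniformly Lipschitz, yet mirror descent still works if the reference function's gradient grows at least as fast. The sketch below uses the hypothetical reference function h(x) = x^2/2 + |x|^3/3, chosen only because its mirror step inverts in closed form; the paper derives suitable reference functions and step sizes for each problem class.

```python
import numpy as np

# Objective from the abstract: f(x) = max(0, x) + x^2; its subgradients
# grow linearly in |x|, so f is not uniformly Lipschitz continuous.
f = lambda x: max(0.0, x) + x * x
subgrad = lambda x: (1.0 if x > 0 else 0.0) + 2 * x   # one valid subgradient

# Hypothetical reference function h(x) = x^2/2 + |x|^3/3, so that
# h'(x) = x * (1 + |x|) dominates the growth of the subgradients.
# Mirror descent step: solve h'(x_new) = h'(x) - t * g in closed form.
def mirror_step(x, g, t):
    v = x * (1 + abs(x)) - t * g
    return np.sign(v) * (np.sqrt(1 + 4 * abs(v)) - 1) / 2

x = 5.0
for k in range(1, 201):
    x = mirror_step(x, subgrad(x), t=1.0 / np.sqrt(k))
print(f"x = {x:.4f}, f(x) = {f(x):.4f}")   # minimizer is x* = 0, f* = 0
```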
Separable Convex Optimization with Nested Lower and Upper Constraints
INFORMS journal on optimization. Pub Date: 2017-03-04. DOI: 10.1287/IJOO.2018.0004
Thibaut Vidal, Daniel Gribel, Patrick Jaillet
Abstract: We study a convex resource allocation problem in which lower and upper bounds are imposed on partial sums of allocations. This model is linked to a large range of applications, including production...
Citations: 12
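The problem shape is easy to state with an off-the-shelf modeling tool: separable convex costs with lower and upper bounds on every prefix (nested) sum. The sketch below, with invented data, is meant only to show the structure; the paper's contribution is a specialized algorithm that solves this problem far faster than a generic solver.

```python
import cvxpy as cp
import numpy as np

# Separable quadratic costs with nested (prefix-sum) bounds; invented data.
n = 6
target = np.array([3., 1., 4., 2., 5., 3.])
lower = np.array([1., 2., 4., 5., 7., 9.])     # lower bound on each prefix sum
upper = np.array([4., 6., 9., 11., 14., 15.])  # upper bound on each prefix sum

x = cp.Variable(n, nonneg=True)
prefix = cp.cumsum(x)                # x1, x1+x2, ..., x1+...+xn
problem = cp.Problem(cp.Minimize(cp.sum_squares(x - target)),
                     [prefix >= lower, prefix <= upper])
problem.solve()
print(np.round(x.value, 3), round(problem.value, 4))
```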
Convergence Rate Analysis of a Stochastic Trust-Region Method via Supermartingales
INFORMS journal on optimization. Pub Date: 2016-09-23. DOI: 10.1287/IJOO.2019.0016
J. Blanchet, C. Cartis, M. Menickelly, K. Scheinberg
Abstract: We propose a novel framework for analyzing convergence rates of stochastic optimization algorithms with adaptive step sizes. The framework is based on analyzing properties of an underlying generic stochastic process, in particular by deriving a bound on its expected stopping time. We use this framework to bound the expected global convergence rate of a stochastic variant of a traditional trust-region method, introduced by Chen, Menickelly, and Scheinberg (2014). Whereas traditional trust-region methods rely on exact computations of the gradient, the Hessian, and the values of the objective function, this method assumes that these values are available only up to some dynamically adjusted accuracy. Moreover, this accuracy is assumed to hold only with some sufficiently large, but fixed, probability, without any additional restrictions on the variance of the errors. This setting applies, for example, to standard stochastic optimization and machine learning formulations. Improving upon the analysis of Chen, Menickelly, and Scheinberg (2014), we show that the stochastic process defined by the algorithm satisfies the assumptions of our proposed general framework, with the stopping time defined as reaching accuracy $\|\nabla f(x)\| \leq \epsilon$. The resulting bound on this stopping time is $O(\epsilon^{-2})$, under the assumption of a sufficiently accurate stochastic gradient, and is the first global complexity bound for a stochastic trust-region method. Finally, we apply the same framework to derive a second-order complexity bound under additional assumptions.
Citations: 91
The Power and Limits of Predictive Approaches to Observational Data-Driven Optimization: The Case of Pricing
INFORMS journal on optimization. Pub Date: 2016-05-08. DOI: 10.1287/ijoo.2022.0077
D. Bertsimas, Nathan Kallus
Abstract: We consider data-driven decision making in which data on historical decisions and outcomes are endogenous and lack the features necessary for causal identification (e.g., unconfoundedness or instruments), focusing on data-driven pricing. We study approaches that, for lack of a better alternative, optimize the prediction of the objective (revenue) given the decision (price). Whereas data-driven decision making is transforming modern operations, most large-scale data are observational, with which confounding is inevitable and the strong assumptions necessary for causal identification are dubious. Nonetheless, the inevitable statistical biases may be irrelevant if their impact on downstream optimization performance is limited. This paper seeks to formalize and empirically study this question. First, to study the power of decision making with confounded data, we leverage a special optimization structure to develop bounds on the suboptimality of pricing using the (noncausal) prediction of historical demand given price. Second, to study the limits of decision making with confounded data, we develop a new hypothesis test for optimality with respect to the true average causal effect on the objective and apply it to interest-rate-setting data to assess whether performance can be distinguished from optimal to statistical significance in practice. Our empirical study demonstrates that predictive approaches can generally be powerful in practice, with some limitations.
Citations: 18
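A toy simulation of the paper's setting: historical prices respond to an unobserved demand shifter, so a purely predictive fit of revenue against price is confounded, and optimizing the fitted curve can land away from the causal optimum. All model parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented confounded pricing data: an unobserved demand shifter u drives
# BOTH the historical price and the demand, biasing a naive regression.
n = 5000
u = rng.normal(size=n)                                 # confounder
price = 5 + 1.5 * u + rng.normal(scale=0.5, size=n)    # firm prices on u
demand = np.maximum(0, 20 - 2 * price + 4 * u + rng.normal(size=n))
revenue = price * demand

# Predictive approach: fit revenue as a function of price alone and
# optimize the fitted curve, with no causal adjustment for u.
coef = np.polyfit(price, revenue, 2)
grid = np.linspace(price.min(), price.max(), 200)
p_hat = grid[np.argmax(np.polyval(coef, grid))]

# Causal benchmark: E[demand | do(price=p)] = 20 - 2p, so expected revenue
# p * (20 - 2p) is maximized at p* = 5.
print(f"predictive price {p_hat:.2f} vs causal optimum 5.00")
```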