INFORMS Journal on Optimization: Latest Articles

Satisficing Models Under Uncertainty
INFORMS journal on optimization Pub Date: 2022-02-16 DOI: 10.1287/ijoo.2021.0070
P. Jaillet, S. D. Jena, T. S. Ng, Melvyn Sim
Abstract: Satisficing, as an approach to decision making under uncertainty, aims at achieving solutions that satisfy the problem's constraints as well as possible. Mathematical optimization problems related to this form of decision making include the P-model. In this paper, we propose a general framework of satisficing decision criteria and show a representation termed the S-model, of which the P-model and robust optimization models are special cases. We then focus on the linear optimization case and obtain a tractable probabilistic S-model, termed the T-model, whose objective is a lower bound of the P-model. We show that when the probability densities of the uncertainties are log-concave, the T-model can admit a tractable concave objective function. In the case of discrete probability distributions, the T-model is a linear mixed-integer optimization problem of moderate dimensions. Our computational experiments on a stochastic maximum coverage problem suggest that T-model solutions can be highly competitive compared with standard sample average approximation models.
Citations: 2
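The P-model objective mentioned in the abstract, maximizing the probability that an uncertain constraint holds, can be illustrated with a minimal Monte Carlo sketch. The distribution, constraint, and candidate solutions below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def satisfaction_prob(x, xi_samples, b):
    """Monte Carlo estimate of the P-model objective P(xi . x <= b)."""
    return float(np.mean(xi_samples @ x <= b))

# Illustrative uncertain constraint xi . x <= b with random coefficients xi
xi = rng.normal(loc=1.0, scale=0.3, size=(100_000, 2))
b = 2.0

x_tight = np.array([1.0, 1.0])  # xi . x has mean 2.0: holds about half the time
x_slack = np.array([0.8, 0.8])  # xi . x has mean 1.6: holds far more often

p_tight = satisfaction_prob(x_tight, xi, b)
p_slack = satisfaction_prob(x_slack, xi, b)
```

A satisficing decision maker would prefer x_slack here: it gives up objective value in exchange for a much higher chance of meeting the constraint.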
A Solution Approach to Distributionally Robust Joint-Chance-Constrained Assignment Problems
INFORMS journal on optimization Pub Date: 2022-02-03 DOI: 10.1287/ijoo.2021.0060
Shanshan Wang, Jinlin Li, Sanjay Mehrotra
Abstract: We study the assignment problem with chance constraints (CAP) and its distributionally robust counterpart (DR-CAP). We present a technique for estimating big-M in such a formulation that takes advantage of the ambiguity set. We consider a 0-1 bilinear knapsack set to develop valid inequalities for CAP and DR-CAP. This is generalized to the joint chance constraint problem. A probability cut framework is also developed to solve DR-CAP. A computational study on problem instances obtained from real hospital surgery data shows that the developed techniques allow us to solve certain model instances and reduce the computational time for others. The use of a Wasserstein ambiguity set in the DR-CAP model improves the out-of-sample performance of satisfying the chance constraints more significantly than is possible by increasing the sample size in the sample average approximation technique. The solution time for DR-CAP model instances is of the same order as that for solving the CAP instances. This finding is important because chance-constrained optimization models are very difficult to solve when the coefficients in the constraints are random.
Citations: 7
Learning in Sequential Bilevel Linear Programming
INFORMS journal on optimization Pub Date: 2022-01-27 DOI: 10.1287/ijoo.2021.0063
J. S. Borrero, O. Prokopyev, Denis Sauré
Abstract: We consider a framework for sequential bilevel linear programming in which a leader and a follower interact over multiple time periods. In each period, the follower observes the actions taken by the leader and reacts optimally, according to the follower's own objective function, which is initially unknown to the leader. By observing various forms of information feedback from the follower's actions, the leader is able to refine its knowledge of the follower's objective function and, hence, adjust its actions in subsequent time periods, which ought to help in maximizing the leader's cumulative benefit. We show that greedy and robust policies adapted from previous work in the max-min (symmetric) setting might fail to recover the optimal full-information solution to the problem (i.e., a solution implemented by an oracle with complete prior knowledge of the follower's objective function) in the asymmetric case. In contrast, we present a family of greedy and best-case policies that are able to recover the full-information optimal solution and also provide real-time certificates of optimality. In addition, we show that the proposed policies can be computed by solving a series of linear mixed-integer programs. We test policy performance through exhaustive numerical experiments in the context of asymmetric shortest-path interdiction, considering various forms of feedback and several benchmark policies.
Citations: 4
Optimization Under Connected Uncertainty
INFORMS journal on optimization Pub Date: 2022-01-24 DOI: 10.1287/ijoo.2021.0067
O. Nohadani, Kartikey Sharma
Abstract: Robust optimization methods have shown practical advantages in a wide range of decision-making applications under uncertainty. Recently, their efficacy has been extended to multiperiod settings. Current approaches model uncertainty either independent of the past or in an implicit fashion by budgeting the aggregate uncertainty. In many applications, however, past realizations directly influence future uncertainties. For this class of problems, we develop a modeling framework that explicitly incorporates this dependence via connected uncertainty sets, whose parameters at each period depend on previous uncertainty realizations. To find optimal here-and-now solutions, we reformulate robust and distributionally robust constraints for popular set structures and demonstrate this modeling framework numerically on broadly applicable knapsack and portfolio-optimization problems.
Citations: 2
On the Linear Convergence of Extragradient Methods for Nonconvex–Nonconcave Minimax Problems
INFORMS journal on optimization Pub Date: 2022-01-17 DOI: 10.1287/ijoo.2022.0004
Saeed Hajizadeh, Haihao Lu, Benjamin Grimmer
Abstract: Recently, minimax optimization has received renewed focus due to modern applications in machine learning, robust optimization, and reinforcement learning. The scale of these applications naturally leads to the use of first-order methods. However, the nonconvexities and nonconcavities present in these problems prevent the application of typical gradient descent/ascent, which is known to diverge even on bilinear problems. Recently, it was shown that the proximal point method (PPM) converges linearly for a family of nonconvex–nonconcave problems. In this paper, we study the convergence of a damped version of the extragradient method (EGM), which avoids potentially costly proximal computations, relying only on gradient evaluations. We show that the EGM converges linearly for smooth minimax optimization problems satisfying the same nonconvex–nonconcave condition needed by the PPM. Funding: H. Lu was supported by The University of Chicago Booth School of Business; Benjamin Grimmer was supported by the Johns Hopkins Applied Mathematics and Statistics Department.
Citations: 5
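The two behaviors contrasted in the abstract, divergence of plain gradient descent/ascent versus linear convergence of extragradient, can be seen on the textbook bilinear saddle f(x, y) = xy. This sketch uses an undamped extragradient step with an arbitrary step size; it illustrates the general mechanism, not the paper's damped variant or its nonconvex–nonconcave setting:

```python
def gda_step(x, y, eta):
    # plain gradient descent on x / ascent on y for f(x, y) = x * y
    return x - eta * y, y + eta * x

def egm_step(x, y, eta):
    # extragradient: take a look-ahead half step, then step with the
    # gradient evaluated at the look-ahead point
    xh, yh = x - eta * y, y + eta * x
    return x - eta * yh, y + eta * xh

eta = 0.5
x_g, y_g = 1.0, 1.0  # gradient descent/ascent iterate
x_e, y_e = 1.0, 1.0  # extragradient iterate
for _ in range(100):
    x_g, y_g = gda_step(x_g, y_g, eta)
    x_e, y_e = egm_step(x_e, y_e, eta)

gda_norm = (x_g**2 + y_g**2) ** 0.5  # spirals outward: diverges
egm_norm = (x_e**2 + y_e**2) ** 0.5  # contracts linearly to the saddle (0, 0)
```

On this problem each GDA step multiplies the distance to the saddle by sqrt(1 + eta^2) > 1, while each extragradient step multiplies it by sqrt((1 - eta^2)^2 + eta^2) < 1, which is exactly the linear-rate behavior the paper extends beyond the bilinear case.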
Optimal Order Batching in Warehouse Management: A Data-Driven Robust Approach
INFORMS journal on optimization Pub Date: 2022-01-07 DOI: 10.1287/ijoo.2021.0066
Vedat Bayram, Gohram Baloch, Fatma Gzara, S. Elhedhli
Abstract: Optimizing warehouse processes has a direct impact on supply chain responsiveness, timely order fulfillment, and customer satisfaction. In this work, we focus on the picking process in warehouse management and study it from a data perspective. Using historical data from an industrial partner, we introduce, model, and study the robust order batching problem (ROBP), which groups orders into batches to minimize total order processing time, accounting for uncertainty caused by system congestion and human behavior. We provide a generalizable, data-driven approach that overcomes warehouse-specific assumptions characterizing most of the work in the literature. We analyze historical data to understand the processes in the warehouse, to predict processing times, and to improve order processing. We introduce the ROBP and develop an efficient learning-based branch-and-price algorithm based on simultaneous column and row generation, embedded with alternative prediction models such as linear regression and random forest that predict the processing time of a batch. We conduct extensive computational experiments to test the performance of the proposed approach and to derive managerial insights based on real data. The data-driven prescriptive analytics tool we propose achieves savings of seven to eight minutes per order, which translates into a 14.8% increase in the daily picking operations capacity of the warehouse.
Citations: 1
Improving Sample Average Approximation Using Distributional Robustness
INFORMS journal on optimization Pub Date: 2021-12-30 DOI: 10.1287/ijoo.2021.0061
E. Anderson, A. Philpott
Abstract: Sample average approximation is a popular approach to solving stochastic optimization problems. It has been widely observed that some form of robustification of these problems often improves the out-of-sample performance of the solution estimators. In estimation problems, this improvement boils down to a trade-off between the opposing effects of bias and shrinkage. This paper aims to characterize the features of more general optimization problems that exhibit this behaviour when a distributionally robust version of the sample average approximation problem is used. The paper restricts attention to quadratic problems for which sample average approximation solutions are unbiased and shows that expected out-of-sample performance can be calculated for small amounts of robustification and depends on the type of distributionally robust model used and properties of the underlying ground-truth probability distribution of the random variables. The paper was written as part of a New Zealand-funded research project that aimed to improve stochastic optimization methods in the electric power industry. The authors of the paper have worked together in this domain for the past 25 years.
Citations: 11
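A small simulation makes the abstract's starting point concrete: the in-sample objective of a sample average approximation (SAA) is optimistically biased relative to the true out-of-sample cost, which is the gap that robustification tries to correct. The quadratic toy problem min_x E[(x - xi)^2] below is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: min_x E[(x - xi)^2] with xi ~ N(mu, sigma^2).
# The SAA minimizer is the sample mean, and the true expected cost of any
# decision x is (x - mu)^2 + sigma^2, so the true optimal value is sigma^2.
mu, sigma, n = 3.0, 2.0, 20

reps, optimism = 2000, 0.0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    x_saa = sample.mean()                       # SAA solution
    in_sample = np.mean((x_saa - sample) ** 2)  # training objective at x_saa
    true_cost = (x_saa - mu) ** 2 + sigma ** 2  # out-of-sample expected cost
    optimism += true_cost - in_sample
optimism /= reps  # averages 2 * sigma^2 / n = 0.4 across replications
```

The in-sample value understates the true cost on average; distributionally robust versions of the SAA problem shrink this optimism, and the paper characterizes when that shrinkage actually improves out-of-sample performance.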
Portfolio Optimization Under Regime Switching and Transaction Costs: Combining Neural Networks and Dynamic Programs
INFORMS journal on optimization Pub Date: 2021-10-21 DOI: 10.1287/ijoo.2021.0053
Xiaoyue Li, J. Mulvey
Abstract: The contributions of this paper are threefold. First, by combining dynamic programs and neural networks, we provide an efficient numerical method to solve a large multiperiod portfolio allocation problem under a regime-switching market and transaction costs. Second, the performance of our combined method is shown to be close to optimal in a stylized case. To our knowledge, this is the first paper to carry out such a comparison. Last, the superiority of the combined method opens up the possibility for more research on financial applications of generic methods, such as neural networks, provided that solutions to simplified subproblems are available via traditional methods. The research on combining fast starts with neural networks began about four years ago. We observed that Professor Weinan E's approach for solving systems of differential equations by neural networks had much improved performance when starting close to an optimal solution and could stall if the current iterate was far from an optimal solution. As we all know, this behavior is common with Newton-based algorithms. As a consequence, we discovered that combining a system of differential equations with a feedforward neural network could much improve overall computational performance. In this paper, we follow a similar direction for dynamic portfolio optimization within a regime-switching market with transaction costs. It investigates how to improve efficiency by combining dynamic programming with a recurrent neural network. Traditional methods face the curse of dimensionality. In contrast, the running time of our combined approach grows approximately linearly with the number of risky assets. It is inspiring to explore the possibilities of combined methods in financial management; we believe a careful linkage of existing dynamic optimization algorithms and machine learning will be an active domain going forward. Relationship of the authors: Professor John M. Mulvey is Xiaoyue Li's doctoral advisor.
Citations: 7
Augmented Lagrangian–Based First-Order Methods for Convex-Constrained Programs with Weakly Convex Objective
INFORMS journal on optimization Pub Date: 2021-10-18 DOI: 10.1287/ijoo.2021.0052
Zichong Li, Yangyang Xu
Abstract: First-order methods (FOMs) have been widely used for solving large-scale problems. A majority of existing works focus on problems without constraints or with simple constraints. Several recent works have studied FOMs for problems with complicated functional constraints. In this paper, we design a novel augmented Lagrangian (AL)–based FOM for solving problems with a nonconvex objective and convex constraint functions. The new method follows the framework of the proximal point (PP) method. On approximately solving PP subproblems, it mixes the usage of the inexact AL method (iALM) and the quadratic penalty method, whereas the latter is always fed with estimated multipliers by the iALM. The proposed method achieves the best-known complexity result to produce a near Karush–Kuhn–Tucker (KKT) point. Theoretically, the hybrid method has a lower iteration-complexity requirement than its counterpart that only uses the iALM to solve PP subproblems; numerically, it can perform significantly better than a pure-penalty-based method. Numerical experiments are conducted on nonconvex linearly constrained quadratic programs. The numerical results demonstrate the efficiency of the proposed methods over existing ones.
Citations: 19
Critical-Path-Search Logic-Based Benders Decomposition Approaches for Flexible Job Shop Scheduling
INFORMS journal on optimization Pub Date: 2021-08-02 DOI: 10.1287/ijoo.2021.0056
B. Naderi, V. Roshanaei
Abstract: We solve flexible job shop scheduling problems (F-JSSPs) to minimize makespan. First, we compare the constraint programming (CP) model with the mixed-integer programming (MIP) model for F-JSSPs. Second, we exploit the decomposable structure within the models and develop an efficient CP–logic-based Benders decomposition (CP-LBBD) technique that combines the complementary strengths of the MIP and CP models. Using 193 instances from the literature, we demonstrate that MIP, CP, and CP-LBBD achieve average optimality gaps of 25.50%, 13.46%, and 0.37% and find optima in 49, 112, and 156 instances of the problem, respectively. We also compare the performance of CP-LBBD with an efficient Greedy Randomized Adaptive Search Procedure (GRASP) algorithm, which has been appraised for finding 125 optima on 178 instances. CP-LBBD finds 143 optima on the same set of instances. We further examine the performance of the algorithms on 96 newly generated (and much larger) instances and demonstrate that the average optimality gap of CP increases to 47.26%, whereas the average optimality gap of CP-LBBD remains around 1.44%. Finally, we conduct analytics on the performance of our models and algorithms and, counterintuitively, find that as flexibility increases in the data sets, the performance of CP-LBBD improves, whereas that of CP and MIP significantly deteriorates.
Citations: 8