{"title":"Sample and Computationally Efficient Stochastic Kriging in High Dimensions","authors":"Liang Ding, Xiaowei Zhang","doi":"10.1287/opre.2022.2367","DOIUrl":"https://doi.org/10.1287/opre.2022.2367","url":null,"abstract":"High-dimensional Simulation Metamodeling Stochastic kriging has been widely employed for simulation metamodeling to predict the response surface of complex simulation models. However, its use is limited to cases where the design space is low-dimensional because the sample complexity (i.e., the number of design points required to produce an accurate prediction) grows exponentially in the dimensionality of the design space. The large sample size results in both a prohibitive sample cost for running the simulation model and a severe computational challenge due to the need to invert large covariance matrices. To address this long-standing challenge, Liang Ding and Xiaowei Zhang, in their recent paper “Sample and Computationally Efficient Stochastic Kriging in High Dimensions”, develop a novel methodology — based on tensor Markov kernels and sparse grid experimental designs — that dramatically alleviates the curse of dimensionality. The proposed methodology has theoretical guarantees on both sample complexity and computational complexity and shows outstanding performance in numerical problems of as high as 16,675 dimensions.","PeriodicalId":49809,"journal":{"name":"Military Operations Research","volume":"85 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2020-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73445628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Online Learning Approach to Dynamic Pricing and Capacity Sizing in Service Systems","authors":"Xinyun Chen, Yunan Liu, Guiyu Hong","doi":"10.1287/opre.2020.0612","DOIUrl":"https://doi.org/10.1287/opre.2020.0612","url":null,"abstract":"Online Learning in Queueing Systems Most queueing models have no analytic solutions, so previous research often resorts to heavy-traffic analysis for performance analysis and optimization, which requires the system scale (e.g., arrival and service rate) to grow to infinity. In “An Online Learning Approach to Dynamic Pricing and Capacity Sizing in Service Systems,” X. Chen, Y. Liu, and G. Hong develop a new “scale-free” online learning framework designed for optimizing a queueing system, called gradient-based online learning in queue (GOLiQ). GOLiQ prescribes an efficient procedure to obtain improved decisions in successive cycles using newly collected queueing data (e.g., arrival counts, waiting times, and busy times). Besides its robustness in the system scale, GOLiQ is advantageous when focusing on performance optimization in the long run because its data-driven nature enables it to constantly produce improved solutions which will eventually reach optimality. Effectiveness of GOLiQ is substantiated by theoretical regret analysis (with a logarithmic regret bound) and simulation experiments.","PeriodicalId":49809,"journal":{"name":"Military Operations Research","volume":"130 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2020-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77362764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heavy-Traffic Universality of Redundancy Systems with Assignment Constraints","authors":"Ellen Cardinaels, S. Borst, J. V. van Leeuwaarden","doi":"10.1287/opre.2022.2385","DOIUrl":"https://doi.org/10.1287/opre.2022.2385","url":null,"abstract":"Modern service systems, like cloud computing platforms or data center environments, commonly face a high degree of heterogeneity. This heterogeneity is not only caused by different server speeds but also, by binding task-server relations that must be taken into account when assigning incoming tasks. Unfortunately, there are hardly any theoretical performance guarantees as these systems do not fall within the typical supermarket modeling framework which heavily relies on strong symmetry and homogeneity assumptions. In “Heavy-traffic universality of redundancy systems with assignment constraints,” Cardinaels, Borst, and van Leeuwaarden provide insight in the performance of these systems operating under redundancy scheduling policies. Surprisingly, when experiencing high demand, these systems exhibit state space collapse and can achieve a similar level of resource pooling and performance as a fully flexible system, even subject to quite strict task-server constraints.","PeriodicalId":49809,"journal":{"name":"Military Operations Research","volume":"94 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2020-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83573011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scores for Multivariate Distributions and Level Sets","authors":"Xiaochun Meng, James W. Taylor, Souhaib Ben Taieb, Siran Li","doi":"10.1287/opre.2020.0365","DOIUrl":"https://doi.org/10.1287/opre.2020.0365","url":null,"abstract":"Evaluating Forecasts of Multivariate Probability Distributions Forecasts of multivariate probability distributions are required for a variety of applications. The availability of a score for a forecast is important for evaluating prediction accuracy, as well as estimating model parameters. In “Scores for Multivariate Distributions and Level Sets,” X. Meng, J. W. Taylor, S. Ben Taieb, and S. Li propose a theoretical framework that encompasses several existing scores for multivariate distributions and can be used to generate new scores. In some multivariate contexts, a forecast of a level set is needed, such as a density level set for anomaly detection or the level set of the cumulative distribution, which can be used as a measure of risk. This motivates consideration of scores for level sets. The authors show that such scores can be obtained by decomposing the scores developed for multivariate distributions. A simple numerical algorithm is presented to compute the scores, and practical applications are provided in the contexts of conditional value-at-risk for financial data and the combination of expert macroeconomic forecasts.","PeriodicalId":49809,"journal":{"name":"Military Operations Research","volume":"351 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2020-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76583630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lagrangian Dual Decision Rules for Multistage Stochastic Mixed-Integer Programming","authors":"Maryam Daryalal, Merve Bodur, James R. Luedtke","doi":"10.1287/opre.2022.2366","DOIUrl":"https://doi.org/10.1287/opre.2022.2366","url":null,"abstract":"On Decision Rules for Multistage Stochastic Programs with Mixed-Integer Decisions Multistage stochastic programming is a field of stochastic optimization for addressing sequential decision-making problems defined over a stochastic process with a given probability distribution. The solution to such a problem is a decision rule (policy) that maps the history of observations to the decisions. Design of the decision rules in the presence of mixed-integer decisions is quite challenging. In “Lagrangian Dual Decision Rules for Multistage Stochastic Mixed-Integer Programming,” Daryalal, Bodur, and Luedtke introduce Lagrangian dual decision rules, where linear decision rules are applied to dual multipliers associated with Lagrangian duals of a multistage stochastic mixed-integer programming (MSMIP) model. The restricted decisions are then used in the development of new primal- and dual-bounding methods. This yields a new general-purpose approximation approach for MSMIP, free of strong assumptions made in the literature, such as stagewise independence or existence of a tractable-sized scenario-tree representation.","PeriodicalId":49809,"journal":{"name":"Military Operations Research","volume":"48 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2020-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79364061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty Quantification and Exploration for Reinforcement Learning","authors":"Yi Zhu, Jing Dong, H. Lam","doi":"10.1287/opre.2023.2436","DOIUrl":"https://doi.org/10.1287/opre.2023.2436","url":null,"abstract":"Quantify the uncertainty to decide and explore better In statistical inference, large-sample behavior and confidence interval construction are fundamental in assessing the error and reliability of estimated quantities with respect to the data noises. In the paper “Uncertainty Quantification and Exploration for Reinforcement Learning”, Dong, Lam, and Zhu study the large sample behavior in the classic setting of reinforcement learning. They derive appropriate large-sample asymptotic distributions for the state-action value function (Q-value) and optimal value function estimations when data are collected from the underlying Markov chain. This allows one to evaluate the assertiveness of performances among different decisions. The tight uncertainty quantification also facilitates the development of a pure exploration policy by maximizing the worst-case relative discrepancy among the estimated Q-values (ratio of the mean squared difference to the variance). This exploration policy aims to collect informative training data to maximize the probability of learning the optimal reward collecting policy, and it achieves good empirical performance.","PeriodicalId":49809,"journal":{"name":"Military Operations Research","volume":"12 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2019-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79259161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}