Mark Semelhago, B. Nelson, A. Wächter, Eunhye Song
Computational methods for optimization via simulation using Gaussian Markov Random Fields
2017 Winter Simulation Conference (WSC), December 3, 2017
DOI: 10.1109/WSC.2017.8247941
Citations: 6
Abstract
There has been recent interest, and significant success, in adapting and extending ideas from statistical learning via Gaussian process (GP) regression to optimization via simulation (OvS) problems. At the heart of all such methods is a GP representing knowledge about the objective function whose conditional distribution is updated as more of the feasible region is explored. Calculating the conditional distribution requires inverting a large, dense covariance matrix, and this is the primary bottleneck for applying GP learning to large-scale OvS problems. If the GP is a Gaussian Markov Random Field (GMRF), then the precision matrix (inverse of the covariance matrix) can be constructed to be sparse. In this paper we show how to exploit this sparse-matrix structure to extend the reach of OvS based on GMRF learning for discrete-decision-variable problems.
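The sparsity exploited in the paper comes from a standard property of GMRFs: for x ~ N(mu, Q^{-1}) with sparse precision matrix Q, the conditional distribution of the unobserved components given the observed ones can be computed from sparse sub-blocks of Q, with no dense covariance inverse. The sketch below illustrates this conditioning step on a small hypothetical 1-D lattice model (the lattice, the tridiagonal precision, and all numerical values are illustrative assumptions, not the paper's model or implementation):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Hypothetical GMRF on a 1-D lattice of n points with a sparse
# tridiagonal precision matrix Q (illustrative, not the paper's model).
n = 200
main = 2.5 * np.ones(n)          # diagonal of Q
off = -1.0 * np.ones(n - 1)      # first off-diagonals of Q
Q = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

mu = np.zeros(n)                      # prior mean
obs_idx = np.array([10, 50, 120])     # feasible points already simulated
x_obs = np.array([1.0, -0.5, 2.0])    # simulation outputs at those points

mask = np.ones(n, dtype=bool)
mask[obs_idx] = False
free_idx = np.nonzero(mask)[0]

# Standard GMRF conditioning: for x ~ N(mu, Q^{-1}),
#   x_free | x_obs ~ N(mu_free - Q_ff^{-1} Q_fo (x_obs - mu_obs), Q_ff^{-1}),
# where Q_ff and Q_fo are sparse sub-blocks of Q. Only one sparse
# linear solve is needed, instead of inverting a dense covariance matrix.
Q_ff = Q[free_idx][:, free_idx].tocsc()
Q_fo = Q[free_idx][:, obs_idx]
cond_mean = mu[free_idx] - spsolve(Q_ff, Q_fo @ (x_obs - mu[obs_idx]))

print(cond_mean.shape)  # one conditional mean per unobserved point
```

Because Q_ff stays sparse (here tridiagonal, minus three rows and columns), the solve scales far better with n than the O(n^3) dense inversion that a generic GP covariance matrix would require; this is the computational lever the paper builds on.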