{"title":"Competitive Control via Online Optimization with Memory, Delayed Feedback, and Inexact Predictions","authors":"Guanya Shi","doi":"10.1109/CISS50987.2021.9400281","DOIUrl":null,"url":null,"abstract":"Recently a line of work has shown the applicability of tools from online optimization for control, leading to online control algorithms with learning-theoretic guarantees, such as sublinear regret. However, the predominant benchmark, static regret, only compares to the best static linear controller in hindsight, which could be arbitrarily sub-optimal compared to the true offline optimal policy in non-stationary environments. Moreover, the common robustness considerations in control theory literature, such as feedback delays and inexact predictions, only have little progress in the context of online learning/optimization guarantees. In this talk, based on our three recent papers, I will present key principles and practical algorithms towards online control with competitive ratio guarantees, which directly bound the suboptimality compared to the true offline optimal policy. First, I will show the deep connections between a novel class of online optimization with memory and online control, which directly translates online optimization guarantees to online control guarantees and gives the first constant-competitive policy with adversarial disturbances [1]. Second, I will analyze the performance of the most popular online policy in the control community, Model Predictive Control (MPC), from the online learning's perspective, and show a few important fundamental limits. Our results give the first finite-time performance guarantees for MPC [3]. Finally, I will discuss the influence of delayed feedback and inexact predictions on competitive ratio analysis [2].","PeriodicalId":228112,"journal":{"name":"2021 55th Annual Conference on Information Sciences and Systems (CISS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 55th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS50987.2021.9400281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Recently, a line of work has shown the applicability of tools from online optimization to control, leading to online control algorithms with learning-theoretic guarantees such as sublinear regret. However, the predominant benchmark, static regret, only compares against the best static linear controller in hindsight, which can be arbitrarily sub-optimal relative to the true offline optimal policy in non-stationary environments. Moreover, common robustness considerations in the control theory literature, such as feedback delays and inexact predictions, have seen little progress in the context of online learning/optimization guarantees. In this talk, based on our three recent papers, I will present key principles and practical algorithms for online control with competitive ratio guarantees, which directly bound the suboptimality relative to the true offline optimal policy. First, I will show the deep connections between a novel class of online optimization with memory and online control, which directly translate online optimization guarantees into online control guarantees and yield the first constant-competitive policy under adversarial disturbances [1]. Second, I will analyze the performance of the most popular online policy in the control community, Model Predictive Control (MPC), from an online learning perspective, and show a few important fundamental limits. Our results give the first finite-time performance guarantees for MPC [3]. Finally, I will discuss the influence of delayed feedback and inexact predictions on competitive ratio analysis [2].
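For concreteness, the two benchmarks contrasted above can be written as follows. This is a standard textbook-style formulation for illustration only, not quoted from the papers; the symbols $c_t$ (stage cost), $(x_t, u_t)$ (the online algorithm's trajectory), $K$ (a static linear controller $u_t = -K x_t$), and OPT (the offline optimal policy) are introduced here.

% Static regret of an online policy ALG against the best static linear controller in hindsight:
\[
\mathrm{Regret}_T(\mathrm{ALG}) \;=\; \sum_{t=1}^{T} c_t(x_t, u_t)
\;-\; \min_{K \in \mathcal{K}} \sum_{t=1}^{T} c_t\!\bigl(x_t^{K}, -K x_t^{K}\bigr),
\]
% where x_t^K denotes the trajectory generated by the static policy u_t = -K x_t.

% Competitive ratio: ALG is C-competitive if, for every admissible disturbance sequence,
\[
\sum_{t=1}^{T} c_t(x_t, u_t) \;\le\; C \cdot \sum_{t=1}^{T} c_t\bigl(x_t^{\star}, u_t^{\star}\bigr),
\]
% where (x_t^\star, u_t^\star) is the trajectory of the true offline optimal policy OPT
% (some definitions also allow an additive constant on the right-hand side).

A sublinear regret bound controls only the gap to the best static $K$, so the online cost can still be far from OPT whenever every static controller is poor; bounding the ratio to OPT directly is the stronger guarantee pursued in this talk.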