Adaptive Service Performance Control using Cooperative Fuzzy Reinforcement Learning in Virtualized Environments
Olumuyiwa Ibidunmoye, M. H. Moghadam, Ewnetu Bayuh Lakew, E. Elmroth
Proceedings of the 10th International Conference on Utility and Cloud Computing, 2017-12-05. DOI: 10.1145/3147213.3147225
Citations: 8
Abstract
Designing efficient control mechanisms that meet strict performance requirements under changing workload demands without sacrificing resource efficiency remains a challenge in cloud infrastructures. A popular approach is fine-grained resource provisioning via auto-scaling mechanisms that rely on either threshold-based adaptation rules or sophisticated queuing/control-theoretic models. While it is difficult at design time to specify optimal threshold rules, it is even more challenging to infer precise performance models for the multitude of services. Recently, reinforcement learning has been applied to address this challenge. However, such approaches require many learning trials to stabilize, both at the beginning and whenever operational conditions vary, which limits their application under dynamic workloads. To this end, we extend the standard reinforcement learning approach in two ways: a) we formulate the system state as a fuzzy space, and b) we exploit a set of cooperative agents to explore multiple fuzzy states in parallel to speed up learning. Through multiple experiments on a real virtualized testbed, we demonstrate that our approach converges quickly and meets performance targets with high efficiency, without explicit service models.
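The abstract leaves the algorithmic details to the paper itself, but the two extensions it names (a fuzzy state space and cooperating agents that share experience) can be illustrated with a short sketch. The Python below is a minimal, hypothetical rendering: the triangular membership functions over CPU utilization, the three-action scaling set, the reward shape, and the shared Q-table as the cooperation mechanism are all assumptions made for illustration, not the authors' actual design.

```python
# Illustrative sketch of fuzzy Q-learning with a shared Q-table, loosely
# following the ideas described in the abstract. Membership functions,
# actions, and hyperparameters are assumptions, not the paper's design.
import random
from collections import defaultdict

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over normalized CPU utilization (assumed state signal).
MEMBERSHIPS = {
    "low":    lambda u: tri(u, -0.4, 0.0, 0.5),
    "medium": lambda u: tri(u, 0.2, 0.5, 0.8),
    "high":   lambda u: tri(u, 0.5, 1.0, 1.4),
}

ACTIONS = (-1, 0, +1)  # remove a core, do nothing, add a core

class FuzzyQAgent:
    """One learner; several instances can share the same Q-table."""

    def __init__(self, shared_q, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = shared_q  # fuzzy state label -> {action: value}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def fuzzify(self, util):
        """Map a crisp utilization reading to membership degrees."""
        return {label: mu(util) for label, mu in MEMBERSHIPS.items()}

    def value(self, util, action):
        """Fuzzy inference: Q-values weighted by membership degree."""
        degrees = self.fuzzify(util)
        total = sum(degrees.values()) or 1.0
        return sum(d * self.q[label][action]
                   for label, d in degrees.items()) / total

    def act(self, util):
        """Epsilon-greedy action selection over the fuzzy Q-values."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value(util, a))

    def learn(self, util, action, reward, next_util):
        """Spread the TD update across all fuzzy states the observation
        belongs to, in proportion to its degree of membership."""
        best_next = max(self.value(next_util, a) for a in ACTIONS)
        td = reward + self.gamma * best_next - self.value(util, action)
        degrees = self.fuzzify(util)
        total = sum(degrees.values()) or 1.0
        for label, d in degrees.items():
            self.q[label][action] += self.alpha * (d / total) * td

# Cooperation, sketched as a shared Q-table: experience gathered by any
# agent in one fuzzy state is immediately visible to all the others.
shared_q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
agents = [FuzzyQAgent(shared_q) for _ in range(4)]
```

In this sketch, cooperation simply means all agents read from and write to one Q-table, so exploring different fuzzy states in parallel fills the table faster than a single learner could; the paper's actual coordination scheme may differ.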