Adaptive Optimization of Time-out Policy for Dynamic Power Management Based on SMCP

Q. Jiang, H. Xi, B. Yin
2007 IEEE International Conference on Control Applications. DOI: 10.1109/CCA.2007.4389250. Published 2007-11-27. Cited by: 1.

Abstract

Based on reinforcement learning, an adaptive online optimization algorithm for the time-out policy in dynamic power management is presented. First, time-out-policy-driven power-managed systems are formulated as semi-Markov control processes (SMCPs). Under this analytic model, the equivalent effect of time-out and stochastic policies on the performance-power trade-off is examined, and the equivalence relation between these two types of policies is derived. Then an adaptive optimization algorithm that combines online gradient estimation with stochastic approximation is proposed. The algorithm does not depend on prior knowledge of the system parameters and can reach a global optimum at low computational cost. Simulation results confirm the analysis and the effectiveness of the proposed algorithm.
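The paper itself gives no code, but the idea of tuning a time-out threshold by online gradient estimation plus stochastic approximation can be illustrated with a toy sketch. Everything below is assumed for illustration: the power and wake-up cost constants, the hyperexponential idle-time model, and the Kiefer-Wolfowitz-style finite-difference update; none of it reproduces the authors' actual SMCP formulation.

```python
import random

# Toy cost model (illustrative, not from the paper): during an idle
# period of length t the device draws P_IDLE until the time-out tau
# expires, then sleeps at P_SLEEP; each wake-up costs E_WAKE.
P_IDLE, P_SLEEP, E_WAKE = 1.0, 0.1, 2.0

def period_cost(t, tau):
    """Energy consumed over one idle period of length t under threshold tau."""
    if t <= tau:
        return P_IDLE * t  # request arrived before the time-out; never slept
    return P_IDLE * tau + P_SLEEP * (t - tau) + E_WAKE

def sample_idle(rng):
    """Hyperexponential idle times: mostly short bursts, some long gaps."""
    return rng.expovariate(1 / 0.2) if rng.random() < 0.7 else rng.expovariate(1 / 5.0)

def grad_estimate(tau, periods, delta=0.2):
    """Two-sided finite-difference gradient using common random numbers."""
    up = sum(period_cost(t, tau + delta) for t in periods)
    dn = sum(period_cost(t, tau - delta) for t in periods)
    return (up - dn) / (2 * delta * len(periods))

def optimize_timeout(tau0=2.0, iters=1500, batch=100, seed=1):
    """Stochastic approximation of the time-out threshold with 1/k step sizes."""
    rng = random.Random(seed)
    tau = tau0
    for k in range(1, iters + 1):
        periods = [sample_idle(rng) for _ in range(batch)]
        tau = max(0.2, tau - (2.0 / k) * grad_estimate(tau, periods))
    return tau
```

Note that a memoryless (purely exponential) idle distribution would make the optimal time-out degenerate (either zero or infinite), which is why the sketch uses a two-phase mixture; the update needs no model parameters, only sampled idle periods, mirroring the paper's claim of not requiring prior knowledge.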