ML-based Reinforcement Learning Approach for Power Management in SoCs

D. Akselrod
{"title":"ML-based Reinforcement Learning Approach for Power Management in SoCs","authors":"D. Akselrod","doi":"10.1109/SOCC46988.2019.1570548498","DOIUrl":null,"url":null,"abstract":"This paper presents a machine learning-based reinforcement learning approach, mapping Finite State Machines, traditionally used for power management control in SoCs, to Markov Decision Process (MDP)-based agents for controlling power management features of Integrated Circuits with application to complex multiprocessor-based SoCs such as CPUs, APUs and GPUs. We present the problem of decision-based control of a number of power management features in ICs consisting of numerous heterogeneous IPs. An infinite-horizon fully observable MDPs are utilized to obtain a policy of actions maximizing the expectation of the formulated Power Management utility function. The approach balances the demand for desired performance while providing an optimal power saving as opposed to commonly used FSM-based power management techniques. MDP framework was employed for power management decision-making under conditions of uncertainly for reinforcement learning. We describe in detail converting power management FSMs into infinite-horizon fully observable MDPs. 
The approach optimizes itself using reinforcement learning based on specified reward structure and previous performance, yielding an optimal and dynamically adjusted power management mechanism in respect to the formulated model.","PeriodicalId":253998,"journal":{"name":"2019 32nd IEEE International System-on-Chip Conference (SOCC)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 32nd IEEE International System-on-Chip Conference (SOCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SOCC46988.2019.1570548498","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper presents a machine learning-based reinforcement learning approach that maps Finite State Machines (FSMs), traditionally used for power management control in SoCs, to Markov Decision Process (MDP)-based agents for controlling the power management features of integrated circuits, with application to complex multiprocessor-based SoCs such as CPUs, APUs, and GPUs. We present the problem of decision-based control of a number of power management features in ICs consisting of numerous heterogeneous IPs. Infinite-horizon, fully observable MDPs are used to obtain a policy of actions that maximizes the expectation of the formulated power management utility function. In contrast to commonly used FSM-based power management techniques, the approach balances the demand for desired performance against optimal power saving. The MDP framework was employed for power management decision-making under conditions of uncertainty using reinforcement learning. We describe in detail the conversion of power management FSMs into infinite-horizon, fully observable MDPs. The approach optimizes itself through reinforcement learning based on a specified reward structure and previous performance, yielding an optimal, dynamically adjusted power management mechanism with respect to the formulated model.
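To make the FSM-to-MDP mapping concrete, the sketch below models a toy power-management FSM (states ACTIVE, IDLE, SLEEP) as an infinite-horizon, fully observable MDP and extracts an optimal policy. All states, actions, transition probabilities, rewards, and the discount factor are invented for illustration; the paper's actual utility function and per-IP features are not described in the abstract. The sketch also solves the MDP with value iteration (a planning method) rather than the paper's reinforcement-learning procedure, simply because it is the shortest way to show what "a policy maximizing expected utility" means for such a model.

```python
# Toy power-management FSM recast as an infinite-horizon MDP.
# All numbers are illustrative assumptions, not from the paper.

STATES = ["ACTIVE", "IDLE", "SLEEP"]
ACTIONS = ["stay", "power_down", "wake"]

# Transition model: P[(state, action)] -> list of (next_state, probability).
# Only the FSM's legal transitions are present, mirroring how an FSM edge
# becomes an MDP action with (possibly stochastic) outcomes.
P = {
    ("ACTIVE", "stay"):       [("ACTIVE", 0.9), ("IDLE", 0.1)],
    ("ACTIVE", "power_down"): [("IDLE", 1.0)],
    ("IDLE",   "stay"):       [("IDLE", 0.8), ("ACTIVE", 0.2)],
    ("IDLE",   "power_down"): [("SLEEP", 1.0)],
    ("IDLE",   "wake"):       [("ACTIVE", 1.0)],
    ("SLEEP",  "stay"):       [("SLEEP", 1.0)],
    ("SLEEP",  "wake"):       [("ACTIVE", 1.0)],
}

# Reward structure: power saving is rewarded, wake-up latency is penalized,
# standing in for the paper's power-management utility function.
R = {
    ("ACTIVE", "stay"): 0.0,  ("ACTIVE", "power_down"): 1.0,
    ("IDLE",   "stay"): 1.0,  ("IDLE",   "power_down"): 2.0,
    ("IDLE",   "wake"): -1.0,
    ("SLEEP",  "stay"): 3.0,  ("SLEEP",  "wake"): -2.0,
}

GAMMA = 0.9  # discount factor; < 1 keeps infinite-horizon values finite


def q_value(V, s, a):
    """Expected discounted return of taking action a in state s."""
    return R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])


def value_iteration(tol=1e-6):
    """Return converged state values and the greedy policy."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            legal = [a for a in ACTIONS if (s, a) in P]
            v_new = max(q_value(V, s, a) for a in legal)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            break
    policy = {
        s: max((a for a in ACTIONS if (s, a) in P),
               key=lambda a: q_value(V, s, a))
        for s in STATES
    }
    return V, policy


V, policy = value_iteration()
print(policy)  # with these rewards, powering down dominates staying active
```

Under this reward structure the greedy policy powers the device down from ACTIVE and IDLE and stays in SLEEP; shifting the latency penalties (e.g., making `("SLEEP", "wake")` costlier relative to the idle-power reward) shifts the policy toward staying responsive, which is the performance-versus-power trade-off the abstract describes.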