Dynamic Energy Optimization in Chip Multiprocessors Using Deep Neural Networks

Milad Ghorbani Moghaddam, Wenkai Guan, Cristinel Ababei
DOI: 10.1109/TMSCS.2018.2870438
Journal: IEEE Transactions on Multi-Scale Computing Systems, vol. 4, no. 4, pp. 649-661
Published: 2018-09-16 (Journal Article)
Citations: 8

Abstract

We investigate the use of deep neural network (DNN) models for energy optimization under performance constraints in chip multiprocessor systems. We introduce a dynamic energy management algorithm implemented in three phases. In the first phase, training data is collected by running several selected instrumented benchmarks. A training data point pairs the cores' workload characteristics with the corresponding optimal voltage/frequency (V/F) pairs. This phase employs Kalman filtering for workload prediction and an efficient heuristic algorithm based on dynamic voltage and frequency scaling. The second phase is the training process of the DNN model. In the last phase, the DNN model is used to directly identify V/F pairs that achieve lower energy consumption without degrading performance beyond the acceptable threshold set by the user. Simulation results on 16 and 64 core network-on-chip based architectures demonstrate that the proposed approach can achieve up to 55 percent energy reduction under a 10 percent performance degradation constraint. In addition, the proposed DNN approach is compared against existing approaches based on reinforcement learning and Kalman filtering, and is found to provide average improvements in energy-delay product (EDP) of 6.3 and 6 percent for the 16 core architecture, and of 7.4 and 5.5 percent for the 64 core architecture.
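The per-interval control loop described in the abstract (predict the next interval's workload with a Kalman filter, then pick the lowest V/F pair that still meets the performance constraint) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scalar random-walk Kalman model, the `VF_PAIRS` table, the noise parameters, and the `select_vf` policy (a simple threshold stand-in for the trained DNN) are all assumptions made for the example.

```python
# Hypothetical V/F table; real DVFS operating points are platform-specific.
VF_PAIRS = [(0.8, 1.0), (0.9, 1.5), (1.0, 2.0), (1.1, 2.5)]  # (volts, GHz)

def kalman_step(z, x, p, q=1e-3, r=0.1):
    """One step of a scalar Kalman filter under a random-walk workload model.
    z: measured workload this interval; x, p: current estimate and covariance.
    The updated estimate serves as the prediction for the next interval."""
    p = p + q                  # prior covariance (process noise q)
    k = p / (p + r)            # Kalman gain (measurement noise r)
    x = x + k * (z - x)        # blend prior estimate with new measurement
    p = (1.0 - k) * p
    return x, p

def select_vf(predicted_load):
    """Stand-in for the trained DNN: map a predicted per-core load in [0, 1]
    to the lowest V/F pair expected to satisfy the performance constraint."""
    idx = min(int(predicted_load * len(VF_PAIRS)), len(VF_PAIRS) - 1)
    return VF_PAIRS[idx]

# Control loop for one core over four intervals of measured utilization.
x, p = 0.5, 1.0
for load in [0.42, 0.45, 0.50, 0.48]:
    x, p = kalman_step(load, x, p)
    volts, ghz = select_vf(x)   # V/F setting applied for the next interval
```

In the paper's third phase the DNN replaces `select_vf` directly, taking richer workload features than a single load estimate; the filter-then-decide structure of the loop stays the same.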