{"title":"Dynamic Energy Optimization in Chip Multiprocessors Using Deep Neural Networks","authors":"Milad Ghorbani Moghaddam;Wenkai Guan;Cristinel Ababei","doi":"10.1109/TMSCS.2018.2870438","DOIUrl":null,"url":null,"abstract":"We investigate the use of deep neural network (DNN) models for energy optimization under performance constraints in chip multiprocessor systems. We introduce a dynamic energy management algorithm implemented in three phases. In the first phase, training data is collected by running several selected instrumented benchmarks. A training data point represents a pair of values of cores’ workload characteristics and of optimal voltage/frequency (V/F) pairs. This phase employs Kalman filtering for workload prediction and an efficient heuristic algorithm based on dynamic voltage and frequency scaling. The second phase represents the training process of the DNN model. In the last phase, the DNN model is used to directly identify V/F pairs that can achieve lower energy consumption without performance degradation beyond the acceptable threshold set by the user. Simulation results on 16 and 64 core network-on-chip based architectures demonstrate that the proposed approach can achieve up to 55 percent energy reduction for 10 percent performance degradation constraints. In addition, the proposed DNN approach is compared against existing approaches based on reinforcement learning and Kalman filtering and found that it provides average improvements in energy-delay-product (EDP) of 6.3 and 6 percent for the 16 core architecture and of 7.4 and 5.5 percent for the 64 core architecture.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"649-661"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2870438","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multi-Scale Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/8466912/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
We investigate the use of deep neural network (DNN) models for energy optimization under performance constraints in chip multiprocessor systems. We introduce a dynamic energy management algorithm implemented in three phases. In the first phase, training data is collected by running several selected instrumented benchmarks. Each training data point pairs the cores’ workload characteristics with the corresponding optimal voltage/frequency (V/F) settings. This phase employs Kalman filtering for workload prediction and an efficient heuristic algorithm based on dynamic voltage and frequency scaling (DVFS). The second phase trains the DNN model on the collected data. In the last phase, the trained DNN model is used at runtime to directly identify V/F pairs that lower energy consumption without degrading performance beyond the acceptable threshold set by the user. Simulation results on 16 and 64 core network-on-chip based architectures demonstrate that the proposed approach can achieve up to 55 percent energy reduction under a 10 percent performance degradation constraint. In addition, when compared against existing approaches based on reinforcement learning and Kalman filtering, the proposed DNN approach provides average improvements in energy-delay product (EDP) of 6.3 and 6 percent, respectively, for the 16 core architecture, and of 7.4 and 5.5 percent for the 64 core architecture.
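The abstract outlines a three-phase flow: label training data offline via Kalman-filter workload prediction plus a DVFS heuristic, train a DNN on those pairs, then let the trained model pick V/F settings at runtime. The sketch below illustrates that flow in miniature; the synthetic workload trace, the four-entry V/F table, the `placeholder_optimal_level` labeling rule, and the tiny numpy MLP are illustrative assumptions, not the paper's actual benchmarks, heuristic, features, or network.

```python
# Illustrative sketch only: a scalar Kalman-filter workload predictor and a tiny
# numpy MLP mapping a per-core workload feature to a discrete V/F level.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete V/F levels (GHz, Volts) a core may be set to.
VF_LEVELS = [(1.0, 0.8), (1.5, 0.9), (2.0, 1.0), (2.5, 1.1)]


class ScalarKalman:
    """Minimal 1-D Kalman filter predicting next-interval workload
    (e.g., an IPC-like value) from noisy per-interval measurements."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process and measurement noise

    def step(self, z):
        # Predict (random-walk workload model), then correct with measurement z.
        self.p += self.q
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x               # predicted workload for the next interval


def placeholder_optimal_level(workload):
    """Stand-in for the paper's DVFS heuristic: heavier workload -> higher V/F."""
    return min(int(workload * len(VF_LEVELS)), len(VF_LEVELS) - 1)


# ---- Phase 1: collect (workload feature, optimal V/F level) training pairs ----
kf = ScalarKalman()
X, y = [], []
for _ in range(2000):
    measured = np.clip(rng.normal(0.5, 0.2), 0.0, 1.0)   # synthetic workload sample
    predicted = kf.step(measured)
    X.append([predicted])
    y.append(placeholder_optimal_level(predicted))
X, y = np.array(X), np.array(y)

# ---- Phase 2: train a tiny one-hidden-layer MLP (softmax over V/F levels) ----
n_in, n_hid, n_out = 1, 16, len(VF_LEVELS)
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)
onehot = np.eye(n_out)[y]
for _ in range(500):
    h = np.maximum(X @ W1 + b1, 0.0)                     # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    grad = (p - onehot) / len(X)                         # softmax cross-entropy grad
    dh = (grad @ W2.T) * (h > 0)
    W2 -= 0.5 * h.T @ grad; b2 -= 0.5 * grad.sum(0)
    W1 -= 0.5 * X.T @ dh;   b1 -= 0.5 * dh.sum(0)

# ---- Phase 3: at runtime, map predicted workload directly to a V/F pair ----
def select_vf(predicted_workload):
    h = np.maximum(np.array([[predicted_workload]]) @ W1 + b1, 0.0)
    return VF_LEVELS[int(np.argmax(h @ W2 + b2))]

print(select_vf(0.2), select_vf(0.9))   # light vs. heavy predicted workload
```

In the paper's setting the model would be trained on features collected from instrumented benchmarks and queried once per control interval per core; the single scalar feature and the hand-made labeling rule above merely keep the sketch self-contained.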