{"title":"Energy-Aware Caching","authors":"Wei Zhang, Rui Fan, Fang Liu, Pan Lai","doi":"10.1109/ICPADS.2015.66","DOIUrl":null,"url":null,"abstract":"To achieve higher performance, cache sizes have been steadily increasing in computer processors and network systems. But caches are often over-provisioned for peak demand and underutilized in typical non-peak workloads. As caches consume substantial power, this results in significant amounts of wasted energy. To address this, existing works turn off parts of the cache when they do not contribute to higher performance. However, while these methods are effective empirically, they lack provable performance bounds. In addition, existing works focus on processor caches and are not applicable to network caches where data size and cost can vary. In this paper, we study the energy-aware caching (EAC) problem, and seek to minimize the total cost incurred due to cache misses and energy consumption. We propose three algorithms to solve different variants of this problem. The first is an optimal offline algorithm that runs in O(kn log n) time for a size k cache and n cache accesses. Then, we propose a simple online algorithm for uniform data size and cost that is 2 + h/(h-h+1 competitive compared to an optimal algorithm with a size h ≤ k cache. Lastly, we propose a 2 + h-1/(h-h+1) competitive online algorithm that allows arbitrary data sizes and costs. We give an efficient implementation of the algorithm that takes O(log k) amortized time per cache access, and also present an adaptive version that reacts to workload patterns to achieve better real-world performance. Using trace driven simulations, we show our algorithm has substantially lower cost than algorithms focused on maximizing cache hit rates or minimizing energy usage alone.","PeriodicalId":231517,"journal":{"name":"2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPADS.2015.66","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
To achieve higher performance, cache sizes have been steadily increasing in computer processors and network systems. But caches are often over-provisioned for peak demand and underutilized in typical non-peak workloads. As caches consume substantial power, this results in significant amounts of wasted energy. To address this, existing works turn off parts of the cache when those parts do not contribute to higher performance. However, while these methods are effective empirically, they lack provable performance bounds. In addition, existing works focus on processor caches and are not applicable to network caches, where data size and cost can vary. In this paper, we study the energy-aware caching (EAC) problem and seek to minimize the total cost incurred due to cache misses and energy consumption. We propose three algorithms to solve different variants of this problem. The first is an optimal offline algorithm that runs in O(kn log n) time for a size k cache and n cache accesses. Next, we propose a simple online algorithm for uniform data size and cost that is (2 + h/(k-h+1))-competitive against an optimal algorithm using a cache of size h ≤ k. Lastly, we propose a (2 + (h-1)/(k-h+1))-competitive online algorithm that allows arbitrary data sizes and costs. We give an efficient implementation of the algorithm that takes O(log k) amortized time per cache access, and also present an adaptive version that reacts to workload patterns to achieve better real-world performance. Using trace-driven simulations, we show our algorithm has substantially lower cost than algorithms focused on maximizing cache hit rates or minimizing energy usage alone.
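The abstract states the EAC objective (total cost = miss cost + energy cost) without detailing the algorithms themselves. As a rough illustration of that objective only, the sketch below combines LRU eviction with a hypothetical rule for powering cache slots on after a burst of misses; the class name `EnergyAwareCache`, the parameters `miss_cost` and `energy_rate`, and the miss-burst heuristic are all assumptions made here for illustration, not the authors' method.

```python
from collections import OrderedDict

class EnergyAwareCache:
    """Toy energy-aware cache: LRU eviction plus a naive power-up rule.

    Illustrative sketch only: the resize heuristic and parameters are
    assumptions, not the algorithms of Zhang et al. (ICPADS 2015).
    """

    def __init__(self, capacity, miss_cost=1.0, energy_rate=0.01):
        self.capacity = capacity        # physical cache size k
        self.miss_cost = miss_cost      # cost per cache miss (uniform)
        self.energy_rate = energy_rate  # energy cost per powered slot, per access
        self.active = 1                 # slots currently powered on
        self.store = OrderedDict()      # LRU order: least recent first
        self.total_cost = 0.0
        self.recent_misses = 0          # consecutive misses since last hit/resize

    def access(self, key):
        # Every access pays energy proportional to the powered-on slots.
        self.total_cost += self.energy_rate * self.active
        if key in self.store:
            self.store.move_to_end(key)       # hit: refresh recency
            self.recent_misses = 0
            return True
        self.total_cost += self.miss_cost     # miss: pay the miss cost
        self.recent_misses += 1
        if len(self.store) >= self.active:    # evict the LRU item if at the
            self.store.popitem(last=False)    # current powered capacity
        self.store[key] = True
        # Hypothetical adaptation: a burst of misses buys one more powered
        # slot, trading extra energy cost against expected miss-cost savings.
        if self.recent_misses >= 3 and self.active < self.capacity:
            self.active += 1
            self.recent_misses = 0
        return False
```

Replaying a trace is then just `for key in trace: cache.access(key)`, after which `total_cost` reflects the combined miss-plus-energy objective that the paper's algorithms minimize with provable competitive bounds.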