Energy-Aware Caching

Wei Zhang, Rui Fan, Fang Liu, Pan Lai
{"title":"Energy-Aware Caching","authors":"Wei Zhang, Rui Fan, Fang Liu, Pan Lai","doi":"10.1109/ICPADS.2015.66","DOIUrl":null,"url":null,"abstract":"To achieve higher performance, cache sizes have been steadily increasing in computer processors and network systems. But caches are often over-provisioned for peak demand and underutilized in typical non-peak workloads. As caches consume substantial power, this results in significant amounts of wasted energy. To address this, existing works turn off parts of the cache when they do not contribute to higher performance. However, while these methods are effective empirically, they lack provable performance bounds. In addition, existing works focus on processor caches and are not applicable to network caches where data size and cost can vary. In this paper, we study the energy-aware caching (EAC) problem, and seek to minimize the total cost incurred due to cache misses and energy consumption. We propose three algorithms to solve different variants of this problem. The first is an optimal offline algorithm that runs in O(kn log n) time for a size k cache and n cache accesses. Then, we propose a simple online algorithm for uniform data size and cost that is 2 + h/(h-h+1 competitive compared to an optimal algorithm with a size h ≤ k cache. Lastly, we propose a 2 + h-1/(h-h+1) competitive online algorithm that allows arbitrary data sizes and costs. We give an efficient implementation of the algorithm that takes O(log k) amortized time per cache access, and also present an adaptive version that reacts to workload patterns to achieve better real-world performance. Using trace driven simulations, we show our algorithm has substantially lower cost than algorithms focused on maximizing cache hit rates or minimizing energy usage alone.","PeriodicalId":231517,"journal":{"name":"2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPADS.2015.66","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

To achieve higher performance, cache sizes have been steadily increasing in computer processors and network systems. But caches are often over-provisioned for peak demand and underutilized in typical non-peak workloads. As caches consume substantial power, this results in significant amounts of wasted energy. To address this, existing works turn off parts of the cache when those parts do not contribute to higher performance. However, while these methods are effective empirically, they lack provable performance bounds. In addition, existing works focus on processor caches and are not applicable to network caches, where data sizes and costs can vary. In this paper, we study the energy-aware caching (EAC) problem and seek to minimize the total cost incurred due to cache misses and energy consumption. We propose three algorithms to solve different variants of this problem. The first is an optimal offline algorithm that runs in O(kn log n) time for a size k cache and n cache accesses. Then, we propose a simple online algorithm for uniform data sizes and costs that is 2 + h/(k − h + 1) competitive compared to an optimal algorithm with a cache of size h ≤ k. Lastly, we propose a 2 + (h − 1)/(k − h + 1) competitive online algorithm that allows arbitrary data sizes and costs. We give an efficient implementation of the algorithm that takes O(log k) amortized time per cache access, and we also present an adaptive version that reacts to workload patterns to achieve better real-world performance. Using trace-driven simulations, we show that our algorithm incurs substantially lower cost than algorithms focused on maximizing cache hit rates or minimizing energy usage alone.
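To make the EAC objective concrete, the sketch below tallies the two cost terms the abstract combines: a penalty for each cache miss and an energy charge for every powered cache slot on each access. This is a minimal illustration under assumed parameters (`miss_cost`, `energy_per_slot` are invented values), using a plain LRU eviction policy with a fixed number of powered slots; it is not the paper's online algorithm, which instead adapts the number of active slots over time.

```python
from collections import OrderedDict

def total_cost(accesses, active_slots, miss_cost=1.0, energy_per_slot=0.1):
    """Total cost = miss penalties + energy for keeping slots powered.

    Illustrative EAC cost accounting only: fixed slot count, LRU eviction.
    """
    cache = OrderedDict()  # key -> None, maintained in LRU order
    cost = 0.0
    for item in accesses:
        cost += active_slots * energy_per_slot  # energy for this time step
        if item in cache:
            cache.move_to_end(item)             # hit: refresh LRU position
        else:
            cost += miss_cost                   # miss penalty
            cache[item] = None
            if len(cache) > active_slots:
                cache.popitem(last=False)       # evict least recently used
    return cost

# A larger cache lowers the miss term but raises the energy term;
# EAC asks for the operating point that minimizes their sum.
trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4] * 5
for slots in (1, 2, 4):
    print(slots, total_cost(trace, slots))
```

On this toy trace, too few powered slots inflates the miss cost while too many inflates the energy cost, and the cheapest configuration sits in between; the paper's algorithms navigate this trade-off online with the stated competitive guarantees.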