The Tag Filter Cache: An Energy-Efficient Approach

Joan J. Valls, J. Sahuquillo, Alberto Ros, M. E. Gómez
{"title":"The Tag Filter Cache: An Energy-Efficient Approach","authors":"Joan J. Valls, J. Sahuquillo, Alberto Ros, M. E. Gómez","doi":"10.1109/PDP.2015.58","DOIUrl":null,"url":null,"abstract":"Power consumption in current high-performance chip multiprocessors (CMPs) has become a major design concern. The current trend of increasing the core count aggravates this problem. On-chip caches consume a significant fraction of the total power budget. Most of the proposed techniques to reduce the energy consumption of these memory structures are at the cost of performance, which may become unacceptable for high-performance CMPs. On-chip caches in multi-core systems are usually deployed with a high associativity degree in order to enhance performance. Even first-level caches are currently implemented with eight ways. The concurrent access to all the ways in the cache set is costly in terms of energy. In this paper we propose an energy-efficient cache design, namely the Tag Filter Cache (TF-Cache) architecture, that filters some of the set ways during cache accesses, allowing to access only a subset of them without hurting the performance. Our cache for each way stores the lowest order tag bits in an auxiliary bit array and these bits are used to filter the ways that do not match those bits in the searched block tag. Experimental results show that, on average, the TF-Cache architecture reduces the dynamic power consumption up to 74.9% and 85.9% when applied to the L1 and L2 cache, respectively, for the studied applications.","PeriodicalId":285111,"journal":{"name":"2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PDP.2015.58","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Power consumption in current high-performance chip multiprocessors (CMPs) has become a major design concern. The current trend of increasing the core count aggravates this problem. On-chip caches consume a significant fraction of the total power budget. Most of the techniques proposed to reduce the energy consumption of these memory structures come at the cost of performance, which may be unacceptable for high-performance CMPs. On-chip caches in multi-core systems are usually deployed with a high associativity degree in order to enhance performance; even first-level caches are currently implemented with eight ways. Accessing all the ways of a cache set concurrently is costly in terms of energy. In this paper we propose an energy-efficient cache design, the Tag Filter Cache (TF-Cache) architecture, which filters some of the set's ways during cache accesses, allowing access to only a subset of them without hurting performance. For each way, our cache stores the lowest-order tag bits in an auxiliary bit array, and these bits are used to filter out the ways that do not match the corresponding bits of the searched block's tag. Experimental results show that, on average, the TF-Cache architecture reduces dynamic power consumption by up to 74.9% and 85.9% when applied to the L1 and L2 caches, respectively, for the studied applications.
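The following is a minimal, illustrative C sketch of the way-filtering idea described in the abstract; it is not the authors' implementation. The 8-way associativity matches the first-level caches mentioned above, while the number of filter bits (FILTER_BITS), the structure and function names, and the two-phase lookup split are assumptions made only for illustration.

```c
/* Sketch of tag-bit way filtering: each way keeps the K lowest-order tag
 * bits in a small auxiliary array; a lookup compares only those bits first
 * and then probes just the ways that survive the filter. Names and
 * parameter values are illustrative assumptions, not the paper's design. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS    8    /* associativity, as in the L1 example above */
#define FILTER_BITS 4    /* K low-order tag bits kept per way (assumed) */
#define FILTER_MASK ((1u << FILTER_BITS) - 1u)

typedef struct {
    uint32_t tag[NUM_WAYS];      /* full tags in the conventional tag array */
    uint8_t  low_tag[NUM_WAYS];  /* auxiliary array: lowest-order tag bits  */
    bool     valid[NUM_WAYS];
} cache_set_t;

/* Phase 1: cheap filter - mark only the ways whose low-order tag bits match. */
static unsigned filter_ways(const cache_set_t *set, uint32_t tag,
                            bool candidate[NUM_WAYS])
{
    unsigned count = 0;
    uint8_t key = (uint8_t)(tag & FILTER_MASK);
    for (int w = 0; w < NUM_WAYS; w++) {
        candidate[w] = set->valid[w] && set->low_tag[w] == key;
        count += candidate[w];
    }
    return count;    /* only this subset of ways is accessed in phase 2 */
}

/* Phase 2: full tag comparison, restricted to the filtered candidate ways. */
static int lookup(const cache_set_t *set, uint32_t tag)
{
    bool candidate[NUM_WAYS];
    filter_ways(set, tag, candidate);
    for (int w = 0; w < NUM_WAYS; w++) {
        if (candidate[w] && set->tag[w] == tag)
            return w;    /* hit in way w */
    }
    return -1;           /* miss */
}

int main(void)
{
    cache_set_t set = {0};
    set.tag[3] = 0xABCD;
    set.low_tag[3] = 0xABCD & FILTER_MASK;
    set.valid[3] = true;
    printf("hit in way %d\n", lookup(&set, 0xABCD));  /* prints: hit in way 3 */
    return 0;
}
```

In a hardware realization, only the ways that survive the filter would drive the full tag comparison and data-array access, which is where the dynamic energy savings reported in the abstract come from.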