Thread-aware dynamic shared cache compression in multi-core processors

Yuejian Xie, G. Loh
{"title":"多核处理器中线程感知的动态共享缓存压缩","authors":"Yuejian Xie, G. Loh","doi":"10.1109/ICCD.2011.6081388","DOIUrl":null,"url":null,"abstract":"When a program's working set exceeds the size of its last-level cache, performance may suffer due to the resulting off-chip memory accesses. Cache compression can increase the effective cache size and therefore reduce misses, but compression also introduces access latency because cache lines need to be decompressed before using. Cache compression can help some applications but hurt others, depending on the working set of the currently running program and the potential compression ratio. Previous studies proposed techniques to dynamically enable compression to adapt to the program's behavior. In the context of shared caches in multi-cores, the compression decision becomes more interesting because the cache is shared by multiple applications that may benefit differently from a compressed cache. This paper proposes Thread-Aware Dynamic Cache Compression (TADCC) to make better compression decisions on a per-thread basis. Access Time Tracker (ATT) can estimate the access latencies of different compression decisions. The ATT is supported by a Decision Switching Filter (DSF) that provides stability and robustness. As a result, TADCC outperforms a previously proposed adaptive cache compression technique by 8% on average and as much as 17%.","PeriodicalId":354015,"journal":{"name":"2011 IEEE 29th International Conference on Computer Design (ICCD)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Thread-aware dynamic shared cache compression in multi-core processors\",\"authors\":\"Yuejian Xie, G. Loh\",\"doi\":\"10.1109/ICCD.2011.6081388\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"When a program's working set exceeds the size of its last-level cache, performance may suffer due to the resulting off-chip memory accesses. Cache compression can increase the effective cache size and therefore reduce misses, but compression also introduces access latency because cache lines need to be decompressed before using. Cache compression can help some applications but hurt others, depending on the working set of the currently running program and the potential compression ratio. Previous studies proposed techniques to dynamically enable compression to adapt to the program's behavior. In the context of shared caches in multi-cores, the compression decision becomes more interesting because the cache is shared by multiple applications that may benefit differently from a compressed cache. This paper proposes Thread-Aware Dynamic Cache Compression (TADCC) to make better compression decisions on a per-thread basis. Access Time Tracker (ATT) can estimate the access latencies of different compression decisions. The ATT is supported by a Decision Switching Filter (DSF) that provides stability and robustness. 
As a result, TADCC outperforms a previously proposed adaptive cache compression technique by 8% on average and as much as 17%.\",\"PeriodicalId\":354015,\"journal\":{\"name\":\"2011 IEEE 29th International Conference on Computer Design (ICCD)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 IEEE 29th International Conference on Computer Design (ICCD)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCD.2011.6081388\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE 29th International Conference on Computer Design (ICCD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCD.2011.6081388","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16

Abstract

When a program's working set exceeds the size of its last-level cache, performance may suffer due to the resulting off-chip memory accesses. Cache compression can increase the effective cache size and therefore reduce misses, but compression also introduces access latency because cache lines need to be decompressed before use. Cache compression can help some applications but hurt others, depending on the working set of the currently running program and the potential compression ratio. Previous studies proposed techniques to dynamically enable compression to adapt to the program's behavior. In the context of shared caches in multi-cores, the compression decision becomes more interesting because the cache is shared by multiple applications that may benefit differently from a compressed cache. This paper proposes Thread-Aware Dynamic Cache Compression (TADCC) to make better compression decisions on a per-thread basis. An Access Time Tracker (ATT) estimates the access latencies of different compression decisions. The ATT is supported by a Decision Switching Filter (DSF) that provides stability and robustness. As a result, TADCC outperforms a previously proposed adaptive cache compression technique by 8% on average and by as much as 17%.
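The abstract only sketches the mechanism, so the following is a minimal, hypothetical C++ sketch of how a per-thread controller in the spirit of TADCC might combine an Access Time Tracker (ATT) with a Decision Switching Filter (DSF): the ATT accumulates the estimated latency each thread would see under "compress" versus "don't compress", and the DSF flips the per-thread decision only once the estimated advantage exceeds a hysteresis threshold. All names, constants (hysteresis threshold, decompression penalty), and the latency-accounting scheme are assumptions for illustration, not the authors' implementation.

```cpp
#include <cstdint>
#include <vector>

// Per-thread state: two latency accumulators (the ATT) and the current decision.
struct ThreadCompressionState {
    int64_t latencyIfCompressed   = 0;   // estimated cycles if this thread's lines were compressed
    int64_t latencyIfUncompressed = 0;   // estimated cycles if they were left uncompressed
    bool    compressEnabled       = false;
};

class TadccController {
public:
    explicit TadccController(unsigned numThreads,
                             int64_t hysteresis = 4096,     // assumed DSF switching threshold (cycles)
                             int64_t decompressPenalty = 5) // assumed per-hit decompression latency (cycles)
        : state_(numThreads), hysteresis_(hysteresis),
          decompressPenalty_(decompressPenalty) {}

    // Called on each shared-cache access by thread `tid`.
    // hitCompressed / hitUncompressed: would the access hit under each hypothetical policy?
    // missPenalty: estimated off-chip access latency in cycles.
    void recordAccess(unsigned tid, bool hitCompressed, bool hitUncompressed,
                      int64_t missPenalty) {
        ThreadCompressionState& s = state_[tid];

        // ATT: charge each hypothetical policy the latency it would have incurred.
        s.latencyIfCompressed   += hitCompressed   ? decompressPenalty_ : missPenalty;
        s.latencyIfUncompressed += hitUncompressed ? 0                  : missPenalty;

        // DSF: switch only when the other policy's estimated advantage is large,
        // then reset the accumulators so a new decision epoch begins.
        int64_t gap = s.latencyIfUncompressed - s.latencyIfCompressed;
        if (!s.compressEnabled && gap > hysteresis_) {
            s.compressEnabled = true;
            reset(s);
        } else if (s.compressEnabled && -gap > hysteresis_) {
            s.compressEnabled = false;
            reset(s);
        }
    }

    // Queried on allocation: should lines inserted by this thread be stored compressed?
    bool shouldCompress(unsigned tid) const { return state_[tid].compressEnabled; }

private:
    static void reset(ThreadCompressionState& s) {
        s.latencyIfCompressed   = 0;
        s.latencyIfUncompressed = 0;
    }

    std::vector<ThreadCompressionState> state_;
    int64_t hysteresis_;
    int64_t decompressPenalty_;
};
```

The hysteresis in the DSF is what keeps the decision from oscillating when the two estimated latencies are close; the paper's evaluation attributes the stability and robustness of TADCC to this filtering of the raw ATT estimates.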