{"title":"Thread-aware dynamic shared cache compression in multi-core processors","authors":"Yuejian Xie, G. Loh","doi":"10.1109/ICCD.2011.6081388","DOIUrl":null,"url":null,"abstract":"When a program's working set exceeds the size of its last-level cache, performance may suffer due to the resulting off-chip memory accesses. Cache compression can increase the effective cache size and therefore reduce misses, but compression also introduces access latency because cache lines need to be decompressed before using. Cache compression can help some applications but hurt others, depending on the working set of the currently running program and the potential compression ratio. Previous studies proposed techniques to dynamically enable compression to adapt to the program's behavior. In the context of shared caches in multi-cores, the compression decision becomes more interesting because the cache is shared by multiple applications that may benefit differently from a compressed cache. This paper proposes Thread-Aware Dynamic Cache Compression (TADCC) to make better compression decisions on a per-thread basis. Access Time Tracker (ATT) can estimate the access latencies of different compression decisions. The ATT is supported by a Decision Switching Filter (DSF) that provides stability and robustness. 
As a result, TADCC outperforms a previously proposed adaptive cache compression technique by 8% on average and as much as 17%.","PeriodicalId":354015,"journal":{"name":"2011 IEEE 29th International Conference on Computer Design (ICCD)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE 29th International Conference on Computer Design (ICCD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCD.2011.6081388","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16
Abstract
When a program's working set exceeds the size of its last-level cache, performance may suffer due to the resulting off-chip memory accesses. Cache compression can increase the effective cache size and therefore reduce misses, but compression also adds access latency because cache lines must be decompressed before use. Cache compression can help some applications but hurt others, depending on the working set of the currently running program and the achievable compression ratio. Previous studies proposed techniques to dynamically enable compression to adapt to the program's behavior. In the context of shared caches in multi-cores, the compression decision becomes more interesting because the cache is shared by multiple applications that may benefit differently from a compressed cache. This paper proposes Thread-Aware Dynamic Cache Compression (TADCC) to make better compression decisions on a per-thread basis. An Access Time Tracker (ATT) estimates the access latencies of different compression decisions. The ATT is supported by a Decision Switching Filter (DSF) that provides stability and robustness. As a result, TADCC outperforms a previously proposed adaptive cache compression technique by 8% on average and by as much as 17%.
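To illustrate the idea of a per-thread decision mechanism, the following is a minimal sketch, not the paper's actual hardware design: a controller keeps a running estimate of access latency under each policy (standing in for the ATT) and only switches a thread's compression mode when the alternative wins by a clear margin (standing in for the DSF's stabilizing role). All class, method, and parameter names here are hypothetical.

```python
# Hypothetical sketch of per-thread dynamic compression decisions.
# Names and structure are illustrative, not taken from the paper.

class ThreadCompressionController:
    def __init__(self, switch_margin=0.05):
        # Running-average latency estimates per policy; a software
        # stand-in for the paper's Access Time Tracker (ATT).
        self.est_latency = {"compressed": 0.0, "uncompressed": 0.0}
        self.counts = {"compressed": 0, "uncompressed": 0}
        self.mode = "uncompressed"
        # Fraction by which the alternative policy must look better
        # before switching; this hysteresis plays the stabilizing role
        # the paper assigns to the Decision Switching Filter (DSF).
        self.switch_margin = switch_margin

    def record(self, mode, latency_cycles):
        # Update the running-average latency estimate for one policy.
        n = self.counts[mode] + 1
        self.counts[mode] = n
        self.est_latency[mode] += (latency_cycles - self.est_latency[mode]) / n

    def decide(self):
        # Switch only when the other mode's estimated latency is lower
        # by the configured margin, preventing rapid oscillation.
        other = "compressed" if self.mode == "uncompressed" else "uncompressed"
        cur = self.est_latency[self.mode]
        alt = self.est_latency[other]
        if self.counts[other] and alt < cur * (1 - self.switch_margin):
            self.mode = other
        return self.mode
```

A usage example: after observing that compressed accesses for a thread average 30 cycles against 40 uncompressed, `decide()` flips that thread to compressed mode, while a near-tie would leave the current mode unchanged.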