Classification-Based Unified Cache Replacement via Partitioned Victim Address History

Eishi Arima
{"title":"Classification-Based Unified Cache Replacement via Partitioned Victim Address History","authors":"Eishi Arima","doi":"10.1109/DSD51259.2020.00027","DOIUrl":null,"url":null,"abstract":"In modern microprocessors, lower level cache memories are usually implemented as unified caches where different classes of cachelines such as data, instructions, and Page Table Entries (PTEs) coexist. Particularly, frequent PTE accesses following after TLB missies can happen on modern systems, which is driven by the increasing demands of applications for larger working set size, and this trend naturally leads to significant conflicts among these different kinds of cachelines.This paper targets the emerging conflict problem and provides a systematic mechanism using a partitioned victim address history. Prior studies have shown the effectiveness of history-based cache managements to predict the reuseness and thus to improve the hit rate. This work augments the following functionalities: (1) partitioning the history into multiple areas to separately keep track of the reuseness for all the different cacheline categories; and (2) setting different allocation priorities to the different cacheline categories when cache replacement. Furthermore, this paper proposes a control system to dynamically optimize the history partitions and the cache allocation priorities at the same time by using the statistics of the history structure. The experimental result indicates that the proposed technique improves performance considerably compared with the conventional LRU-based approach and others.","PeriodicalId":128527,"journal":{"name":"2020 23rd Euromicro Conference on Digital System Design (DSD)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 23rd Euromicro Conference on Digital System Design (DSD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSD51259.2020.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

In modern microprocessors, lower-level cache memories are usually implemented as unified caches in which different classes of cachelines, such as data, instructions, and Page Table Entries (PTEs), coexist. In particular, frequent PTE accesses following TLB misses can occur on modern systems, driven by applications' growing demand for larger working sets, and this trend naturally leads to significant conflicts among these different kinds of cachelines. This paper targets this emerging conflict problem and provides a systematic mechanism based on a partitioned victim address history. Prior studies have shown the effectiveness of history-based cache management in predicting reuse and thereby improving the hit rate. This work adds the following functionalities: (1) partitioning the history into multiple areas to separately track reuse for each cacheline category; and (2) assigning different allocation priorities to the different cacheline categories at cache replacement time. Furthermore, this paper proposes a control system that dynamically optimizes the history partitions and the cache allocation priorities at the same time by using statistics gathered from the history structure. The experimental results indicate that the proposed technique improves performance considerably compared with the conventional LRU-based approach and other schemes.
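To make the idea concrete, below is a minimal Python sketch (not the paper's implementation) of a set-associative cache model that keeps a victim address history partitioned per cacheline class. A miss whose address is found in that class's history partition is treated as a sign of lost reuse, so the class's allocation priority is raised. The class names in CLASSES, the structures PartitionedVictimHistory and UnifiedCacheSet, the per-class partition size entries_per_class, and the simple priority-increment rule are all illustrative assumptions; the paper's actual control system tunes partition sizes and priorities from history statistics.

```python
# Sketch of classification-aware replacement driven by a partitioned victim
# address history. All names and sizing here are assumptions for illustration.
from collections import OrderedDict

CLASSES = ("data", "instruction", "pte")   # assumed cacheline categories


class PartitionedVictimHistory:
    """One bounded FIFO of recently evicted addresses per cacheline class."""
    def __init__(self, entries_per_class=64):
        self.parts = {c: OrderedDict() for c in CLASSES}
        self.capacity = entries_per_class   # per-class partition size (assumption)

    def record_victim(self, cls, addr):
        part = self.parts[cls]
        part[addr] = True
        if len(part) > self.capacity:
            part.popitem(last=False)        # drop the oldest victim address

    def was_recently_evicted(self, cls, addr):
        return addr in self.parts[cls]


class UnifiedCacheSet:
    """One cache set holding (addr -> class) lines, evicted by priority then LRU."""
    def __init__(self, ways, history, priority):
        self.ways = ways
        self.lines = OrderedDict()          # addr -> cls, ordered oldest -> most recent
        self.history = history
        self.priority = priority            # cls -> int, higher means keep longer

    def access(self, addr, cls):
        if addr in self.lines:              # hit: refresh recency
            self.lines.move_to_end(addr)
            return True
        if self.history.was_recently_evicted(cls, addr):
            self.priority[cls] += 1         # missed reuse: protect this class more
        if len(self.lines) >= self.ways:
            order = list(self.lines)        # evict lowest-priority class, LRU tiebreak
            victim = min(order, key=lambda a: (self.priority[self.lines[a]],
                                               order.index(a)))
            self.history.record_victim(self.lines[victim], victim)
            del self.lines[victim]
        self.lines[addr] = cls
        return False


if __name__ == "__main__":
    history = PartitionedVictimHistory(entries_per_class=16)
    priority = {c: 0 for c in CLASSES}
    cache_set = UnifiedCacheSet(ways=4, history=history, priority=priority)
    cache_set.access(0x1000, "data")
    cache_set.access(0x2000, "pte")
```

The key design point the sketch tries to capture is that the history is partitioned by class, so reuse feedback for PTE lines cannot be drowned out by data or instruction traffic; how the partitions and priorities are actually balanced is the job of the paper's dynamic control system.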