{"title":"Classification-Based Unified Cache Replacement via Partitioned Victim Address History","authors":"Eishi Arima","doi":"10.1109/DSD51259.2020.00027","DOIUrl":null,"url":null,"abstract":"In modern microprocessors, lower level cache memories are usually implemented as unified caches where different classes of cachelines such as data, instructions, and Page Table Entries (PTEs) coexist. Particularly, frequent PTE accesses following after TLB missies can happen on modern systems, which is driven by the increasing demands of applications for larger working set size, and this trend naturally leads to significant conflicts among these different kinds of cachelines.This paper targets the emerging conflict problem and provides a systematic mechanism using a partitioned victim address history. Prior studies have shown the effectiveness of history-based cache managements to predict the reuseness and thus to improve the hit rate. This work augments the following functionalities: (1) partitioning the history into multiple areas to separately keep track of the reuseness for all the different cacheline categories; and (2) setting different allocation priorities to the different cacheline categories when cache replacement. Furthermore, this paper proposes a control system to dynamically optimize the history partitions and the cache allocation priorities at the same time by using the statistics of the history structure. The experimental result indicates that the proposed technique improves performance considerably compared with the conventional LRU-based approach and others.","PeriodicalId":128527,"journal":{"name":"2020 23rd Euromicro Conference on Digital System Design (DSD)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 23rd Euromicro Conference on Digital System Design (DSD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSD51259.2020.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
In modern microprocessors, lower-level cache memories are usually implemented as unified caches in which different classes of cachelines, such as data, instructions, and Page Table Entries (PTEs), coexist. In particular, frequent PTE accesses following TLB misses can occur on modern systems, driven by applications' growing demands for larger working sets, and this trend naturally leads to significant conflicts among these different kinds of cachelines. This paper targets this emerging conflict problem and provides a systematic mechanism based on a partitioned victim address history. Prior studies have shown the effectiveness of history-based cache management for predicting reuse and thereby improving the hit rate. This work adds the following functionalities: (1) partitioning the history into multiple areas to separately track reuse for each cacheline category; and (2) assigning different allocation priorities to the different cacheline categories during cache replacement. Furthermore, this paper proposes a control system that dynamically optimizes the history partitions and the cache allocation priorities at the same time, using statistics collected from the history structure. The experimental results indicate that the proposed technique improves performance considerably compared with the conventional LRU-based approach and other alternatives.
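The abstract does not give implementation details, so the following C++ sketch only illustrates one plausible structure for a victim address history partitioned by cacheline category: each partition records recently evicted block addresses of one category, a later miss that hits in its partition is counted as a lost reuse, and those counts feed a simple per-category allocation priority. All names (PartitionedVictimHistory, recordVictim, onMiss, allocationPriority), the partition sizes, and the priority rule are hypothetical and are not taken from the paper.

```cpp
// Illustrative sketch (not the paper's exact design): a victim address
// history partitioned by cacheline category. Each partition records the
// block addresses of recently evicted lines of one category; a later miss
// that hits in its partition counts as a "lost reuse" and raises that
// category's allocation priority. Partition sizes and the priority rule
// below are arbitrary placeholders.
#include <cstdint>
#include <deque>
#include <iostream>
#include <unordered_set>
#include <vector>

enum class LineClass { Data = 0, Instruction = 1, PTE = 2 };
constexpr int kNumClasses = 3;

class PartitionedVictimHistory {
public:
    explicit PartitionedVictimHistory(const std::vector<size_t>& capacities)
        : parts_(kNumClasses), caps_(capacities), reuse_hits_(kNumClasses, 0) {}

    // Record the block address of a line evicted from the cache.
    void recordVictim(LineClass cls, uint64_t blockAddr) {
        Partition& p = parts_[static_cast<int>(cls)];
        if (p.fifo.size() >= caps_[static_cast<int>(cls)]) {
            p.set.erase(p.fifo.front());   // evict oldest history entry
            p.fifo.pop_front();
        }
        p.fifo.push_back(blockAddr);
        p.set.insert(blockAddr);
    }

    // On a cache miss: if the address was recently evicted, count a lost reuse.
    bool onMiss(LineClass cls, uint64_t blockAddr) {
        Partition& p = parts_[static_cast<int>(cls)];
        if (p.set.count(blockAddr)) {
            ++reuse_hits_[static_cast<int>(cls)];
            return true;   // evicting this line was, in hindsight, a mistake
        }
        return false;
    }

    // Placeholder policy: categories with more observed lost reuses get a
    // higher allocation priority (e.g. inserted closer to MRU on a fill).
    int allocationPriority(LineClass cls) const {
        uint64_t mine = reuse_hits_[static_cast<int>(cls)];
        int prio = 0;
        for (int c = 0; c < kNumClasses; ++c)
            if (reuse_hits_[c] < mine) ++prio;
        return prio;       // 0 = lowest, kNumClasses-1 = highest
    }

private:
    struct Partition {
        std::deque<uint64_t> fifo;          // eviction order (FIFO)
        std::unordered_set<uint64_t> set;   // fast membership test
    };
    std::vector<Partition> parts_;
    std::vector<size_t> caps_;
    std::vector<uint64_t> reuse_hits_;
};

int main() {
    // Hypothetical partition sizes for data, instruction, and PTE victims.
    PartitionedVictimHistory hist({256, 64, 64});
    hist.recordVictim(LineClass::PTE, 0x1234);
    hist.onMiss(LineClass::PTE, 0x1234);    // lost reuse observed for PTE lines
    std::cout << "PTE priority: "
              << hist.allocationPriority(LineClass::PTE) << "\n";
}
```

In this sketch the per-category lost-reuse counters stand in for the "statistics of the history structure" the abstract mentions; the paper's actual controller additionally resizes the history partitions dynamically, which is omitted here.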