Cache design for eliminating the address translation bottleneck and reducing the tag area cost
Yen-Jen Chang, F. Lai, S. Ruan
Proceedings. IEEE International Conference on Computer Design: VLSI in Computers and Processors
DOI: 10.1109/ICCD.2002.1106791
Published: 2002-09-16
Citations: 3
Abstract
In a physically addressed cache, the address translation delay can be partially masked, but it is hard to avoid completely. In this paper, we propose a cache partition architecture, called the paged cache, which not only masks the address translation delay completely but also reduces the tag area dramatically. In the paged cache, we divide the entire cache into a set of partitions, each dedicated to exactly one page cached in the TLB. By restricting the range in which a cached block can be placed, we can eliminate all or part of the tag, depending on the partition size. In addition, because the paged cache can be accessed without waiting for the physical address to be generated, i.e., the paged cache and the TLB are accessed in parallel, the extended cache access time can be reduced significantly. We use SimpleScalar to simulate the SPEC2000 benchmarks and perform HSPICE simulations (with a 0.18 µm technology and a 1.8 V supply voltage) to evaluate the proposed architecture. Experimental results show that the paged cache is very effective in reducing the tag area of on-chip L1 caches, while the average extended cache access time is improved dramatically.
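The tag-area saving follows from simple bit accounting: once a partition is dedicated to a single TLB-resident page, the page number no longer needs to be stored in the tag, and only the page-offset bits not already implied by the partition index remain. The sketch below illustrates this with hypothetical parameters (32-bit addresses, 32 B blocks, 4 KB pages, a direct-mapped conventional cache as the baseline); these numbers are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: per-block tag-bit accounting for a "paged cache"
# partition versus a conventional direct-mapped cache.
# All parameters below are illustrative assumptions.

BLOCK = 32        # bytes per cache block (assumed)
PAGE = 4096       # bytes per page (assumed)
ADDR_BITS = 32    # physical address width (assumed)

def tag_bits(partition_bytes: int, conventional_cache_bytes: int):
    """Return (conventional_tag_bits, paged_tag_bits) per block."""
    offset = (BLOCK - 1).bit_length()  # block-offset bits (5 for 32 B)

    # Conventional direct-mapped cache: tag = address - index - offset.
    conv_index = (conventional_cache_bytes // BLOCK - 1).bit_length()
    conv_tag = ADDR_BITS - conv_index - offset

    # Paged cache: the owning TLB entry identifies the page, so the
    # page-number bits vanish from the tag. Only page-offset bits not
    # covered by the partition index must still be stored.
    page_offset = (PAGE - 1).bit_length()            # 12 bits for 4 KB
    part_index = (partition_bytes // BLOCK - 1).bit_length()
    paged_tag = max(page_offset - offset - part_index, 0)
    return conv_tag, paged_tag

# A partition as large as a page needs no tag at all; a smaller
# partition keeps a partial tag, matching the abstract's claim.
print(tag_bits(4096, 16384))   # full elimination
print(tag_bits(1024, 16384))   # partial tag remains
```

With a page-sized partition the tag disappears entirely (18 bits → 0 in this example); shrinking the partition to 1 KB leaves a 2-bit partial tag, which is the "total or partial" elimination the abstract refers to.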