Victima: Drastically Increasing Address Translation Reach by Leveraging Underutilized Cache Resources

Konstantinos Kanellopoulos, Hong Chul Nam, F. Nisa Bostanci, Rahul Bera, Mohammad Sadrosadati, Rakesh Kumar, Davide-Basilio Bartolini, Onur Mutlu

arXiv - CS - Operating Systems, published 2023-10-06, arXiv:2310.04158
Address translation is a performance bottleneck in data-intensive workloads
due to large datasets and irregular access patterns that lead to frequent
high-latency page table walks (PTWs). PTWs can be reduced by using (i) large
hardware TLBs or (ii) large software-managed TLBs. Unfortunately, both
solutions have significant drawbacks: increased access latency, power and area
(for hardware TLBs), and costly memory accesses, the need for large contiguous
memory blocks, and complex OS modifications (for software-managed TLBs). We
present Victima, a new software-transparent mechanism that drastically
increases the translation reach of the processor by leveraging the
underutilized resources of the cache hierarchy. The key idea of Victima is to
repurpose L2 cache blocks to store clusters of TLB entries, thereby providing
an additional low-latency and high-capacity component that backs up the
last-level TLB and thus reduces PTWs. Victima has two main components. First, a
PTW cost predictor (PTW-CP) identifies costly-to-translate addresses based on
the frequency and cost of the PTWs they lead to. Second, a TLB-aware cache
replacement policy prioritizes keeping TLB entries in the cache hierarchy by
considering (i) the translation pressure (e.g., last-level TLB miss rate) and
(ii) the reuse characteristics of the TLB entries. Our evaluation results show
that in native (virtualized) execution environments Victima improves average
end-to-end application performance by 7.4% (28.7%) over the baseline four-level
radix-tree-based page table design and by 6.2% (20.1%) over a state-of-the-art
software-managed TLB, across 11 diverse data-intensive workloads. Victima (i)
is effective in both native and virtualized environments, (ii) is completely
transparent to application and system software, and (iii) incurs very small
area and power overheads on a modern high-end CPU.
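The two components described above can be illustrated with a minimal, purely behavioral sketch. This is not the paper's implementation (Victima is a hardware mechanism); every class name, threshold, and data structure below is invented for illustration. The sketch models (i) a PTW cost predictor that flags pages whose walks are frequent and costly, and (ii) a cache set whose victim selection protects blocks holding TLB-entry clusters when last-level TLB miss rate (translation pressure) is high, falling back to reuse-based selection otherwise.

```python
# Behavioral sketch of Victima's two components (illustrative only; all
# names and thresholds are assumptions, not the paper's actual design).
from collections import defaultdict

class PTWCostPredictor:
    """Flags virtual pages whose page table walks are frequent AND costly."""
    def __init__(self, freq_threshold=2, cost_threshold=50):
        self.freq = defaultdict(int)   # number of PTWs observed per page
        self.cost = defaultdict(int)   # accumulated PTW latency (cycles) per page
        self.freq_threshold = freq_threshold
        self.cost_threshold = cost_threshold

    def record_ptw(self, vpage, cycles):
        self.freq[vpage] += 1
        self.cost[vpage] += cycles

    def is_costly(self, vpage):
        # Only costly-to-translate pages get their TLB entries cached in L2.
        return (self.freq[vpage] >= self.freq_threshold
                and self.cost[vpage] >= self.cost_threshold)

class Block:
    def __init__(self, tag, is_tlb_cluster, reused=False):
        self.tag = tag
        self.is_tlb_cluster = is_tlb_cluster  # True: repurposed to hold TLB entries
        self.reused = reused                  # set when the block is hit after insertion

class TLBAwareSet:
    """One cache set; victim choice considers translation pressure and reuse."""
    def __init__(self, ways=8, pressure_threshold=0.1):
        self.ways = ways
        self.pressure_threshold = pressure_threshold
        self.blocks = []

    def choose_victim(self, tlb_miss_rate):
        under_pressure = tlb_miss_rate > self.pressure_threshold
        def keep_score(b):
            score = 1 if b.reused else 0          # reuse: generic protection
            if under_pressure and b.is_tlb_cluster:
                score += 2                        # high pressure: protect TLB clusters
            return score
        return min(self.blocks, key=keep_score)   # evict lowest keep-score

    def insert(self, block, tlb_miss_rate):
        if len(self.blocks) < self.ways:
            self.blocks.append(block)
            return None
        victim = self.choose_victim(tlb_miss_rate)
        self.blocks.remove(victim)
        self.blocks.append(block)
        return victim
```

Under high translation pressure the policy evicts non-reused data blocks first and TLB clusters last, which is one way to capture the abstract's point that TLB entries are prioritized only when the last-level TLB miss rate indicates they pay off; with low pressure, reuse alone decides and data blocks are not penalized.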