Radiant: efficient page table management for tiered memory systems
Sandeep Kumar, Aravinda Prasad, S. Sarangi, S. Subramoney
Proceedings of the 2021 ACM SIGPLAN International Symposium on Memory Management, June 22, 2021
DOI: 10.1145/3459898.3463907
Abstract
Modern enterprise servers are increasingly embracing tiered memory systems with a combination of low latency DRAMs and large capacity but high latency non-volatile main memories (NVMMs) such as Intel’s Optane DC PMM. Prior works have focused on the efficient placement and migration of data on a tiered memory system, but have not studied the optimal placement of page tables. Explicit and efficient placement of page tables is crucial for large memory footprint applications with high TLB miss rates because they incur dramatically higher page walk latency when page table pages are placed in NVMM. We show that (i) page table pages can end up on NVMM even when enough DRAM memory is available and (ii) page table pages that spill over to NVMM due to DRAM memory pressure are not migrated back later when memory becomes available in DRAM. We study the performance impact of page table placement in a tiered memory system and propose Radiant, an efficient and transparent page table management technique that (i) applies different placement policies for data and page table pages, (ii) introduces a differentiating policy for page table pages by placing a small but critical part of the page table in DRAM, and (iii) dynamically and judiciously manages the rest of the page table by transparently migrating the page table pages between DRAM and NVMM. Our implementation on a real system equipped with Intel’s Optane NVMM running Linux reduces the page table walk cycles by 12% and total cycles by 20% on average. This improves the runtime by 20% on average for a set of synthetic and real-world large memory footprint applications when compared with various default Linux kernel techniques.
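The abstract describes a placement policy with two parts: pin the small, critical upper levels of the page table in DRAM, and let leaf page table pages spill to NVMM under DRAM pressure but migrate them back once DRAM frees up. The following is a minimal user-space C sketch of that policy, not the paper's kernel implementation; the pool sizes, structure layout, and the alloc_pt_page/migrate_back helpers are hypothetical and exist only to make the placement and migration logic concrete.

```c
/*
 * Illustrative sketch (not the authors' code): simulate Radiant-style
 * placement of page table pages across two memory tiers.
 * Assumptions: DRAM and NVMM are modeled as simple free-page counters,
 * upper-level page table pages (PGD/PUD/PMD) always go to DRAM, and
 * leaf PTE pages spill to NVMM under DRAM pressure and are migrated
 * back by a later pass when DRAM space is available.
 */
#include <stdio.h>
#include <stdlib.h>

enum tier { DRAM, NVMM };

struct pool    { int free_pages; };                 /* per-tier free pages  */
struct pt_page { enum tier where; int is_leaf; };   /* one page table page  */

static struct pool pools[2] = { { .free_pages = 4 },      /* tiny DRAM pool  */
                                { .free_pages = 1024 } }; /* large NVMM pool */

/* Allocate one page table page according to the placement policy. */
static struct pt_page *alloc_pt_page(int is_leaf)
{
    struct pt_page *p = malloc(sizeof(*p));
    p->is_leaf = is_leaf;

    if (!is_leaf || pools[DRAM].free_pages > 0) {
        /* Upper levels are small and assumed to always fit in DRAM;
         * leaf pages also prefer DRAM while it has free pages. */
        pools[DRAM].free_pages--;
        p->where = DRAM;
    } else {
        /* Leaf PTE page spills to NVMM under DRAM pressure. */
        pools[NVMM].free_pages--;
        p->where = NVMM;
    }
    return p;
}

/* Periodic pass: move spilled leaf pages back once DRAM has room. */
static void migrate_back(struct pt_page **pages, int n)
{
    for (int i = 0; i < n && pools[DRAM].free_pages > 0; i++) {
        if (pages[i]->where == NVMM) {
            pages[i]->where = DRAM;
            pools[DRAM].free_pages--;
            pools[NVMM].free_pages++;
        }
    }
}

int main(void)
{
    struct pt_page *pages[8];

    /* Toy walk path: 3 upper-level pages followed by 5 leaf PTE pages. */
    for (int i = 0; i < 3; i++) pages[i] = alloc_pt_page(0);
    for (int i = 3; i < 8; i++) pages[i] = alloc_pt_page(1);

    int spilled = 0;
    for (int i = 0; i < 8; i++) spilled += (pages[i]->where == NVMM);
    printf("leaf pages spilled to NVMM: %d\n", spilled);

    /* Simulate DRAM being freed, then run the migration pass. */
    pools[DRAM].free_pages += 8;
    migrate_back(pages, 8);

    spilled = 0;
    for (int i = 0; i < 8; i++) spilled += (pages[i]->where == NVMM);
    printf("leaf pages on NVMM after migration: %d\n", spilled);

    for (int i = 0; i < 8; i++) free(pages[i]);
    return 0;
}
```

In this toy run the upper levels and one leaf page consume the small DRAM pool, the remaining leaf pages spill to NVMM, and the migration pass pulls them back once DRAM is freed, mirroring observations (i) and (ii) and policy (iii) from the abstract. The real mechanism operates transparently inside the Linux kernel's page table allocation and migration paths rather than in user space.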