{"title":"Motivating Next-Generation OS Physical Memory Management for Terabyte-Scale NVMMs","authors":"Shivank Garg, Aravinda Prasad, Debadatta Mishra, Sreenivas Subramoney","doi":"arxiv-2310.03370","DOIUrl":null,"url":null,"abstract":"Software managed byte-addressable hybrid memory systems consisting of DRAMs\nand NVMMs offer a lot of flexibility to design efficient large scale data\nprocessing applications. Operating systems (OS) play an important role in\nenabling the applications to realize the integrated benefits of DRAMs' low\naccess latency and NVMMs' large capacity along with its persistent\ncharacteristics. In this paper, we comprehensively analyze the performance of\nconventional OS physical memory management subsystems that were designed only\nbased on the DRAM memory characteristics in the context of modern hybrid\nbyte-addressable memory systems. To study the impact of high access latency and large capacity of NVMMs on\nphysical memory management, we perform an extensive evaluation on Linux with\nIntel's Optane NVMM. We observe that the core memory management functionalities\nsuch as page allocation are negatively impacted by high NVMM media latency,\nwhile functionalities such as conventional fragmentation management are\nrendered inadequate. We also demonstrate that certain traditional memory\nmanagement functionalities are affected by neither aspects of modern NVMMs. We\nconclusively motivate the need to overhaul fundamental aspects of traditional\nOS physical memory management in order to fully exploit terabyte-scale NVMMs.","PeriodicalId":501333,"journal":{"name":"arXiv - CS - Operating Systems","volume":"26 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Operating Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2310.03370","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Software-managed, byte-addressable hybrid memory systems consisting of DRAM and NVMM offer considerable flexibility for designing efficient large-scale data processing applications. The operating system (OS) plays an important role in enabling applications to realize the combined benefits of DRAM's low access latency and NVMM's large capacity and persistence. In this paper, we comprehensively analyze the performance of conventional OS physical memory management subsystems, which were designed solely around DRAM characteristics, in the context of modern hybrid byte-addressable memory systems.

To study the impact of the high access latency and large capacity of NVMMs on physical memory management, we perform an extensive evaluation on Linux with Intel's Optane NVMM. We observe that core memory management functionalities such as page allocation are negatively impacted by high NVMM media latency, while functionalities such as conventional fragmentation management are rendered inadequate. We also demonstrate that certain traditional memory management functionalities are affected by neither aspect of modern NVMMs. We conclusively motivate the need to overhaul fundamental aspects of traditional OS physical memory management in order to fully exploit terabyte-scale NVMMs.
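The kind of measurement the abstract alludes to, page allocation cost on DRAM versus NVMM, can be approximated in user space. The sketch below (not from the paper) assumes the Optane NVMM is exposed to Linux as a separate NUMA node (e.g., via the devdax/KMEM path), binds an anonymous mapping to a chosen node with mbind(), and times the first-touch faults that force the kernel to allocate and zero each page. The node IDs and region size are illustrative assumptions; link with -lnuma.

```c
/* Minimal sketch: average first-touch page-allocation latency on a given
 * NUMA node. Assumes NVMM appears as its own NUMA node (an assumption,
 * not something the paper specifies). Build: gcc -O2 alloc_lat.c -lnuma */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/mman.h>
#include <numaif.h>              /* mbind(), MPOL_BIND */

#define NPAGES (1UL << 18)       /* 256K pages = 1 GiB with 4 KiB pages */
#define PAGESZ 4096UL

static double touch_pages_on_node(int node)
{
    size_t len = NPAGES * PAGESZ;
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); exit(1); }

    /* Bind the still-unpopulated region to the requested NUMA node. */
    unsigned long nodemask = 1UL << node;
    if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0)) {
        perror("mbind"); exit(1);
    }

    /* First touch triggers the page fault, allocation, and zeroing. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < len; i += PAGESZ)
        buf[i] = 1;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    munmap(buf, len);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / NPAGES;          /* average nanoseconds per allocated page */
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <dram-node> <nvmm-node>\n", argv[0]);
        return 1;
    }
    printf("DRAM node %s: %.0f ns/page\n", argv[1],
           touch_pages_on_node(atoi(argv[1])));
    printf("NVMM node %s: %.0f ns/page\n", argv[2],
           touch_pages_on_node(atoi(argv[2])));
    return 0;
}
```

Such a microbenchmark only captures the first-touch allocation path; it does not reproduce the paper's full evaluation of fragmentation management or other subsystems, but it illustrates where NVMM media latency enters the page allocation cost.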