Proactively Breaking Large Pages to Improve Memory Overcommitment Performance in VMware ESXi

Fei Guo, Seongbeom Kim, Y. Baskakov, Ishan Banerjee
{"title":"Proactively Breaking Large Pages to Improve Memory Overcommitment Performance in VMware ESXi","authors":"Fei Guo, Seongbeom Kim, Y. Baskakov, Ishan Banerjee","doi":"10.1145/2731186.2731187","DOIUrl":null,"url":null,"abstract":"VMware ESXi leverages hardware support for MMU virtualization available in modern Intel/AMD CPUs. To optimize address translation performance when running on such CPUs, ESXi preferably uses host large pages (2MB in x86-64 systems) to back VM's guest memory. While using host large pages provides best performance when host has sufficient free memory, it increases host memory pressure and effectively defeats page sharing. Hence, the host is more likely to hit the point where ESXi has to reclaim VM memory through much more expensive techniques such as ballooning or host swapping. As a result, using host large pages may significantly hurt consolidation ratio. To deal with this problem, we propose a new host large page management policy that allows to: a) identify 'cold' large pages and break them even when host has plenty of free memory; b) break all large pages proactively when host free memory becomes scarce, but before the host starts ballooning or swapping; c) reclaim the small pages within the broken large pages through page sharing. With the new policy, the shareable small pages can be shared much earlier and the amount of memory that needs to be ballooned or swapped can be largely reduced when host memory pressure is high. We also propose an algorithm to dynamically adjust the page sharing rate when proactively breaking large pages using a VM large page shareability estimator for higher efficiency. Experimental results show that the proposed large page management policy can improve the performance of various workloads up to 2.1x by significantly reducing the amount of ballooned or swapped memory when host memory pressure is high. Applications still fully benefit from host large pages when memory pressure is low.","PeriodicalId":186972,"journal":{"name":"Proceedings of the 11th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 11th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2731186.2731187","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 39

Abstract

VMware ESXi leverages hardware support for MMU virtualization available in modern Intel/AMD CPUs. To optimize address translation performance when running on such CPUs, ESXi preferentially uses host large pages (2MB on x86-64 systems) to back a VM's guest memory. While using host large pages provides the best performance when the host has sufficient free memory, it increases host memory pressure and effectively defeats page sharing. Hence, the host is more likely to reach the point where ESXi has to reclaim VM memory through much more expensive techniques such as ballooning or host swapping. As a result, using host large pages may significantly hurt the consolidation ratio. To deal with this problem, we propose a new host large page management policy that makes it possible to: a) identify 'cold' large pages and break them even when the host has plenty of free memory; b) break all large pages proactively when host free memory becomes scarce, but before the host starts ballooning or swapping; c) reclaim the small pages within the broken large pages through page sharing. With the new policy, shareable small pages can be shared much earlier, and the amount of memory that needs to be ballooned or swapped when host memory pressure is high can be greatly reduced. We also propose an algorithm that dynamically adjusts the page sharing rate when proactively breaking large pages, using a VM large page shareability estimator for higher efficiency. Experimental results show that the proposed large page management policy can improve the performance of various workloads by up to 2.1x by significantly reducing the amount of ballooned or swapped memory when host memory pressure is high. Applications still fully benefit from host large pages when memory pressure is low.
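
The abstract describes a two-mode policy: break only 'cold' large pages while free memory is plentiful, and break all large pages proactively once free memory becomes scarce but before ballooning or swapping starts, then reclaim the resulting small pages through page sharing. The following is a minimal Python sketch of that decision logic only; all names and thresholds (LargePage, Host, free_fraction, LOW_FREE_THRESHOLD, share_small_pages) are illustrative assumptions, not ESXi internals.

```python
# Hypothetical sketch of the large-page management policy outlined in the
# abstract. Names and thresholds are assumed for illustration.

from dataclasses import dataclass, field
from typing import List

SMALL_PAGES_PER_LARGE = 512   # one 2MB large page = 512 x 4KB small pages
LOW_FREE_THRESHOLD = 0.10     # assumed point where free memory is "scarce"

@dataclass
class LargePage:
    cold: bool = False        # no recent guest accesses (e.g., tracked via access bits)
    broken: bool = False      # already demoted to small-page mappings

@dataclass
class Host:
    large_pages: List[LargePage] = field(default_factory=list)
    free_fraction: float = 1.0  # fraction of host memory currently free

def share_small_pages(lp: LargePage) -> int:
    """Queue the small pages of a broken large page for page sharing.
    Placeholder: returns how many pages were queued."""
    return SMALL_PAGES_PER_LARGE

def manage_large_pages(host: Host) -> None:
    if host.free_fraction < LOW_FREE_THRESHOLD:
        # Free memory is scarce: break all remaining large pages proactively,
        # before the host resorts to ballooning or swapping.
        candidates = [lp for lp in host.large_pages if not lp.broken]
    else:
        # Plenty of free memory: break only 'cold' large pages, keeping hot
        # pages large to preserve address translation performance.
        candidates = [lp for lp in host.large_pages if lp.cold and not lp.broken]

    for lp in candidates:
        lp.broken = True            # demote the 2MB mapping to 4KB mappings
        share_small_pages(lp)       # let page sharing reclaim duplicate pages
```

Breaking large pages ahead of the reclamation threshold lets page sharing collapse duplicate 4KB pages first, which is far cheaper than ballooning or host swapping once memory pressure is already high.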