PADD: Power Aware Domain Distribution

M. Lim, F. Rawson, T. Bletsch, V. Freeh
Published in: 2009 29th IEEE International Conference on Distributed Computing Systems (ICDCS 2009)
Publication date: 2009-06-22
DOI: 10.1109/ICDCS.2009.47
Citations: 46

Abstract

Modern data centers usually have computing resources sized to handle expected peak demand, but average demand is generally much lower than peak. This means that the systems in the data center usually operate at very low utilization rates. Past techniques have exploited this fact to achieve significant power savings, but they generally focus on centrally managed, throughput-oriented systems that process a single fine-grained request stream. We propose a more general solution: a technique to save power by dynamically migrating virtual machines and packing them onto fewer physical machines when possible. We call our scheme Power-Aware Domain Distribution (PADD). In this paper, we report on simulation results for PADD and demonstrate that the power and performance changes from using PADD are primarily dependent on how much buffering or reserve capacity it maintains. Our adaptive buffering scheme achieves energy savings within 7% of the idealized system that has no performance penalty. Our results also show that we can achieve energy savings of up to 70% with fewer than 1% of the requests violating their service level agreements.
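The core idea in the abstract — packing virtual machine domains onto as few physical machines as possible while holding back some reserve capacity to absorb demand spikes — can be illustrated with a minimal consolidation sketch. This is not the paper's actual algorithm: `consolidate`, the first-fit-decreasing heuristic, and the `buffer_frac` parameter are all hypothetical simplifications chosen here to show how a reserve buffer trades consolidation (power savings) against headroom (performance).

```python
def consolidate(domains, machine_capacity, buffer_frac=0.2):
    """Pack domains onto as few machines as possible, reserving a
    fraction of each machine's capacity as a buffer for load spikes.

    domains: list of (name, current_demand) pairs.
    Returns a list of machines, each a list of placed (name, demand) pairs;
    any physical machine not in the returned list could be powered down.

    Hypothetical first-fit-decreasing sketch, not the PADD algorithm itself.
    """
    usable = machine_capacity * (1.0 - buffer_frac)  # capacity minus reserve
    placements = []  # placements[i]: domains assigned to machine i
    loads = []       # loads[i]: total demand currently on machine i
    # Place the largest domains first (first-fit decreasing).
    for name, demand in sorted(domains, key=lambda d: -d[1]):
        for i, load in enumerate(loads):
            if load + demand <= usable:
                placements[i].append((name, demand))
                loads[i] += demand
                break
        else:
            # No existing machine has room under the buffer limit;
            # bring another machine online.
            placements.append([(name, demand)])
            loads.append(demand)
    return placements
```

A larger `buffer_frac` leaves more headroom on each active machine (fewer SLA violations when demand rises) but keeps more machines powered on; the paper's adaptive scheme can be read as tuning this reserve dynamically.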