Load balancing in cloud-based content delivery networks using adaptive server activation/deactivation

M. Mashaly, P. Kuhn
DOI: 10.1109/ICENGTECHNOL.2012.6396140
Published in: 2012 International Conference on Engineering and Technology (ICET)
Publication date: 2012-09-04
Citations: 18

Abstract

Content delivery networks have been widely used for many years, serving millions of users. Recently, many of these networks have been migrating to the cloud for its numerous advantages, such as lower costs, increased performance and availability, and flexibility in provisioning new resources. This paper introduces a new approach to load balancing and power reduction in cloud-based content delivery networks, allowing for a controlled scaling of operational parameters within the trade-off between power consumption and quality of experience (QoE). By applying a newly proposed adaptive server activation/deactivation model at each data center in the cloud, unutilized servers can be switched off to reduce power consumption. This adaptive model also allows a data center to maintain its performance at utilization levels of up to 95%. Performance measures of data centers are crucial, as there are service level agreements (SLAs) with cloud subscribers that must not be violated; most important is the latency of users' requests. The proposed load balancing algorithm keeps request latency bounded by shifting load off a data center only when it is almost fully loaded. When the load in a data center exceeds a critical level, processes are migrated to underloaded data centers within the cloud. The self-controlled adaptive resource management simplifies cloud management.
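The policy described in the abstract can be sketched as a simple threshold-based loop: each data center activates servers as utilization approaches the critical level, deactivates idle ones to save power, and offloads excess requests to underloaded centers only once it is almost fully loaded. The sketch below is an illustration under stated assumptions, not the authors' model; the class names, the low-utilization threshold, and the migration rule are hypothetical, with only the 95% critical level taken from the abstract.

```python
class DataCenter:
    """Minimal model of a data center with adaptive server activation/deactivation."""

    def __init__(self, name, total_servers, server_capacity):
        self.name = name
        self.total_servers = total_servers
        self.server_capacity = server_capacity  # requests one server can handle
        self.active_servers = 1                 # keep at least one server on
        self.load = 0                           # current request load

    def utilization(self):
        return self.load / (self.active_servers * self.server_capacity)

    def adapt_servers(self, high=0.95, low=0.40):
        # Activate servers while utilization exceeds the critical level.
        while self.utilization() > high and self.active_servers < self.total_servers:
            self.active_servers += 1
        # Deactivate a server only if the remaining ones stay comfortably loaded
        # (the 0.40 threshold is an assumption, not from the paper).
        while self.active_servers > 1 and \
                self.load / ((self.active_servers - 1) * self.server_capacity) < low:
            self.active_servers -= 1


def balance(centers, critical=0.95):
    """Shift load off a data center only when it is almost fully loaded,
    moving the excess to underloaded centers in the cloud."""
    for dc in centers:
        dc.adapt_servers()
    for dc in centers:
        cap = dc.total_servers * dc.server_capacity
        if dc.load > critical * cap:            # all servers on and still overloaded
            excess = dc.load - int(critical * cap)
            for other in centers:
                if other is dc:
                    continue
                room = int(critical * other.total_servers * other.server_capacity) - other.load
                moved = min(excess, max(room, 0))
                other.load += moved
                dc.load -= moved
                excess -= moved
                other.adapt_servers()           # receiver may need to power servers on
                if excess <= 0:
                    break
```

Because migration is triggered only above the critical level, lightly loaded centers never shed load, which matches the abstract's claim that request latency is kept bounded by moving work only when a center is almost fully loaded.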