Proactive Re-replication Strategy in HDFS based Cloud Data Center

T. Shwe, M. Aritsugi
{"title":"基于HDFS的云数据中心的主动重复制策略","authors":"T. Shwe, M. Aritsugi","doi":"10.1145/3147213.3147221","DOIUrl":null,"url":null,"abstract":"Cloud storage systems use data replication for fault tolerance, data availability and load balancing. In the presence of node failures, data blocks on the failed nodes are re-replicated to other remaining nodes in the system randomly, thus leading to workload imbalance. Balancing all the server workloads namely, re-replication workload and current running user's application workload during the re-replication phase has not been adequately addressed. With a reactive approach, re-replication can be scheduled based on current resource utilization but by the time replication kicks off, actual resource usage may have changed as resources are continuously in use. In this paper, we propose a proactive re-replication strategy that uses predicted CPU utilization, predicted disk utilization, and popularity of the replicas to perform re-replication effectively while ensuring all the server workloads are balanced. We consider both reliability of a data block and performance status of nodes in making decision for re-replication. Simulation results from synthetic workload data demonstrate that all the servers' utilization is balanced and our approach improves performance in terms of re-replication throughput and re-replication time compared to baseline Hadoop Distributed File System (HDFS). Our proactive approach maintains the balance of resource utilization and avoids the occurrence of servers' overload condition during re-replication.","PeriodicalId":341011,"journal":{"name":"Proceedings of the10th International Conference on Utility and Cloud Computing","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Proactive Re-replication Strategy in HDFS based Cloud Data Center\",\"authors\":\"T. Shwe, M. Aritsugi\",\"doi\":\"10.1145/3147213.3147221\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cloud storage systems use data replication for fault tolerance, data availability and load balancing. In the presence of node failures, data blocks on the failed nodes are re-replicated to other remaining nodes in the system randomly, thus leading to workload imbalance. Balancing all the server workloads namely, re-replication workload and current running user's application workload during the re-replication phase has not been adequately addressed. With a reactive approach, re-replication can be scheduled based on current resource utilization but by the time replication kicks off, actual resource usage may have changed as resources are continuously in use. In this paper, we propose a proactive re-replication strategy that uses predicted CPU utilization, predicted disk utilization, and popularity of the replicas to perform re-replication effectively while ensuring all the server workloads are balanced. We consider both reliability of a data block and performance status of nodes in making decision for re-replication. Simulation results from synthetic workload data demonstrate that all the servers' utilization is balanced and our approach improves performance in terms of re-replication throughput and re-replication time compared to baseline Hadoop Distributed File System (HDFS). 
Our proactive approach maintains the balance of resource utilization and avoids the occurrence of servers' overload condition during re-replication.\",\"PeriodicalId\":341011,\"journal\":{\"name\":\"Proceedings of the10th International Conference on Utility and Cloud Computing\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the10th International Conference on Utility and Cloud Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3147213.3147221\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the10th International Conference on Utility and Cloud Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3147213.3147221","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

Cloud storage systems use data replication for fault tolerance, data availability, and load balancing. When nodes fail, data blocks on the failed nodes are re-replicated randomly to the remaining nodes in the system, leading to workload imbalance. Balancing all server workloads, namely the re-replication workload and the currently running user application workload, during the re-replication phase has not been adequately addressed. With a reactive approach, re-replication can be scheduled based on current resource utilization, but by the time replication kicks off, actual resource usage may have changed because resources are continuously in use. In this paper, we propose a proactive re-replication strategy that uses predicted CPU utilization, predicted disk utilization, and the popularity of replicas to perform re-replication effectively while keeping all server workloads balanced. We consider both the reliability of a data block and the performance status of nodes when making re-replication decisions. Simulation results on synthetic workload data demonstrate that utilization is balanced across all servers and that our approach improves re-replication throughput and re-replication time compared to baseline Hadoop Distributed File System (HDFS). Our proactive approach maintains balanced resource utilization and avoids server overload during re-replication.
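The abstract does not give the paper's prediction model or scoring rule, but the decision it describes, skipping nodes whose predicted CPU or disk utilization would overload them and steering popular replicas toward the least-loaded survivors, can be sketched as follows. Everything in this sketch (the moving-average predictor, the 0.85 threshold, the 0.5/0.5 weighting, and all names such as choose_targets) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of proactive re-replication target selection, under the
# assumptions stated above. Not the paper's actual predictor or formula.

from dataclasses import dataclass
from typing import List


@dataclass
class Node:
    name: str
    cpu_history: List[float]          # recent CPU utilization samples, 0.0-1.0
    disk_history: List[float]         # recent disk utilization samples, 0.0-1.0
    overload_threshold: float = 0.85  # assumed cap on predicted utilization


def predict(history: List[float], window: int = 3) -> float:
    """Stand-in predictor: mean of the last `window` samples.

    The paper predicts future CPU/disk utilization; the concrete model is
    not given in the abstract, so a moving average is used as a placeholder.
    """
    recent = history[-window:]
    return sum(recent) / len(recent)


def choose_targets(nodes: List[Node], block_popularity: float,
                   replicas_needed: int) -> List[Node]:
    """Rank candidate nodes for re-replicating one lost block.

    Nodes whose predicted CPU or disk utilization exceeds the threshold are
    skipped to avoid overload; the rest are ordered by a weighted score so
    that popular (frequently read) blocks land on the least-loaded nodes
    first. The 0.5/0.5 weighting is an arbitrary illustrative choice.
    """
    candidates = []
    for node in nodes:
        cpu = predict(node.cpu_history)
        disk = predict(node.disk_history)
        if max(cpu, disk) >= node.overload_threshold:
            continue  # proactively skip nodes expected to be overloaded
        load = 0.5 * cpu + 0.5 * disk
        # Popular blocks weight the load more heavily, pushing their new
        # replicas toward the most lightly loaded nodes.
        candidates.append((load * (1.0 + block_popularity), node))
    candidates.sort(key=lambda pair: pair[0])
    return [node for _, node in candidates[:replicas_needed]]


if __name__ == "__main__":
    cluster = [
        Node("dn1", [0.20, 0.25, 0.30], [0.40, 0.42, 0.45]),
        Node("dn2", [0.80, 0.85, 0.90], [0.70, 0.75, 0.80]),  # trending to overload
        Node("dn3", [0.10, 0.15, 0.10], [0.20, 0.22, 0.21]),
    ]
    # Re-replicate a moderately popular block (popularity normalized to
    # 0.0-1.0) onto two of the remaining nodes.
    targets = choose_targets(cluster, block_popularity=0.6, replicas_needed=2)
    print([n.name for n in targets])  # ['dn3', 'dn1']; dn2 is skipped
```

Note the proactive element: decisions use predicted rather than current utilization, so a node like dn2 whose load is still rising is excluded even if its instantaneous utilization at scheduling time were acceptable, which is precisely the gap the abstract identifies in reactive scheduling.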