Sparker: Optimizing Spark for Heterogeneous Clusters

Nishank Garg, D. Janakiram
{"title":"Sparker: Optimizing Spark for Heterogeneous Clusters","authors":"Nishank Garg, D. Janakiram","doi":"10.1109/CloudCom2018.2018.00017","DOIUrl":null,"url":null,"abstract":"Spark is an in-memory big data analytics framework which has replaced Hadoop as the de facto standard for processing big data in cloud platforms. These frameworks run on cloud platforms where heterogeneity is a common scenario. Heterogeneity gets introduced due to the failure, addition or upgradation of nodes in the cloud platforms. It can arise from various factors such as variation in the number of CPU cores, amount of memory, disk read/write latencies across the nodes, etc. These factors have a significant impact on the performance of Spark jobs. Spark supports execution of a job on equal-sized executors which can result in under allocation of resources in a heterogeneous cluster. Insufficient resources can severely degrade the performance of CPU and memory intensive applications like machine learning, graph processing, etc. Existing techniques use equal-sized executors which can degrade the performance of jobs in heterogeneous environments. In this paper, we propose Sparker, an efficient resource-aware optimization strategy for Spark in heterogeneous clusters. It overcomes the limitation of heterogeneity in terms of CPU and memory resources by modifying the size of the executor. The executors are re-sized based on the available resources of the node. We have modified Spark source code to incorporate executor re-sizing strategy. Experimental evaluation on SparkBench benchmark shows that our approach achieves a reduction of up to 46% in execution time.","PeriodicalId":365939,"journal":{"name":"2018 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CloudCom2018.2018.00017","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Spark is an in-memory big data analytics framework that has replaced Hadoop as the de facto standard for processing big data on cloud platforms. These frameworks run on cloud platforms where heterogeneity is a common scenario. Heterogeneity is introduced by the failure, addition, or upgrading of nodes, and can arise from factors such as variation in the number of CPU cores, the amount of memory, and disk read/write latencies across nodes. These factors have a significant impact on the performance of Spark jobs. Spark executes a job on equal-sized executors, which can result in under-allocation of resources in a heterogeneous cluster. Insufficient resources can severely degrade the performance of CPU- and memory-intensive applications such as machine learning and graph processing. Existing techniques use equal-sized executors, which can degrade the performance of jobs in heterogeneous environments. In this paper, we propose Sparker, an efficient resource-aware optimization strategy for Spark in heterogeneous clusters. It overcomes the limitation of heterogeneity in CPU and memory resources by modifying the size of each executor: executors are re-sized based on the available resources of the node. We have modified the Spark source code to incorporate this executor re-sizing strategy. Experimental evaluation on the SparkBench benchmark shows that our approach achieves a reduction of up to 46% in execution time.
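Stock Spark sizes every executor of an application identically (a single spark.executor.cores / spark.executor.memory setting applies cluster-wide), which is exactly the limitation the abstract describes. The sketch below is a minimal, hypothetical illustration in Scala (the language of Spark's own source) of the per-node sizing idea: derive each node's executor dimensions from the resources that node actually offers. The names NodeResources, ExecutorSpec, and sizeExecutorsFor, along with the headroom constants, are assumptions made for illustration; they are not the authors' actual patch to Spark.

```scala
// A minimal sketch of resource-aware executor sizing: instead of one
// global executor size, carve each node into executors proportional
// to the resources it actually offers. Names and constants here are
// hypothetical, not the paper's implementation.

case class NodeResources(host: String, cores: Int, memoryMb: Long)
case class ExecutorSpec(host: String, cores: Int, memoryMb: Long, count: Int)

object ExecutorSizer {
  def sizeExecutorsFor(node: NodeResources,
                       coresPerExecutor: Int = 4,
                       reservedMemMb: Long = 1024L): ExecutorSpec = {
    // Leave one core and some memory as headroom for the OS and daemons.
    val usableCores = math.max(node.cores - 1, 1)
    val usableMem   = math.max(node.memoryMb - reservedMemMb, 512L)
    // As many executors as the node's cores allow, at least one.
    val count       = math.max(usableCores / coresPerExecutor, 1)
    ExecutorSpec(node.host,
                 cores    = math.min(coresPerExecutor, usableCores),
                 memoryMb = usableMem / count,   // split memory across executors
                 count    = count)
  }
}

object Demo extends App {
  val cluster = Seq(
    NodeResources("node-1", cores = 16, memoryMb = 65536L),  // large node
    NodeResources("node-2", cores = 4,  memoryMb = 8192L)    // small node
  )
  // Equal-sized executors would be capped by node-2; here each node
  // receives executors matched to its own capacity.
  cluster.map(n => ExecutorSizer.sizeExecutorsFor(n)).foreach(println)
}
```

On this two-node example, the large node would be carved into three 4-core executors of roughly 21 GB each, while the small node hosts a single 3-core, 7 GB executor, instead of both nodes being forced down to the lowest common executor size.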