Memory Abstraction and Optimization for Distributed Executors

S. Sahin, Ling Liu, Wenqi Cao, Qi Zhang, Juhyun Bae, Yanzhao Wu
2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC), December 2020
DOI: 10.1109/CIC50333.2020.00019

Abstract

This paper presents a suite of memory abstraction and optimization techniques for distributed executors, focusing on the performance optimization opportunities for Spark executors, which are known to outperform Hadoop MapReduce executors by leveraging Resilient Distributed Datasets (RDDs), a core abstraction of Spark. This paper makes three original contributions. First, we show that Spark applications suffer large performance deterioration when an RDD is too large to fit in memory, causing unbalanced memory utilization and premature spilling. Second, we develop a suite of techniques to guide the configuration of RDDs in Spark executors, aiming to optimize the performance of iterative ML workloads on Spark executors when the allocated memory is sufficient for RDD caching. Third, we design DAHI, a lightweight RDD optimizer. DAHI provides three enhancements to Spark: (i) elastic executors instead of fixed-size JVM executors; (ii) support for coarser-grained tasks and large RDDs by enabling partial RDD caching; and (iii) automatic use of remote memory for secondary RDD caching when primary RDD caching capacity on the local node is insufficient. Extensive experiments on machine learning and graph processing benchmarks show that with DAHI, the performance of ML workloads and applications on Spark improves by up to 12.4x.
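DAHI's enhancements (ii) and (iii) can be illustrated abstractly: partitions that fit in the local (primary) cache stay there, while overflow partitions go to a remote (secondary) memory tier rather than being spilled to disk or dropped and recomputed from lineage. The Python sketch below is a hypothetical, simplified model of this two-tier policy; the class and method names (`TieredPartitionCache`, `put`, `get`) are illustrative and are not part of Spark or DAHI.

```python
# Illustrative two-tier partition cache: a bounded primary (local memory)
# tier with overflow to a secondary (remote memory) tier. A sketch of the
# idea only, not DAHI's actual implementation.

class TieredPartitionCache:
    def __init__(self, primary_capacity_bytes):
        self.primary_capacity = primary_capacity_bytes
        self.primary_used = 0
        self.primary = {}    # partition_id -> (data, size_bytes)
        self.secondary = {}  # stands in for a remote-memory node

    def put(self, partition_id, data, size_bytes):
        # Partial RDD caching: only the partitions that fit are kept in
        # the primary tier, instead of failing or spilling the whole RDD.
        if self.primary_used + size_bytes <= self.primary_capacity:
            self.primary[partition_id] = (data, size_bytes)
            self.primary_used += size_bytes
        else:
            # Overflow goes to the secondary (remote memory) tier.
            self.secondary[partition_id] = (data, size_bytes)

    def get(self, partition_id):
        # A primary hit is cheapest; a secondary hit still avoids
        # recomputing the partition from its RDD lineage.
        if partition_id in self.primary:
            return self.primary[partition_id][0]
        if partition_id in self.secondary:
            return self.secondary[partition_id][0]
        return None  # miss: caller must recompute from lineage


cache = TieredPartitionCache(primary_capacity_bytes=100)
cache.put("p0", [1, 2, 3], size_bytes=60)  # fits in primary
cache.put("p1", [4, 5, 6], size_bytes=60)  # overflows to secondary
print(cache.get("p0"), cache.get("p1"))    # both served from memory
```

Note the design choice this models: the secondary tier is a cache of equal correctness but higher latency, so a `get` never fails outright as long as one of the tiers holds the partition.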