In the Quest of Trade-off between Job Parallelism and Throughput in Hadoop: A Stochastic Learning Approach for Parameter Tuning on the Fly

Ramesh Pokhrel, A. Rauniyar, A. Yazidi
{"title":"In the Quest of Trade-off between Job Parallelism and Throughput in Hadoop: A Stochastic Learning Approach for Parameter Tuning on the Fly","authors":"Ramesh Pokhrel, A. Rauniyar, A. Yazidi","doi":"10.1109/PDCAT46702.2019.00086","DOIUrl":null,"url":null,"abstract":"With the emergence of the concept of big data, Hadoop MapReduce has been the de facto standard programming model for processing a large amount of data stored on the different cluster nodes in a distributed manner. It is known that the implementation of MapReduce operation with the default configuration yields a low number of parallel running jobs. In fact, poor resource utilization and overall low performance are usually induced by the default configuration. Although a myriad of works has been carried out in the literature for optimally configuring Hadoop MapReduce, the absolute vast majority of those works only consider offline and static configuration. Those approaches are clearly ineffective as the load might change during execution requiring tuning again the configuration parameters. In this work, we rather focus on dynamical and adaptively configuring Hadoop MapReduce by changing the system level Maximum Application Master Resource in Percent (MARP) parameter on the fly. We show that adaptively tuning the MARP parameter yields a good trade-off between job parallelism and throughput. To achieve this, an optimal design which we call Adaptive Parameter Tuning of Hadoop (APTH) based on a novel variant of the Tsetlin Automata is devised. Comprehensive experimental results show that the resources are optimally and appropriately utilized, resulting in better job parallelism and throughput. Furthermore, it is found that our APTH approach spends 47% less time for job execution as compared to the default configuration.","PeriodicalId":166126,"journal":{"name":"2019 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"88 36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PDCAT46702.2019.00086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

With the emergence of the concept of big data, Hadoop MapReduce has become the de facto standard programming model for processing large amounts of data stored across different cluster nodes in a distributed manner. It is known that running MapReduce operations with the default configuration yields a low number of jobs running in parallel. In fact, poor resource utilization and overall low performance are usually induced by the default configuration. Although a myriad of works have been carried out in the literature on optimally configuring Hadoop MapReduce, the vast majority of those works only consider offline and static configuration. Those approaches are clearly ineffective, as the load might change during execution, requiring the configuration parameters to be tuned again. In this work, we instead focus on dynamically and adaptively configuring Hadoop MapReduce by changing the system-level Maximum Application Master Resource in Percent (MARP) parameter on the fly. We show that adaptively tuning the MARP parameter yields a good trade-off between job parallelism and throughput. To achieve this, an optimal design, which we call Adaptive Parameter Tuning of Hadoop (APTH), is devised based on a novel variant of the Tsetlin Automata. Comprehensive experimental results show that the resources are optimally and appropriately utilized, resulting in better job parallelism and throughput. Furthermore, it is found that our APTH approach spends 47% less time on job execution as compared to the default configuration.
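The MARP parameter referenced in the abstract is exposed in Hadoop YARN's capacity scheduler as yarn.scheduler.capacity.maximum-am-resource-percent: it caps the fraction of cluster resources that Application Masters may occupy, and thereby bounds how many jobs can run concurrently, which is the source of the parallelism/throughput trade-off. The sketch below illustrates the general learning-automaton idea behind tuning such a parameter on the fly using a classical two-action Tsetlin automaton. It is a minimal illustration under assumed step sizes, bounds, and feedback signal, not the APTH variant devised in the paper.

```python
import random


class TsetlinMARPTuner:
    """Minimal two-action Tsetlin automaton that nudges the MARP value
    up or down. A plain textbook automaton for illustration, not the
    specific APTH variant proposed in the paper."""

    def __init__(self, depth=6, step=0.05, marp=0.10):
        self.depth = depth   # memory states per action
        self.state = depth   # states 1..depth -> "increase", depth+1..2*depth -> "decrease"
        self.step = step     # how much MARP changes per decision (assumed value)
        self.marp = marp     # current MARP value, kept within [0.05, 1.0] here

    def action(self):
        return "increase" if self.state <= self.depth else "decrease"

    def apply_action(self):
        delta = self.step if self.action() == "increase" else -self.step
        self.marp = min(1.0, max(0.05, self.marp + delta))
        return self.marp

    def update(self, rewarded):
        """Reward moves deeper into the current action's states;
        penalty moves toward the boundary and eventually switches action."""
        if self.action() == "increase":
            self.state = max(1, self.state - 1) if rewarded else self.state + 1
        else:
            self.state = min(2 * self.depth, self.state + 1) if rewarded else self.state - 1


def feedback():
    """Placeholder for a real signal, e.g. whether the last MARP change
    improved the measured parallelism/throughput trade-off."""
    return random.random() < 0.5


tuner = TsetlinMARPTuner()
for _ in range(20):
    new_marp = tuner.apply_action()
    # In a real cluster, new_marp would be written to
    # yarn.scheduler.capacity.maximum-am-resource-percent and the
    # scheduler queues refreshed before measuring the effect.
    tuner.update(rewarded=feedback())
print(f"final MARP estimate: {tuner.marp:.2f}")
```

The automaton's depth controls how much consistent evidence is needed before the tuner reverses direction; the reward signal and update rule used by the paper's APTH variant differ from this toy setup.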