Heterogeneous MacroTasking (HeMT) for Parallel Processing in the Cloud

Y. Shan, G. Kesidis, Aman Jain, B. Urgaonkar, J. Khamse-Ashari, I. Lambadaris
DOI: 10.1145/3429885.3429962
Published in: Proceedings of the 2020 6th International Workshop on Container Technologies and Container Clouds, 2020-12-07
Citations: 0

Abstract

Using tiny tasks (microtasks) has long been regarded as an effective way of load balancing in parallel computing systems. When combined with containerized execution nodes pulling in work upon becoming idle, microtasking has the desirable property of automatically adapting its load distribution to the processing capacities of participating nodes: more powerful nodes finish their work sooner and, therefore, pull in additional work faster. As a result, microtasking is deemed especially desirable in settings with heterogeneous processing capacities and poorly characterized workloads. However, microtasking does incur additional scheduling and I/O overheads that may make it costly in some scenarios. Moreover, the optimal task size generally needs to be learned. We herein study an alternative load balancing scheme, Heterogeneous MacroTasking (HeMT), wherein the workload is intentionally skewed according to the nodes' processing capacities. We implemented and open-sourced a prototype of HeMT within the Apache Spark application framework and conducted experiments using the Apache Mesos cluster manager. We show experimentally that when workload-specific estimates of nodes' processing capacities are learned, Spark with HeMT offers up to 10% shorter average completion times for realistic, multistage data-processing workloads over the baseline Homogeneous microTasking (HomT) system.
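The core idea of HeMT, skewing the work assignment in proportion to each node's estimated processing capacity rather than handing out many uniform microtasks, can be sketched as follows. This is an illustrative example, not the authors' Spark implementation; the function name and the assumption that per-node capacity estimates are already available are hypothetical.

```python
# Illustrative HeMT-style allocation: split a batch of work units across
# nodes in proportion to their estimated processing capacities, using
# largest-remainder rounding so the sizes sum exactly to the total.

def hemt_partition(total_work: int, capacities: list[float]) -> list[int]:
    """Return one macrotask size per node, proportional to capacity."""
    total_cap = sum(capacities)
    # ideal (fractional) share for each node
    shares = [total_work * c / total_cap for c in capacities]
    sizes = [int(s) for s in shares]
    remainders = [s - n for s, n in zip(shares, sizes)]
    # hand the leftover units to the nodes with the largest remainders
    leftover = total_work - sum(sizes)
    for i in sorted(range(len(capacities)), key=lambda i: -remainders[i])[:leftover]:
        sizes[i] += 1
    return sizes

# Example: three nodes, the first estimated twice as fast as the others.
print(hemt_partition(100, [2.0, 1.0, 1.0]))  # → [50, 25, 25]
```

Under homogeneous microtasking, by contrast, every node would pull equal-sized tiny tasks and the skew would emerge only implicitly from idle nodes requesting work more often, at the cost of extra per-task scheduling and I/O overhead.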