Load Balancing

R. McConnell
{"title":"Load Balancing","authors":"R. McConnell","doi":"10.1109/EMPDP.1994.592502","DOIUrl":null,"url":null,"abstract":"As the price/performance ratio of parallel computers continues to fall many applications developers such as scientist and engineers with large computational problems are looking to non von Neuman computer technology to provide powerful throughput platforms. While the hardware technology behind such platforms are well matured the software environments do not provide the functionality necessary to allow ordinary applications developers to utilize them efficiently. One particular problem, which requires the development of new and innovative techniques, is the mapping of the work of a potentially concurrent computation to the processors of a multiprocessor system. Adjusting this mapping in order to complete the workload in the minimum possible time (i.e. to share the workload among the processors evenly and minimize inter processor communication) is known as load balancing. Deciding on the mapping before execution based on compile time information is known as static load balancing while adjust the mapping during the execution (via process migration) is known as dynamic load balancing. The papers in this session incorporate techniques for mapping processes both before and during execution in order to maintain an effective load balance. However one of the papers deals with the distributed computing environment while the other is in the area of object oriented programming environments for multiprocessors. The first paper in this session, entitled “The Efficient Management of Task Clusters in a Dynamic Load Balancer”, describes work being carried out on load balancing of multiuser disthbuted systems. A novel technique is proposed which deal with groups of subtasks, known as task clusters rather than single task units. This provides the advantage of letting the user submit several tasks, in a script type format, to the load balancer. The load balancer, which consists of a load manager running on each node in the system, can then distribute the subtasks across the nodes thus executing the task cluster in parallel. Allowing task clusters to be submitted to the load balancing system gives increase efficiency over submitting tasks separately. Currently two strategies for task cluster management are being considered by the authors. The two alternatives are based on extensions to either a bidding strategy or a probing strategy. The paper will compare the use of these two options. In addition the load balancing scheme has been implemented across a network of workstations and performance results from experiments which compare the scheme described in the paper with an old scheme will be included The second paper which is entitled “The Benefits of Migration in a Parallel Objects Programming Environment” deals with load balancing of distributed memory multiprocessors used for object oriented programming. The parallel objects environment is based on the active object model. Parallel object applications can be highly dynamic as new objects and new threads of execution within objects can be created at run time. A scheme which automatically handles the load balancing of parallel object applications, using both static and dynamic techniques, is presented. Objects available at the start of the execution are grouped in clusters according to communication between them and then allocated to processors so that each has approximately the same computation and memory load. 
This is optimized using a branch and bound algorithm. During execution each node has a monitoring manager, allocation manager and creation manager as well as a router which handles the placement of new objects as well as the migration of existing objects. The paper compares the performance of parallel objects applications with and without the load balancing system running on a Meiko computing surface.","PeriodicalId":92432,"journal":{"name":"Proceedings. Euromicro International Conference on Parallel, Distributed, and Network-based Processing","volume":"5 1","pages":"42-"},"PeriodicalIF":0.0000,"publicationDate":"1994-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Euromicro International Conference on Parallel, Distributed, and Network-based Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EMPDP.1994.592502","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 39

Abstract

As the price/performance ratio of parallel computers continues to fall, many application developers, such as scientists and engineers with large computational problems, are looking to non-von Neumann computer technology to provide powerful throughput platforms. While the hardware technology behind such platforms is well matured, the software environments do not provide the functionality necessary for ordinary application developers to use them efficiently. One particular problem, which requires the development of new and innovative techniques, is the mapping of the work of a potentially concurrent computation onto the processors of a multiprocessor system. Adjusting this mapping so that the workload completes in the minimum possible time (i.e. sharing the workload evenly among the processors and minimizing inter-processor communication) is known as load balancing. Deciding on the mapping before execution, based on compile-time information, is known as static load balancing, while adjusting the mapping during execution (via process migration) is known as dynamic load balancing. The papers in this session incorporate techniques for mapping processes both before and during execution in order to maintain an effective load balance. One of the papers deals with distributed computing environments, while the other addresses object-oriented programming environments for multiprocessors.

The first paper in this session, entitled “The Efficient Management of Task Clusters in a Dynamic Load Balancer”, describes work being carried out on load balancing of multiuser distributed systems. A novel technique is proposed which deals with groups of subtasks, known as task clusters, rather than single task units. This lets the user submit several tasks, in a script-type format, to the load balancer. The load balancer, which consists of a load manager running on each node in the system, can then distribute the subtasks across the nodes, executing the task cluster in parallel. Allowing task clusters to be submitted to the load balancing system gives increased efficiency over submitting tasks separately. The authors are currently considering two strategies for task cluster management, based on extensions to either a bidding strategy or a probing strategy, and the paper compares the use of these two options. In addition, the load balancing scheme has been implemented across a network of workstations, and performance results from experiments comparing the scheme described in the paper with an earlier scheme are included.
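As an illustration only (the paper itself gives no code), the sketch below shows one way a bidding-style placement of a task cluster could work: each node's load manager returns a bid reflecting its current load, and each subtask in the cluster is assigned to the lowest bidder. The names `Node`, `bid`, and `place_cluster`, and the cost figures, are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical node: its load manager reports a bid based on current load."""
    name: str
    load: float = 0.0                       # running estimate of assigned work
    tasks: list = field(default_factory=list)

    def bid(self) -> float:
        # Simplest possible bid: the node's current load.
        return self.load

def place_cluster(cluster, nodes):
    """Assign each subtask in a task cluster to the node with the lowest bid.

    `cluster` is a list of (task_name, cost) pairs, e.g. the lines of a
    script-style submission; `nodes` are the participating workstations.
    """
    for task, cost in cluster:
        winner = min(nodes, key=lambda n: n.bid())   # lowest bid wins
        winner.tasks.append(task)
        winner.load += cost
    return {n.name: n.tasks for n in nodes}

if __name__ == "__main__":
    nodes = [Node("ws1"), Node("ws2"), Node("ws3")]
    cluster = [("compile", 3.0), ("simulate", 5.0), ("plot", 1.0), ("archive", 1.0)]
    print(place_cluster(cluster, nodes))
```

Submitting the whole cluster at once, as above, lets the balancer spread the subtasks across nodes in a single pass rather than negotiating each task separately.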
The second paper, entitled “The Benefits of Migration in a Parallel Objects Programming Environment”, deals with load balancing of distributed-memory multiprocessors used for object-oriented programming. The parallel objects environment is based on the active object model. Parallel object applications can be highly dynamic, since new objects, and new threads of execution within objects, can be created at run time. A scheme is presented that automatically handles the load balancing of parallel object applications using both static and dynamic techniques. Objects available at the start of execution are grouped into clusters according to the communication between them and then allocated to processors so that each processor carries approximately the same computation and memory load; this allocation is optimized using a branch-and-bound algorithm. During execution, each node has a monitoring manager, an allocation manager and a creation manager, as well as a router that handles the placement of new objects and the migration of existing objects. The paper compares the performance of parallel objects applications with and without the load balancing system running on a Meiko Computing Surface.
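For the static phase of the second paper's scheme, a minimal sketch of the general idea follows: objects that communicate heavily are merged into clusters, and clusters are then placed on processors so that loads stay roughly even. The paper optimizes the allocation with branch and bound; the greedy "largest cluster to least-loaded processor" rule below only stands in for that step, and all names, thresholds, and cost values are hypothetical.

```python
from itertools import combinations

def cluster_objects(costs, comm, threshold):
    """Greedily merge objects that communicate above `threshold` into clusters.

    `costs` maps object -> estimated computation cost;
    `comm` maps frozenset({a, b}) -> communication volume between a and b.
    Returns a list of clusters (sets of objects). Illustrative only.
    """
    clusters = [{obj} for obj in costs]
    merged = True
    while merged:
        merged = False
        for c1, c2 in combinations(clusters, 2):
            volume = sum(comm.get(frozenset({a, b}), 0) for a in c1 for b in c2)
            if volume >= threshold:
                clusters.remove(c1)
                clusters.remove(c2)
                clusters.append(c1 | c2)
                merged = True
                break
    return clusters

def allocate(clusters, costs, n_procs):
    """Assign clusters to processors, keeping per-processor load roughly even.

    The paper optimizes this allocation with branch and bound; the greedy
    rule here (biggest cluster to the least-loaded processor) only sketches the goal.
    """
    loads = [0.0] * n_procs
    placement = [[] for _ in range(n_procs)]
    for cluster in sorted(clusters, key=lambda c: -sum(costs[o] for o in c)):
        p = loads.index(min(loads))              # least-loaded processor
        placement[p].append(cluster)
        loads[p] += sum(costs[o] for o in cluster)
    return placement, loads

if __name__ == "__main__":
    costs = {"a": 4, "b": 2, "c": 3, "d": 1}
    comm = {frozenset({"a", "b"}): 10, frozenset({"c", "d"}): 8}
    clusters = cluster_objects(costs, comm, threshold=5)
    print(allocate(clusters, costs, n_procs=2))
```

Dynamic adjustment, which the paper handles through per-node managers and object migration at run time, is not shown here.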