Scheduling and page migration for multiprocessor compute servers

ASPLOS VI · Published: 1994-11-01 · DOI: 10.1145/195473.195485
Rohit Chandra, Scott Devine, Ben Verghese, Anoop Gupta, M. Rosenblum
Citations: 166

Abstract

Several cache-coherent shared-memory multiprocessors have been developed that are scalable and offer a very tight coupling between the processing resources. They are therefore quite attractive for use as compute servers for multiprogramming and parallel application workloads. Process scheduling and memory management, however, remain challenging due to the distributed main memory found on such machines. This paper examines the effects of OS scheduling and page migration policies on the performance of such compute servers. Our experiments are done on the Stanford DASH, a distributed-memory cache-coherent multiprocessor. We show that for our multiprogramming workloads consisting of sequential jobs, the traditional Unix scheduling policy does very poorly. In contrast, a policy incorporating cluster and cache affinity along with a simple page-migration algorithm offers up to two-fold performance improvement. For our workloads consisting of multiple parallel applications, we compare space-sharing policies that divide the processors among the applications to time-slicing policies such as standard Unix or gang scheduling. We show that space-sharing policies can achieve better processor utilization due to the operating point effect, but time-slicing policies benefit strongly from user-level data distribution. Our initial experience with automatic page migration suggests that policies based only on TLB miss information can be quite effective, and useful for addressing the data distribution problems of space-sharing schedulers.
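The abstract's closing observation, that migration policies driven only by TLB miss information can be effective, can be illustrated with a minimal sketch. This is not the paper's algorithm; the class name, threshold value, and first-touch placement rule are all assumptions chosen for illustration. The idea is simply that each page counts TLB misses per node, and when misses from a remote node accumulate past a threshold, the page migrates toward them.

```python
from collections import defaultdict

MIGRATE_THRESHOLD = 8  # remote misses before migrating (assumed value)


class PageMigrator:
    """Sketch of a TLB-miss-driven page migration policy.

    Each page tracks which node's TLB misses touch it; once misses
    from a remote node reach the threshold, the page moves there.
    """

    def __init__(self):
        self.home = {}  # page -> current home node
        self.miss_counts = defaultdict(lambda: defaultdict(int))

    def record_tlb_miss(self, page, node):
        """Record a miss; return the new home node if the page migrated."""
        if page not in self.home:
            self.home[page] = node  # first touch places the page locally
            return None
        self.miss_counts[page][node] += 1
        if (node != self.home[page]
                and self.miss_counts[page][node] >= MIGRATE_THRESHOLD):
            self.home[page] = node          # migrate toward the misses
            self.miss_counts[page].clear()  # restart counting after a move
            return node
        return None
```

A space-sharing scheduler that leaves an application's pages on the wrong node would, under a policy like this, see those pages drift toward the processors actually touching them, which is how TLB-miss-only migration can address the data distribution problem the abstract mentions.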