Scalable Shared-Memory Multiprocessing [Book Reviews]

J. Zalewski
{"title":"Scalable Shared-Memory Multiprocessing [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.494608","DOIUrl":null,"url":null,"abstract":"~ This book is primarily devoted to Dash (Directorykchitecture for Shared Memory), a multiprocessor system known from earlier publications (see “The Stanford Dash Multiprocessor,” Computer, Mar. 1992, Vol. 3 5 , No. 3 , pp. 63-79). The book also provides readers with a comprehensive view of modem multiprocessing, as it describes where the technology is actually heading. The major issue in multiprocessor architectures is communication: how multiple processors communicate with each other. Not so long ago, buses were the major component tying various computational pieces together. Multiple processors used a bus to access common memory or to communicate with separate memories, which caused a communication bottleneck. Strictly speaking, the problems started when users wanted to extend existing systems with several processors to much larger aggregates of dozens or even hundreds of processing units. In such cases, even hierarchically organized buses began to saturate, and designers faced a scalability barrier. Moving from a bus to a point-to-point network was an immediate solution, but then old problems persisted and new ones arose, such as cache coherence. One approach was to maintain shared memory (common address space) along the bus or across the network, without cache coherence. Another relied on message passing, but in both cases the memory latency problem emerged. Technological developments soon made possible widespread use of caches, and then other problems started. Maintaining cache coherence across the bus (let alone the entire network) is not trivial, and most designers lost their hair before coming up with satisfactory solutions. This book is a concentrated effort to address such problems and provide a solution to maintain cache coherence across the pointto-point network of multiple processors. The authors call it scalable shared-memory multiprocessing (SSMP). The book’s three parts are General Concepts, Experience with Dash, and Future Trends. The first is the most interesting. It is mainly a histarical perspective on multiprocessor systems. The book first discusses scalability problems in detail, concluding that hardware cache coherence is a key to high performance. T o ensure scalability, one must apply point-topoint interconnections (as opposed to a bus) and base cache coherence on directory schemes. Scalability has three dimensions: How does the performance scale? That is, what speedup (in terms of execution time) can we achieve by using Nprocessors over a single processor for the same problem? How does the cost scale when more processors are added? What is the largest number of processors for which multiprocessing rather than uniprocessing is still advantageous? 
That is, what is the range of scalability?","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"106 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Parallel & Distributed Technology: Systems & Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/M-PDT.1996.494608","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This book is primarily devoted to Dash (Directory Architecture for Shared Memory), a multiprocessor system known from earlier publications (see "The Stanford Dash Multiprocessor," Computer, Vol. 25, No. 3, Mar. 1992, pp. 63-79). The book also provides readers with a comprehensive view of modern multiprocessing, as it describes where the technology is actually heading.

The major issue in multiprocessor architectures is communication: how multiple processors communicate with each other. Not so long ago, buses were the major component tying various computational pieces together. Multiple processors used a bus to access common memory or to communicate with separate memories, which caused a communication bottleneck. Strictly speaking, the problems started when users wanted to extend existing systems with several processors to much larger aggregates of dozens or even hundreds of processing units. In such cases, even hierarchically organized buses began to saturate, and designers faced a scalability barrier. Moving from a bus to a point-to-point network was an immediate solution, but then old problems persisted and new ones arose, such as cache coherence. One approach was to maintain shared memory (a common address space) along the bus or across the network, without cache coherence. Another relied on message passing, but in both cases the memory latency problem emerged. Technological developments soon made possible the widespread use of caches, and then other problems started. Maintaining cache coherence across the bus (let alone the entire network) is not trivial, and most designers lost their hair before coming up with satisfactory solutions.

This book is a concentrated effort to address such problems and provide a solution that maintains cache coherence across a point-to-point network of multiple processors. The authors call it scalable shared-memory multiprocessing (SSMP).

The book's three parts are General Concepts, Experience with Dash, and Future Trends. The first is the most interesting. It is mainly a historical perspective on multiprocessor systems. The book first discusses scalability problems in detail, concluding that hardware cache coherence is a key to high performance. To ensure scalability, one must apply point-to-point interconnections (as opposed to a bus) and base cache coherence on directory schemes. Scalability has three dimensions. How does the performance scale? That is, what speedup (in terms of execution time) can we achieve by using N processors over a single processor for the same problem? How does the cost scale when more processors are added? What is the largest number of processors for which multiprocessing rather than uniprocessing is still advantageous? That is, what is the range of scalability?
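The speedup question is commonly made concrete with Amdahl's law. The short Python illustration below is not drawn from the book; it is a standard textbook-style sketch of why the range of scalability is bounded whenever any fraction of the work remains serial.

```python
# Amdahl's-law illustration of the speedup question (not from the book):
# if a fraction s of the work is serial, N processors give at most
#     speedup(N) = 1 / (s + (1 - s) / N)

def speedup(serial_fraction: float, n_processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for n in (1, 8, 64, 512):
    print(n, round(speedup(0.05, n), 1))

# With even 5% serial work, the achievable speedup flattens toward
# 1 / 0.05 = 20, no matter how many processors are added.
```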
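The directory schemes the review refers to can also be pictured with a short sketch. The Python fragment below is not from the book and does not reproduce Dash's actual protocol; it is a minimal illustration, under simplified assumptions (one entry per memory block, a full sharer set, and only three states), of the bookkeeping a directory keeps so that invalidations can be sent point-to-point instead of broadcast on a bus.

```python
# Minimal sketch of one directory entry for directory-based cache coherence.
# Illustrative only: state names and structure are simplified assumptions,
# not the actual Dash protocol.

from dataclasses import dataclass, field

@dataclass
class DirectoryEntry:
    """Bookkeeping for a single memory block."""
    state: str = "uncached"                         # "uncached", "shared", or "exclusive"
    sharers: set[int] = field(default_factory=set)  # ids of nodes caching the block

    def read(self, node: int) -> None:
        """A node reads the block: it becomes (or stays) shared."""
        # (A real protocol would first force a write-back from an exclusive owner.)
        self.sharers.add(node)
        self.state = "shared"

    def write(self, node: int) -> list[int]:
        """A node writes the block: return the nodes that must be invalidated."""
        to_invalidate = [n for n in self.sharers if n != node]
        self.sharers = {node}
        self.state = "exclusive"
        return to_invalidate

# Example: nodes 0 and 2 read the block, then node 1 writes it.
entry = DirectoryEntry()
entry.read(0)
entry.read(2)
print(entry.write(1))   # nodes 0 and 2: invalidations go only to recorded sharers
```

The point of the sketch is the last line: because the directory records exactly which nodes hold a copy, invalidations travel as point-to-point messages to those nodes alone rather than being broadcast and snooped on a bus, which is what lets coherence scale with the interconnection network.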