IEEE Parallel & Distributed Technology: Systems & Applications (Latest Publications)

Solaris Multithreaded programming guide [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-23 DOI: 10.1109/M-PDT.1996.532146
J. Zalewski
{"title":"Solaris Multithreaded programming guide [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.532146","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.532146","url":null,"abstract":"chronous, and unbuffered and buffered, message passing. He upgrades algorithms studied in previous chapters so that mutual exclusion can be enforced on distributed systems. The chapter covers SR send and receive instructions, the powerful SR input (in) statement that implements extended rendezvous with two-way information flow, remote procedure calls, and client/ server programming. Example programs show that the SR runtime system buffers dynamically allocated virtual memory messages that are sent but not yet received, and that the SR runtime system’s process (thread) table is dynamically allocated. This SO-page chapter demonstrates S R s power in the distributed environment, and brings together and greatly augments all that has been learned in Chapters 1 through 5 . The chapter includes a useful summary of SR operations and their invocations, providing a good overview of the language. The chapter concludes with an Xtango color animation of the distributed dining philosophers program presented in The SR Language. The programs in Chapter 7 demonstrate SR’s effectiveness as a language for writing parallel programs that perform numerically intensive computations and that have processes that must synchronize or communicate relatively frequently. The chapter presents coarse-grained parallel SR programs that solve the N Queens problem and the dining philosophers problem on multiple machines. Other programs implement different patterns of communication between collections of processes and provide examples of data parallelism and master-worker organization. The SR language environment contains SRWin, an interface to the X-Windows graphics system. SRWin is a lower-level interface than Xtango is, and might be harder to use. T o complete the book, Hartley has written an SR resource that serves as an interface to Xtango so that its drawing and moving procedures can be called directly from an SR program. He also presents an animatlon of Quicksort using SRWin, so that the reader can compare the difference. Operatzng Systems Programmzng: The SR Language is a carefully and concisely written introduction to concurrent and parallel programming and to the SR language. I have used it successfully in my undergraduate and graduate Operating Systems and Parallel Programming courses for the past year. This unique book works well as the concurrent programming supplement to a standard course text such as Operatzng System Concepts, 4th E d , by Abraham Silberschatz and Peter Galmn, Addison-Wesley.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131606546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parallel digital implementations of neural networks [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-23 DOI: 10.1109/M-PDT.1996.532144
R. Tadeusiewicz
{"title":"Parallel digital improvements of neural networks [Book Reviews]","authors":"R. Tadeusiewicz","doi":"10.1109/M-PDT.1996.532144","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.532144","url":null,"abstract":"Neural networks have increased not only in the number of applications but also in complexity. This increase in complexity has created a tremendous need for computational power, perhaps more power than conventional scalar processors can deliver efficiently. Such processors are oriented toward numeric and data manipulation. Neurocomputing requirements (such as nonprogramming and learning) impose different constraints and demands on the computer architectures and on the structure of multicomputer systems. We need new neurocomputers, dedicated to neural networks applications. This is the scope of Parallel Digital Implementations of Neural Networks. T h e surge of interest in neural networks, which started in the mid-eighties, stemmed largely from advances in VLSI technology. But hardware implementations of neural networks are still not as popular as the software tools for neural network modeling, learning, and applications. Information on hardware neural network implementations is still too limited and exotic for many neural network users. This book fills an important gap for such users. Neural networks have recently become such a subject of great interest to so many scientists, engineers, and smdents that you can easily find many books and papers about implementations (for example, Analogue Neural VLSI, by A. Murray and L. Tarassenko, Chapman & Hall; Neurocomputers: An Overview o f Neural Networks in VLSI, by M. Glesner and W. Poechmueller, Chapman & Hall; and VLSIfor Neural Networks and Art-ificial Intelligence, byJ.G. Delgado-Frias and W.R. Moore, Plenum Press). However, this book is different. It is wellfocused; it does not discuss all forms of VLSI neural network implementations, but presents only the most interesting and most important: parallel digital implementations. No analog circuits, no serial architecrures, no computer models. Only digital devices (general-purpose processors, such as array processors and DSP chips, or dedicated systems such as neurocomputers or digital neurochips), and only parallel solutions. This narrow focus is good, because the digital implementations of neural networks provide advantages such as freedom from noise, programmability, higher precision, and reliable storage devices. The book has three main sections:","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122806205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multiprocessor system architectures [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-23 DOI: 10.1109/M-PDT.1996.532141
J. Zalewski
{"title":"Multiprocessor system architectures [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.532141","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.532141","url":null,"abstract":"reviewed by Junusz Zuleu:skz, Em b7y-Riddle Aeronautical [Jniversity This book, part of the SunSoft Press series, is subtitled \" A Technical Survey of Multi-processor/Multithreaded Systems Using Sparc, Multilevel Bus Architectures and Solaris (SunOS). \" So, it covers only computer systems from Sun Microsystem:s Computer Corporation. Its purpose is \" to bring together in one volume a coherent description of the elements that provide for the design and development of multiprocessor systems archi-tectures from Sun Microsystems. \" It assumes that the reader understands computer architecture. As the subtitle suggests, the book progresses smoothly from processor hardware and its implementations to bus architectures, to low-level programming that includes threads and lightweight processes, and to complete systems. The book starts with general material on multiprocessing and on using Sun implementations. Ben Catanzaro correctly observes that because of physical limitations in malung chips faster, system performance will depend more and more on advances in computer architecture and in operating systems technology. This clears the way to using multiple processors. He briefly explains symmetric multiprocessing (SMP), where each processor shares the kernel image in memory and can execute its code concurrently, and asym-metric multiprocessing (ASMP), based on a masterlslave relationship between participating processors. The book also outlines the Sun solution for SMP: Sparc-CPU modules equipped with caches tied to an interconnect bus, to which 110 subsystem and physical memory connect separately. Next, the book describes the Sparc architecture and its unique register window model, compares versions 7, 8, and 9 of the Sparc specifications, and outlines Sparc chip imple-._______________~_ ~-mentations, including a brief note on Ultra-Sparc. It then outlines the Sparc memory model, explaining the differences between total-store ordering and partial-store ordering , and describes the memory management unit in detail. The next major subject is bus architectures. MBus (fully specified in the 58-page appendix) is a processor-to-memory bus, optimized for high-speed connection of the Sparc-CPU modules to physical memory and special U 0 modules. Its Level 2 protocol provides for cache-coherent shared-memory multipro-cessing and supports six transactions (ordinary read/write and four transactions supporting cache coherence: coherent read, coherent invalidate, coherent read & invalidate, and coherent write & invalidate). Its basic characteristics include multiplexed address/control with 64 bits of data and 36 bits of physical addressing, centralized arbitration, and up to 128-byte burst transfers. 
A chapter on designing shared-memory multiprocessor systems with MBus provides many useful details regarding cache-coherence protocols (mostly, MBus implementation of a …","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"528 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124500175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
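For readers who want a concrete picture of what a write-invalidate coherence transaction does, the toy C model below tracks one cache's view of a single line as it snoops remote bus transactions. It is a minimal sketch of the general write-invalidate idea, not the MBus Level 2 protocol itself; the state names and transitions are assumptions made for the example.

/* Toy write-invalidate snooping model (illustrative only; not the MBus
 * Level 2 protocol). One cache's view of a single line. */
#include <stdio.h>

enum line_state { INVALID, SHARED, MODIFIED };

enum bus_txn {                 /* transactions observed on the bus */
    BUS_READ,                  /* another cache reads the line */
    BUS_READ_INVALIDATE,       /* another cache reads with intent to write */
    BUS_INVALIDATE             /* another cache upgrades its copy to writable */
};

/* Next state of our copy of the line after snooping a remote transaction. */
enum line_state snoop(enum line_state s, enum bus_txn t)
{
    switch (t) {
    case BUS_READ:
        /* A remote read forces a modified copy back to shared. */
        return (s == MODIFIED) ? SHARED : s;
    case BUS_READ_INVALIDATE:
    case BUS_INVALIDATE:
        /* Any remote write intent invalidates our copy. */
        return INVALID;
    }
    return s;
}

int main(void)
{
    enum line_state s = MODIFIED;
    s = snoop(s, BUS_READ);          /* MODIFIED -> SHARED */
    s = snoop(s, BUS_INVALIDATE);    /* SHARED   -> INVALID */
    printf("final state: %d\n", s);  /* prints 0 (INVALID) */
    return 0;
}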
Citations: 0
Programming with threads [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-23 DOI: 10.1109/M-PDT.1996.532148
J. Zalewski
{"title":"Programming with threads [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.532148","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.532148","url":null,"abstract":"and Posix, and with a discussion of barriers, events, and spin locks. They also briefly present such problems as deadlocks, race conditions, priority inversion, and reentrancy. The discussion of race conditions, in this part and in a later section, is very interesting, although I spotted one error. A variable doubled in one thread and decremented in another gives two different results, depending on the threads’ order of execution. Contrary to what the authors say, this is not a race condition but an ordinary design error. This is followed by a discussion of Posix calls not available in the Solaris thread library-that is, those related to thread attributes, thread cancellation, and scheduling policies. Next, Lewis and Berg describe several tools for multithreaded programming and offer some programming hints. The chapter on examples that follows is technically the moslinteresting part of the book, because of the level of details covered. Two of the appendixes present a very valuable list of all calls for the Solaris threads library and for Posix. T h e authors discuss each call individually, unlike most books on Unix, which just provide manpage (manual page) descriptions. The authors wrote Threads Primer: A Guide to Multithreaded Propamming as an introductory text to give experienced C/Unix programmers a solid understanding of multithreading fundamentals. The book achieves this goal, but lessexperienced programmers can also benefit from it. However, be warned: the less “technical” you are, the less you will gain.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"352 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121620196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reliable distributed computing with the Isis toolkit [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-23 DOI: 10.1109/M-PDT.1996.532142
F. Reynolds
{"title":"Reliable distributed computing with the Isis toolkit [Book Reviews]","authors":"F. Reynolds","doi":"10.1109/M-PDT.1996.532142","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.532142","url":null,"abstract":"with the /Sf5 Toolkit edited by Kenneth P Birman and Robbert Van Renesse 398 PP $50 IEEE Computer Society Press Los Alamtos, Calif 1994 ISBN 0-81 86-5342-6 features, barely mentioning the host-based and symmetric configurations and not mentioning direct virtual memory addressing, a feature unique among buses. The book also discusses SBus’s operation in a hierarchy with MBus. An outline follows of two other buses in a hierarchy, XBus and XDbus, developed jointly by Sun and Xerox. Both are packetswitched buses, which enable data-routing during transfer rather than before, unlike all other circuit-switched buses. XBus is primarily a chip interconnect; XDbus can be used at the chip, board, or backplane level. T o maintain multiprocessor cache coherence, XDbus provides a hardware protocol that is a generalization of the multicopy write-broadcast protocol. Other interesting features include use of Gunning Transceiver Logic (GTL) transceiver technology, a separate transaction (rather than dedicated lines) to transport interrupts, and full support for the SWAP synchronization primitive. Two chapters on software complement the material on Sun’s approach to symmetric multiprocessing. One discusses a general model o f a multithreaded architecture used in Solaris for threads, lightweight processes, and kernels. Another covers programming facilities and their use at the application level: mutexes, condition variables, semaphores, readedwriter locks, and signals. T h e book ends with a chapter on three Sun multiprocessor implementationsSparcServer 600MP, SparcCenter 2000, and SparcServer 1000--and with a chapter on future trends, the weakest in the whole book, because it’s very nontechnical and superficial. Multiprocessor System Architectures can serve as an overview of the Sun technology as well as a reference handbook jor designers of multiprocessor systems based on Sun machines. However, those who need details about particular subjects should refer to other publications, such as The Sparc Architecture Manual, edited by David L. Waever and Tom Germond (Prentice Hall); S B w Handbook, by Susan A. Mason (Prentice Hall); Solaris 2.X Intemzals and Architecturtz, by John R. Graham (McGraw-Hill); and Th:reads Primer: A Guide t o Multithreaded Programming, by Bil Lewis and Daniel J. Berg (Primtice Hall) (see the review on page 76 of this issue). My only other complaint is that this book unnecessarily uses sales language; it is too often hard to distinguish commercial propaganda from valuable technical information.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114350861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Operating systems programming: the SR programming language [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-23 DOI: 10.1109/M-PDT.1996.532145
G. Lippman
{"title":"Operating systems programming: the SR programming language [Book Reviews]","authors":"G. Lippman","doi":"10.1109/M-PDT.1996.532145","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.532145","url":null,"abstract":"Operating Systems Programming is a selfcontained guide to classic operating system problems, concurrent programming, and the Synchronizing Resources language. SR, based on C and Pascal, is very understandable to readers with programming knowledge. So, I recommend this book to students studying operating systems and to programmers interested in learning concurrent programming and studying these problems and their solutions in a readily accessible working language. (SR, developed a t the University of Arizona, is fully described in The SR Language, C o m ~ remy in Practice, by Gregory R. Andrews and Ronald A. Olsson [BenjamidCmmings]. For more information on SR, access http:// www.cs.arizona.edu/sr. The compiler and utilities, available by anonymous ftp at ftp:// ftp.cs.arizona.edu/sr, are readily installed on computer systems running Unix, such as a networked Sun system, or on PCs running Linux. Linux is also available by anonymous ftp, at ftp://sunsite.unc.edu/pub/Linux, or on C D ROM. For more information on Linux, access http://www.linux.org.) Stephen Hartley has skillfully woven together a description of the SR language and SR solutions of several classic OS problems, with emphasis on the mutual exclusion of concurrent processes, race conditions, critical sections, process synchronization, interprocess communication, and parallel computing. These solutions use semaphores, monitors, and message-passing techniques on singleand multiple-CPU computer systems. (The solutions are also available by anonymous ftp, to be compiled and run by the reader.) The book has seven chapters, followed by a list of the example programs and a bibliography. Each chapter contains descriptive information, SR programs for solving the OS problems, and laboratory exercises designed to extend these solutions. Chapter 1 reviews OS programming, hardware and software interrupts, hardware protection, and CPU scheduling. Chapter 2 presents SRs sequential features first, so that readers who have not previously written concurrent or parallel programs can see how closely SR resembles the languages they already know. Elementary programs for computing factorial, sorting, and string manipulation make the presentation very concrete. Hartley demonstrates how to use Unix command-line arguments in an SR program, and describes and uses the SR resource, which is effectively equivalent to the object or module in other languages. He then shows how to animate SR programs with the Xtango software system developed by John T. Stasko and Doug Hayes. Xtango has been implemented effcctively on Unixand Linux-based computers. (Xtango is available by anonymous ftp from Georgia Tech University at ftp.cc.gatech.edu/pub/people/stasko.) Chapter 3 introduces concurrent programming in which multiple processes manipulate shared data. T o preserve data integrity, solution of the critical section problem enforces mutual exclusion of the processes relative to this data. 
Hartley shows how several processes can ","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114607114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
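Hartley's SR solutions themselves are not reproduced in this listing, so the sketch below illustrates the critical-section idea the chapter addresses in C with POSIX semaphores rather than SR: a binary semaphore enforces mutual exclusion, so concurrent updates to shared data are not lost.

/* Mutual exclusion with a binary semaphore; a C/POSIX illustration of the
 * critical-section idea, not one of Hartley's SR solutions. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;            /* binary semaphore guarding the shared counter */
static long counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);      /* enter critical section */
        counter++;             /* shared update, safe under mutual exclusion */
        sem_post(&mutex);      /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);    /* initial value 1 => binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the semaphore */
    sem_destroy(&mutex);
    return 0;
}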
Citations: 0
Threads primer [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-23 DOI: 10.1109/M-PDT.1996.532147
J. Zalewski
{"title":"Threads primer [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.532147","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.532147","url":null,"abstract":"chronous, and unbuffered and buffered, message passing. He upgrades algorithms studied in previous chapters so that mutual exclusion can be enforced on distributed systems. The chapter covers SR send and receive instructions, the powerful SR input (in) statement that implements extended rendezvous with two-way information flow, remote procedure calls, and client/ server programming. Example programs show that the SR runtime system buffers dynamically allocated virtual memory messages that are sent but not yet received, and that the SR runtime system’s process (thread) table is dynamically allocated. This SO-page chapter demonstrates S R s power in the distributed environment, and brings together and greatly augments all that has been learned in Chapters 1 through 5 . The chapter includes a useful summary of SR operations and their invocations, providing a good overview of the language. The chapter concludes with an Xtango color animation of the distributed dining philosophers program presented in The SR Language. The programs in Chapter 7 demonstrate SR’s effectiveness as a language for writing parallel programs that perform numerically intensive computations and that have processes that must synchronize or communicate relatively frequently. The chapter presents coarse-grained parallel SR programs that solve the N Queens problem and the dining philosophers problem on multiple machines. Other programs implement different patterns of communication between collections of processes and provide examples of data parallelism and master-worker organization. The SR language environment contains SRWin, an interface to the X-Windows graphics system. SRWin is a lower-level interface than Xtango is, and might be harder to use. T o complete the book, Hartley has written an SR resource that serves as an interface to Xtango so that its drawing and moving procedures can be called directly from an SR program. He also presents an animatlon of Quicksort using SRWin, so that the reader can compare the difference. Operatzng Systems Programmzng: The SR Language is a carefully and concisely written introduction to concurrent and parallel programming and to the SR language. I have used it successfully in my undergraduate and graduate Operating Systems and Parallel Programming courses for the past year. This unique book works well as the concurrent programming supplement to a standard course text such as Operatzng System Concepts, 4th Ed , by Abraham Silberschatz and Peter Galmn, Addison-Wesley.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"487 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123191641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DCE: A guide to developing portable applications [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-23 DOI: 10.1109/M-PDT.1996.532143
E. Sorton
{"title":"DCE: A guide to developing portable applications [Book Reviews]","authors":"E. Sorton","doi":"10.1109/M-PDT.1996.532143","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.532143","url":null,"abstract":"years these members have written about their research's technical details and its problem domain or context. Consequently , Birman and Van Renesse were able to select from a rich body of work. The book has 2 1 chapters, which are divided into four sections. The \" Fundamentals \" section introduces the problems Isis is intended to deal with and the Isis approach's general nature. This section defines and discusses at length the virtual synchrony programming model of distributed systems. Two chapters deal with controversies. One argues RPC's inadequacy as a, tool for constructing reliable distributed systems; the other defends the utility of causally ordered group communication. (Readers interested in the honorable opposition's side of the second controversy should read \" Understanding the Limitations of Causally and Totally Ordered Communication, \" by David Cheriton and Dale Skeen, in the 1991 Proceedings of the Symposium on Operating Systems Principles, ACM Press.) \" Redesign, \" the second section, describes the motivation, design, and new research initiatives of Horus. When the book was being written, Horus was very much a work in progress. Nevertheless , this section's chapters capture the spirit of Horus's design, the direction of the ongoing research, and many of the lessons learned during the development of the original Isis Toolkit. The \" Protocol \" section contains chapters detailing the key group-communication and fault-detection protocols on which Isis and Horus are built. These are among the most technically challenging chapters. Readers who are \" notation averse \" might be inclined to skip Chapters 12,13, and 14. I would encourage those who are interested in more than a superficial understanding of how the system works to persevere. As is often the case when dealing with problems associated with distributed consensus, the Isis protocols are not unduly complex, but are in some ways quite subtle. These chapters present the material carefully and, for the most part, straightforwardly. The final section, \" Tools and Applications , \" describes a fairly broad range of applications that have been built with Isis. Meta is a toolht for constructing distributed reactive systems, which include process-control systems. T h e Paralex programming environment is intended to simplify designing and building parallel, distributed programs. The M I S query and reporting system was built for the World Bank's Planning and Budgeting Department. Distributed M L provides distributed computing extensions to standard M L (metalanguage) programming language. Each chapter explains …","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125299264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multiprocessor Performance Measurement and Evaluation [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-22 DOI: 10.1109/M-PDT.1996.494612
J. Zalewski
{"title":"Multiprocessor Performance Measurement and Evaluation [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.494612","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.494612","url":null,"abstract":"All three books are collections of articles on related subjects that were previously published, mostly in IEEE Computer Society publications. They appear in an unnamedalbeit known for about a decade and highly rated-series of IEEE tutorials. The books have very similar contents; therefore, their joint review seems appropriate. Interconnection Networksfor Multiprocessors and Multicomputers has 10 chapters and over 50 articles, including chapter introductions. The first chapter, written by the editors, introduces the entire book and gives a proper perspective on its contents. Four subsequent chapters discuss interconnections from the point of view of their topologies. In particular, there are articles on Clos and Benes networks, multistage networks, buses, and crossbars. Although it is hard to distinguish among the articles in this collection and point to the one of particular value, I must confess that I read with great pleasure Leiserson’s 10-yearold article on fat trees. Although the chapters just mentioned discuss individual properties of various topologies, the next three chapters specifically address general properties of interconnection networks. These properties include routing (to provide required functionality), reliability, and performance. I took a closer look at the chapter on “Fault-tolerance and Reliability.” As the editors point out, an interconnecoon network‘s ability to avoid failures’is usually measured as its reliability or availability. A network achieves high reliability or availability normally through some form of fault tolerance. Thus, fault tolerance, in the form of various kinds of redundancy (in space or time), is the major subject of all the articles ii chapter, which provide a reasonably com coverage of the most important issues. I have mixed feelings about the las chapters of the book: one on algorithm applications, and one that includes case ies. The first attempts to cover sut related to designing applications and rithms for parallel machines. This ai broad enough to take at least another volume (such as Introduction to Parallel rzthms and Architectures, by F.T. Leigl Morgan Kaufmann, 1992), so providing approximate coverage in one chapter 1 definition, impossible. However, the ch presenting case studies is reasonably plete and includes articles on several res1 machines, as well as on those once com cially available. In summary, this book is a good vol providing a wealth of valuable informatic theoretical aspects of interconnecting n ple processors. I commend the editor writing comprehensive introductions I chapters, a custom less and less commc this series of IEEE tutorials. On the nee side, the book doesn’t even mention ce important topics such as cache coherencl newer solutions such as ATM, but such r rial is probably suited for other volumes no one can cover everything importani single book like this. The second book, Interconnectaon Nefi fir High-Pe$omzance Parallel Computers, i! 
prisingly similar, not only by ","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128352124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Distributed Systems: Concepts and Design [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications Pub Date : 1996-01-22 DOI: 10.1109/M-PDT.1996.494609
J. Madey
{"title":"Distributed Systems: Concepts and Design [Book Reviews]","authors":"J. Madey","doi":"10.1109/M-PDT.1996.494609","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.494609","url":null,"abstract":"1, performance. An outline of h design and a prelction of future trends follow In conclusion, the authors make the very valid statement that (‘the future of generalpurpose, hgh-performance multiprocessing belongs to SSMPs. . . . Their obvious advantages in ease of use, performance, and costperformance will make them the clear winner over other alternatives.” I do have one minor criticism: it seems that the list of references is erroneous. For example, the reference LLG] appears before bi], and unreachable enmes are k e d , such as “T.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123474855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1