{"title":"Dynamic, object-oriented parallel processing","authors":"A. Grimshaw, W. Strayer, P. Narayan","doi":"10.1109/88.218174","DOIUrl":"https://doi.org/10.1109/88.218174","url":null,"abstract":"Mentat, a dynamic, object-oriented parallel-processing system that provides tools for constructing portable, medium-grain parallel software by combining an object-oriented approach with an underlying layered virtual-machine model, is described. Mentat's three primary design objectives-high performance through parallel execution, easy parallelism, and software portability across a wide range of platforms-are reviewed. The performance of four applications of Mentat on two platforms-a 32-node Intel iPSC/2 hypercube and a network of 16 Sun IPC Sparcstations-is examined. The applications are DNA and protein sequence comparison, image convolution, Gaussian elimination with partial pivoting, and sparse matrix-vector multiplication. The performance of Mentat in these applications is compared to that of object-oriented parallel-processing systems, compiler-based distributed-memory systems, portable parallel-processing systems, and hand-coded implementations of the same applications.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122443762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing a superscalar machine to run vector code","authors":"S. Weiss","doi":"10.1109/88.218177","DOIUrl":"https://doi.org/10.1109/88.218177","url":null,"abstract":"A streamlined vector architecture and the IBM superscalar RISC System/6000 are discussed. It is shown, step-by-step, how each handles the same program. The factors that let vector machines outperform the RS/6000 are identified. Several extensions to the RS/6000 architecture that could help it attain vector-level performance on code with long vectors are proposed.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122376331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The extensible services switch in Carnot","authors":"C. Tomlinson, P. Cannata, G. Meredith, D. Woelk","doi":"10.1109/88.218171","DOIUrl":"https://doi.org/10.1109/88.218171","url":null,"abstract":"The Carnot project for developing a flexible framework for integrating heterogeneous information resources and applications, both within and among organizations, is reviewed. The effective use of such systems requires a way to flexibly and efficiently orchestrate related tasks on far-flung computing systems. A central component of the Carnot project, the extensible services switch (ESS), which provides interpretive access to applications and to communications and information resources at distributed sites, is discussed. The ESS is described as essentially a programmable glue that enhances interoperability by binding software components to one another.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123637690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallax: a tool for parallel program scheduling","authors":"T. Lewis, H. El-Rewini","doi":"10.1109/88.218176","DOIUrl":"https://doi.org/10.1109/88.218176","url":null,"abstract":"Parallax, a scheduling tool that incorporates seven traditional and nontraditional scheduling heuristics and lets developers compare their performance for real applications on real parallel machines, is discussed. Of the seven heuristics, two simple ones consider only task execution time, two consider both task execution and message-passing delay times, two use task duplication to reduce communication delay, and one considers communication delays, task execution time, and target machine characteristics such as interconnection network topology and overhead due to message-passing and process creation. Two examples of parallel applications of Parallax are described.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131681087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Abstraction and modularity mechanisms for concurrent computing","authors":"G. Agha, Svend Frølund, Wooyoung Kim, R. Panwar, A. Patterson, D. Sturman","doi":"10.1109/88.218170","DOIUrl":"https://doi.org/10.1109/88.218170","url":null,"abstract":"The Actor model programming language concept, which provides basic building blocks for a wide variety of computational structures, is reviewed. The Actor model unifies objects and concurrency. Actors are autonomous, distributed, concurrently executing objects that can send each other messages asynchronously. The Actor model's communication abstractions and object-oriented design are discussed. Three mechanisms for developing modular and reusable components for concurrent systems are also discussed. The mechanisms are synchronizers, modular specifications of resource management policies, and protocol customization for dependability.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130612303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementing concurrent object-oriented languages on multicomputers","authors":"A. Yonezawa, S. Matsuoka, M. Yasugi, K. Taura","doi":"10.1109/88.218175","DOIUrl":"https://doi.org/10.1109/88.218175","url":null,"abstract":"The implementations of ABCL (an object-based concurrent language) on two different types of multicomputers-Electrotechnical Laboratories' EM-4 extended dataflow computer, and Fujitsu's experimental AP1000-are described. ABCL/EM-4 takes advantage of that machine's packet-driven architecture to achieve very good preliminary performance results. The AP1000 does not have special hardware support for message passing, so ABCL/AP1000 includes several software technologies that are general enough for conventional parallel or concurrent languages, again yielding promising performance. It is concluded that the results demonstrate the viability of attaining good performance with concurrent object-oriented languages on current multicomputers, whether experimental or commercial.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130507110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ROME: distributing C++ object systems","authors":"S. Burleigh","doi":"10.1109/88.218172","DOIUrl":"https://doi.org/10.1109/88.218172","url":null,"abstract":"The remote objects message exchange (ROME), which provides C++ programmers with a simple, highly portable, immediately usable mechanism for distributing application objects across an arbitrary collection of processors, is discussed. ROME comprises a protocol for communication among C++ objects, a programming infrastructure that implements this protocol, and an application programming interface (API) to that infrastructure.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121232251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The concurrent supercomputing consortium: Year 1","authors":"P. Messina","doi":"10.1109/88.219855","DOIUrl":"https://doi.org/10.1109/88.219855","url":null,"abstract":"Since 1991, the California Institute of Technology has operated a massively parallel computer system on behalf of the Concurrent Supercomputing Consortium (CSCC). The computer system is a distributed-memory multiple-instruction multiple-data (MIMD) system, the nodes of which are connected in a two-dimensional mesh by mesh-routing chips. The system's file server, portability, acceptance tests, mode of operation, and national network connections are described. The hardware and software issues of the Delta system are discussed.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126428225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wrestling the future from the past: the transition to parallel computing","authors":"W. Hillis","doi":"10.1109/88.219854","DOIUrl":"https://doi.org/10.1109/88.219854","url":null,"abstract":"The author presents his view on the future of distributed and parallel computing. He touches upon the topics of computational theory, computer languages, operating systems, databases, architecture, and applications.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132806903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NAS parallel benchmark results","authors":"D. Bailey, E. Barszcz, L. Dagum, H. Simon","doi":"10.1109/88.219861","DOIUrl":"https://doi.org/10.1109/88.219861","url":null,"abstract":"Benchmark results for the Numerical Aerodynamic Simulation (NAS) Program at NASA Ames Research Center, which is dedicated to advancing the science of computational aerodynamics, are presented. The benchmark performance results are for the Y-MP, Y-MP EL, and C-90 systems from Cray Research; the TC2000 from Bolt Beranek and Newman; the Gamma iPSC/860 from Intel; the CM-2, CM-200, and CM-5 from Thinking Machines; the CS-1 from Meiko Scientific; the MP-1 and MP-2 from MasPar Computer; and the KSR-1 from Kendall Square Research. The results for the MP-1 and -2, the KSR-1, and the CM-5 have not been published before. Many of the other results are improved from previous listings, reflecting improvements both in compilers and in implementations.<<ETX>>","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121119378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}