{"title":"Application development for distributed environments [Book Reviews]","authors":"M. Machura","doi":"10.1109/M-PDT.1995.414846","DOIUrl":"https://doi.org/10.1109/M-PDT.1995.414846","url":null,"abstract":"This is the second book in the James Martin/ McGraw-Hill Productivity Series aimed a t information systems professionals and managers. Incidentally , the first book was written by the same author and is called Client/Servei-Computing. The series focuses on current computing technologies in an attempt to meet new challenges that modern organizations face. I was attracted to the book by its title. With hindsight, however, I think that a more appropriate title would be \" Dezielopment Issues and Tools in Distributed Systewts. \" T h e book provides a comprehensive picture of all the major elements of distributed systems as of early 1993. The author takes a pragmatic approach by concentrating on prevailing technologies, such as relational databases, structured design methods, cliendserver architectures, 4GLs and GUI builders. Dewire also pays due attention to the available standards. Roughly two thirds of the book contains a general discussion of distributed systems; the remaining one third surveys various development tools. Part 1 presents the basic concepts, application development strategies, and components of distributed systems. Part 2 deals with analysis and top-level design, and Part 3 covers the construction of distributed systems (detailed design and implementation). Part 4, called \" Operations , \" contains a chapter on integration that surveys the important issues of transaction management , , .work management, and distributed computing environments. This section also has a chapter on production that discusses configuration and version control, sharing data, monitoring networks, and security. Part 5 presents commercial application development products for distributed systems: 4GLs, cliendserver tools, and CASE tools. The concluding chapter discusses future trends. Application Development far Distributed Envi-m w \" s stresses the importance of distributed, enterprise-wide information technology solutions in modern organizations that need to quickly respond to market changes and modify their business processes. Dewire estimates that 20% of the current distributed applications are mission-critical systems such as transaction-based operational 1%. The remaining 80% are less critical systems, such as information and decision support systems. CASE tools service the first category ; cliendserver development tools and 4GLs service the latter. As the technology matures, the cliendserver tools and CASE tools will merge, and 4GLs will evolve into flexible and efficient tools for cliendserver applications. As I mentioned earlier, the book covers the established distributed computing technologies. Dewire gives a rather careful, though insufficient, treatment to the emerging technologies such as distributed object computing. 
In fact, Dewire refrains from endorsing …","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129484003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applied Parallel Research's xHPF system","authors":"J. Levesque","doi":"10.1109/M-PDT.1994.329805","DOIUrl":"https://doi.org/10.1109/M-PDT.1994.329805","url":null,"abstract":"Applied Parallel Research takes a somewhat different approach to High Performance Fortran than do other implementors. APR feels the real power of HPF is in its comment line directives by which the user can drive an automatic parallelization system. Rather than treating HPF as an altemative to automatic parallelization, we believe that it can be a powerful aid for automatic parallelization of existing Fortran 77 programs. W e have arrived at this point of view after a considerable effort to provide source-code global analyzers and parallelizers with extensive capabilities for large, real-world, sequential Fortran 77 programs. For example, our xHPF system will parallelize very complex Fortran 77 DO loops rather than relying on the user to explicitly expose parallel operations by translating to Fortran 90 array syntax. HPF’s data-distribution directives let us provide batch automatic parallelization tools, such as xHPF, in contrast to our interactive Forge 90 Distributed-Memory Parallelizer, which requires the user to explicitly direct the data decomposition of the arrays in the program. xHPF also accepts Fortran 90 array syntax and extends HPF data-distribution rules. APRs approach has been to provide HPF compilation systems that let users more easily port existing sequential Fortran 77 programs to MPP systems. APR feels that the market for tools that port existing Fortran programs to MPP systems far exceeds the market for tools to develop parallel programs from scratch.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130208858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PGHPF from The Portland Group","authors":"V. Schuster","doi":"10.1109/M-PDT.1994.329807","DOIUrl":"https://doi.org/10.1109/M-PDT.1994.329807","url":null,"abstract":"PGHPF, The Portland Group’s HPF compiler, is now available for general distribution. Its initial release fully supports the HPF subset as defined in version 1 .O of the H P F Language Specification. A March 1995 release will support the full HPF language. PGHPF is available in two forms. A highly tuned version is integrated with PGI’s PGF77 Fortran compiler and produces executable images for most 8 6 0 and Sparc multiprocessor platforms. In this form, PGHPF will be the standard HPF compiler provided on the Intel Paragon and Meiko CS-2 scalable parallel processing systems. It will also be optimized for other 8 6 0 and SuperSparc sharedmemory multiprocessor systems. PGHPF is also available as a source-to-source translator that produces Fortran 77, incorporating calls to a portable communications library. This output, with linearized array references and de facto standard Cray pointer variable declarations, can then be used as input to standard node compilers. Both forms of the compiler use an internally defined transport-independent runtime library. This allows common source generation regardless of the target or the underlying communication mechanism (MPI, PVM, Parmacs, NX, or a targetcustom communication protocol). The runtime library for a specified target can thus be optimized outside the context of the compiler. PGI is developing optimized versions of the runtime library for the Intel Paragon, Meiko CS-2, SGI MP Challenge, SuperSparc workstation clusters, and Solaris shared-memory systems. Interfaces to PGHPF, including the runtime interface, will be open and freely available. This will let system vendors and researchers custom-tune for a specific target, and will facilitate integration with existing parallel support tools. The success of HPF as a standard depends on whether programmers can use it to implement efficient, portable versions of appropriate data-parallel applications. Based on that assumption, the highest priority for the initial release of PGHPF is completeness, correctness, and source portability. The initial release of PGHPF supports all of the HPF subset and will distribute and align data exactly as the programmer specifies, in as many dimensions as desired. Control parallelism will be exploited wherever possible as dictated by data distributions and language elements. PGI is spending significant effort to minimize the inefficiencies and overhead introduced to support the HPF paradigm. From a performance standpoint, minimization and efficiency of communication are most important. PGHPF incorporates optimizations that address both structured and unstructured communication. It can identify and exploit a program’s inherent structure through calls to structured asynchronous communication primitives. Examples of such primitives include collective shifts, the various forms of broadcast, and data reductions. Exploiting an application’s structure increases efficiency and performance portability. 
The asynchronous nature of the primitiv","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116943150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
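For readers who want a concrete picture of the "structured asynchronous communication primitives" mentioned in the PGHPF record above (collective shifts, broadcasts, data reductions), here is a minimal sketch of a nearest-neighbor shift written directly against MPI. It is not PGHPF's runtime interface or generated code, only an illustration of the communication pattern such a primitive encapsulates; the block size N and the array contents are invented for the example.

```c
/* Minimal sketch (not PGHPF's actual runtime API) of a structured "shift"
 * primitive: each rank sends its rightmost element to the next rank and
 * receives its left neighbor's rightmost element into a halo cell.
 * Build/run with an MPI toolchain, e.g.:  mpicc shift.c && mpirun -np 4 ./a.out
 */
#include <mpi.h>
#include <stdio.h>

#define N 8                            /* local block size per rank (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double a[N + 1] = {0.0};           /* a[0] is the halo cell for the left edge   */
    for (int i = 1; i <= N; i++)
        a[i] = rank * N + i;           /* fill the local block with global indices  */

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* Structured shift: send my last element right, receive the left
       neighbor's last element into the halo cell a[0].                  */
    MPI_Sendrecv(&a[N], 1, MPI_DOUBLE, right, 0,
                 &a[0], 1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d: halo value from left neighbor = %g\n", rank, a[0]);
    MPI_Finalize();
    return 0;
}
```

In an HPF compiler, calls of this kind would be emitted by the compiler from the data-distribution directives rather than written by hand.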
{"title":"Alpha and oracle serve up very large memory [New Products]","authors":"","doi":"10.1109/m-pdt.1995.414853","DOIUrl":"https://doi.org/10.1109/m-pdt.1995.414853","url":null,"abstract":"The Alphaserver 8400 enterprise server and Alphaserver 8200 departmental server use the 300-MHz Alpha 21164 chip, which can operate a t a billion instructions per second, according to Digital. The servers combine Alpha 64-bit architecture and very large memory capacity (up to 14 Gbytes). They offer a choice of PCI, XMI, and Futurebus+ buses. The AlphaServers have reliability and availability features such as OpenVMS clusters, hot swap disks, RAID, redundant power, ECC memory and data paths, fault management, and uninterruptible power system. They are available with Digital Unix or Open VMS operating systems. Digital also plans support for Windows NT. The Alphaserver 8200 features one to six processors and up to 6 Gbytes of memory. The base system costs $100,000. It includes one processor; power and packaging with a five-slot system bus for CPU, memory, and VO modules; 12 8 Mbytes of memory; an integrated VO module with SCSI and communication ports; a CDROM reader; and the OpenVMS or Digital Unix operating system. The Alphaserver 8400 features one to 12 processors and up to 14 Gbytes of memory. The base system, priced at $195,000, has the same basic configuration as the Alphaserver 8200 base system, but offers more expansion for additional CPU, memory, and VO connectivity, and twice the memory. T o support the AlphaServers, Oracle offers a very large memory option for its Oracle7 database. This option exploits the 64-bit Alpha architecture, Digital Unix, and the new server’s 14-Gbyte maximum main memory to allow a larger portion of the database to reside in memory. The option features two components: Large Systems Global Areas and Big Oracle Blocks. LSGAs are database buffer caches in excess of 2 Gbytes. According to Oracle, the LSGA is transparent to most applications, and application code does not have to be changed. BOBS support block sizes up to 32 Kbytes. Larger blocks allow more rows per block, meaning less overhead per row and fewer disk I/O requests when scanning tables, claims the company. Consequently, the database can move data from disk to memory and back much faster. Circle reader service number 23","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115481602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Portability and performance for parallel processing [Book Reviews]","authors":"M. Paprzycki","doi":"10.1109/M-PDT.1995.414848","DOIUrl":"https://doi.org/10.1109/M-PDT.1995.414848","url":null,"abstract":"Portability and Performance for Parallel Processing edited by Tony Hey and Jeanne Ferrante 272 pages $49.95 John Wiley &Sons, Chichester, UK 1994 ISBN 0-47 1-94246-4 retrieving data due to polling. This cost is negligible if the system is close to periodic, a somewhat uncommon situation in distributed environments. Schiitz then introduces the concept of testing a distributed system. He presents three ways to do a cluster test, where a cluster is a set of nodes forming a part of a distributed system:","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133895837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Massively parallel artificial intelligence [Review]","authors":"B. Mikolajczak","doi":"10.1109/M-PDT.1995.414836","DOIUrl":"https://doi.org/10.1109/M-PDT.1995.414836","url":null,"abstract":"collection falters due to its lack of organization. Even though each paper addresses an important point related to efficient portable parallel computing and is worth reading in its own right, the collection remains just an assembly dispersed around a common subject. This is especially true for the last two chapters. In addition, the book fails to address one of the most popular (if not necessarily the best) attempts at providing software support for efficient portable parallel computing-the Parallel Virtual Machine (PVM) project from Oak Ridge National Laboratory. Nor is there any discussion of the current research from the High Performance Fortran (HPF) and MPI projects. Although these projects were not conference subjects , they are consequential and merit discussion. Having said all this, I must emphasize that this text is important; it explores one of the areas that are crucial for the success of parallel computing. When the editors prepared the book no one could foresee that a number of parallel computer vendors would go out of business or that a wave of strong criticism would be raised against the High Performance Computing Research Program. It is clear now that these occurrences grew out the continued lack of development environments for efficient portable programs, which raised doubts about the endeavor's commercial viability. In summary, this book will be of definite interest to anyone who has professional interest in parallel computing: computer scientists as well as engineers. It is a valuable resource that will introduce them to a variety of issues related to achieving efficient portable parallel computing. Each chapter contains an appropriate number of references that should allow further investigation. At the same time, the book certainly does not aspire to provide a complete overview of the field or give definitive answers. Since most of the papers require an overall understanding of parallel computing (some chapters go into considerable detail) the book is not pamcularly suitable as a textbook. However, this collection can function as a source of individual articles for use in the classroom or self-instruction. This collection comprises 12 papers devoted to different aspects of artificial intelligence as perceived , motivated, and applied by recent progress in massively parallel computer technology. The first paper by Kitano sets the stage for the following presentations, as it gives an overview of potential and real applications of massively parallel processing in artificial intelligence. The remaining papers are devoted to the following …","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130321220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pacific Sierra's VAST-HPF and VAST/77toHPF","authors":"J. Vanderlip","doi":"10.1109/M-PDT.1994.329809","DOIUrl":"https://doi.org/10.1109/M-PDT.1994.329809","url":null,"abstract":"WHAT CLASS OF HPF PROGRAMS WILL PERFORM WELL? To perform well, HPF programs must spend almost all their time in sections of code that can be partitioned across the processors. They also must access data that resides on the local processor almost all the time, and send or receive data from other processors very infrequently. This means that an HPF program should spend its time almost completely in array syntax or loops that can be performed in parallel, and should be written so that references to arrays in loops are aligned and distributed in the same way. VAST-HPF performs well on shifted sections. Real programs often use sections of arrays that are offset in one or more dimensions. A common construct in grid-based computations is the use of slightly shifted sections of arrays in nearestneighbor computations. For blockdistributed arrays, this means that communication is needed at the boundaries of the blocks. VAST-HPF makes the local distribution of such arrays slightly larger so that the edge values can be communicated into this expanded region. It enhances data locality by passing messages only for the elements at the edge of the offset section. VAST-HPF also performs well on reductions. Reduction operations, such as the summation of array elements, occur frequently in real programs. VAST-HPF handles reductions by reducing the distributed part of the array calculation on each processor, passing the partial reductions to a single processor for the final reduction, and then broadcasting the final result to all processors.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114245221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UI-based design and development for client/server applications [Book Reviews]","authors":"M. Trayton","doi":"10.1109/M-PDT.1995.414849","DOIUrl":"https://doi.org/10.1109/M-PDT.1995.414849","url":null,"abstract":"In this text the authors state that they use the top 4GL integrated development environment products “to illustrate the development of a large-scale client/server application example.” Beginning very enthusiastically, the authors talk about the book as if it is the outcome of a very successful project. The authors intend the audience to be mainframe professionals who need help translating their skills for client-server, object-oriented, and graphical user interface applications (I abbreviate these terms as CS/OO/GUI). The book starts by using abbreviations that only computer professionals with a mainframe background would recognize. As the book moves on, CS/OO/GUI jargon becomes commonplace. The authors should have included a glossary. As it is, the reader must continually review previous pages to find the meaning of acronyms and abbreviations. This is annoying and it becomes clear that the reader needs quite a lot of knowledge of CS/OO/GUI to understand what the authors are saying. The authors make the assumption that, for the foreseeable future, CS/oO/GUI is the way to go in data processing and that the important new skills involve the “new wave” of CS/OO/GUI 4GL workbenches, such as PowerBuilder, Visual Basic, SQLWindows, and PARTS Workbench. The book includes discussions on the pros and cons of corporate mainframes, centralized data processing, multiple parallel processor machines, midrange minicomputers, workstations, and personal computers (“The key to CS”). The text then goes on to explain the different operating systems related to each type of hardware platform and their possibilities for the future. The authors consider relational databases to be the “cornerstone” of CS computing. They suggest that an insight into computer communication is critical in understanding CS. There then follows a general overview of computer communications and the background to CS systems. After an overview of CS/OO/GUI (this can be skipped by the more enlightened reader), the authors build a sample business application with each of the four products. The example application that the authors use is small enough to be built in a short amount of time by one person. The book emphasizes the importance of good GUI design, recommends adhering to GUI standards, encourages the use of meaningful variable names, and stresses the importance of building on-line help into a system. The authors discuss the hardware and software requirements in considerable detail; this is useful for those thinking about buying one of the products mentioned in the book. When describing SQLWindows, the authors briefly mention project management, but only in relation to the facilities available for this in the product. The authors encourage interactive development, but unfortunately they approach development by prototyping and hacking the application together. The authors spend almost no time discussing the development of larger systems that require a project team. 
On first seeing the book, I t","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115259449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Architectures with parallel I/O subsystems","authors":"","doi":"10.1109/m-pdt.1995.414860","DOIUrl":"https://doi.org/10.1109/m-pdt.1995.414860","url":null,"abstract":"Here are some examples, in approximate chronological order, of massively parallel machines that include a parallel I/O subsystem: 0 Intel iPSC hypercubes: Each hyper-cube node has an extra link that allows an YO processor to hook onto it. Thus, the number of 1/0 processors can grow to the number of hypercube nodes. In the latest version (the iPSC/860), hypercube nodes are based on the i860 microprocessor , whereas 1/0 processors use an 803 86 chip. Each I/O processor has a SCSI bus with one or more disks, and services requests from all hypercube nodes. Requests and data are routed through the node to which the 1/0 processor connects. 0 nCube hypercubes: Like the iPSC, nodes have an e m connection to an YO processor. Each VO processor connects directly to up to eight nodes.' The processors use a proprietary chip design. MasPar: A SIMD machine with up to 16K processors.' A grid and a three-stage router network connect the processors. The router also connects to a special IORAM of up to 1 Gbyte. This allows permutation of the data between the processor array and the I O W. The I O W , in turn, connects to multiple disk arrays via an YO channel. Each disk array is a RAID 3 arrangement with eight data disks and one parity disk. Intel Paragon XP/S: A mesh-suuc-tured machine that allows different configurations of compute nodes and U 0 nodes. Compute nodes are based on the 8 6 0 microprocessor. Typically, the VO nodes are concentrated in one or more rectangular I/O partitions. The Paragon is based on experience with the Touchstone Delta prototype, a 16 x 36 mesh with 5 13 processing nodes and 42 VO nodes (32 with disks and 10 with tape^).^ kSR1: A multiprocessor based on the Allcache memory design, with up to 1,088 custom processors. Each processor can connect to an adapter for external communications. One of the options is the Multiple Channel Disk adapter, which has five SCSI controllers. Each node can have up to 20 disks attached to it, in increments of five. Software configuration allows nodes with VO devices to be used exclusively for VO, or also for computation. Thinking Machines CM-Y: A multi-computer based on a fat-tree network and Sparc nodes with optional vector units. I/O is provided by a scalable disk array, which is implemented as a separate partition of disk-storage nodes4 Each …","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125416140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Taraflops into laptops","authors":"S. Wallach","doi":"10.1109/M-PDT.1994.329787","DOIUrl":"https://doi.org/10.1109/M-PDT.1994.329787","url":null,"abstract":"BANDWIDTH W e need a t least 100 Mbyte/sec/node, which after the normal expansion for head-ers and ECC is around 1 Gbidsec of raw data on the link. This represents 22 T3 (44.736-Mbidsec) interfaces per node! LATENCY W e need an end-to-end latency through the switch network which is in line with the rest of the store hierarchy. If we look a t current processors, we see performance characteristics something like this for the different levels of the store hierarchy: Level Clocks Slowdown Register 1 Level 1 cache 2-3 2-3 Level 2 cache 6-10 2-3 Store 2 0+ 2-3 So each level down the hierarchy is a factor of 2 or 3 slower than the previous one. If we view store accessed over the switch as the next level of the memory hierarchy, this implies that we want to achieve an access through the switch in around 40-60 CPU cycles-that is, in 400-600 nanoseconds for a 1 00-MHz clocked C P U (probably a low estimate). ATiM is currently viewed as the lowest latency nonproprietary switch structure, but such switches have a single switch latency of around 1.25 sec; this implies a full switch network latency of around 4 Fsec for a 256-node machine, a factor of 10 too large. So far I have ignored the latency in getting from a user request out to the switch network. If the network is accessed as a communications device (as will happen with a naive ATM interface), this will involve system calls and the kernel of the operating system. Many thousands of instructions will be executed, translating Teraflops into laptops Stl?UP WUllUCh. COYlZE'X At a recent meeting of the High Performance Computing and Communications and Information Technology Subconi-mittee, the topic was software for scalable parallel processing. Various suppliers of hardware systems and software applications participated, including me. The consensus was that standard third-party software was beginning to emerge on scalable parallel processors, and that as a result, a new world of computing was coming. One participant went so far as to state that \" one day we will run parallelized finite element code on a laptop. \" I share the same view: Scalable parallel processing (SPP) will be the norm, and will pervade all computing from the laptop to the teraflop. For server systems costing $50,000 or more, parallel processors will be standard in the next year, with price erosion of 1 …","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"405 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134474325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}