IEEE Parallel & Distributed Technology: Systems & Applications (Latest Publications)

High Performance Fortran
IEEE Parallel & Distributed Technology: Systems & Applications. Pub Date: 1993-02-01. DOI: 10.1109/88.219857
D. Loveman
Abstract: Discusses Fortran 90, its basis in Fortran 77, its implications for parallel machines, and the extensions developed for it by the High Performance Fortran Forum (HPFF). The HPFF is a coalition of computer vendors, government laboratories, and academic groups founded in 1992 to improve the performance and usability of Fortran 90 for computationally intensive applications on a wide variety of machines, including massively parallel single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) systems and vector processors, as well as vector processors. The article describes SIMD and MIMD systems, previous attempts to develop languages for them, the genesis of the HPFF, how the group actually worked, and the HPF programming model.
(An illustrative sketch of HPF-style directives follows this entry.)
Citations: 159
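Not part of the article above: a minimal sketch of the data-distribution style HPF layers on Fortran 90, assuming an HPF-capable compiler. The processor arrangement and array names are purely illustrative; to an ordinary Fortran compiler the !HPF$ lines are comments, so the program still compiles and runs serially.

```fortran
! Illustrative HPF sketch (assumed setup): block-distribute one array,
! align a second with it, and update both with a data-parallel FORALL
! (FORALL originated with HPF and was later standardized in Fortran 95).
PROGRAM hpf_sketch
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 1000
  REAL :: a(n), b(n)
  INTEGER :: i
!HPF$ PROCESSORS p(4)               ! logical arrangement of 4 processors
!HPF$ DISTRIBUTE a(BLOCK) ONTO p    ! contiguous block of a per processor
!HPF$ ALIGN b(j) WITH a(j)          ! keep b(j) on the same processor as a(j)

  b = 1.0
  FORALL (i = 1:n) a(i) = 2.0 * b(i) + REAL(i)   ! data-parallel, purely local update
  PRINT *, 'sum of a =', SUM(a)
END PROGRAM hpf_sketch
```

The directives only advise the compiler how to partition data across processors; the executable statements remain ordinary data-parallel Fortran, which is the portability argument the HPFF made.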
A glossary of parallel computing terminology
IEEE Parallel & Distributed Technology: Systems & Applications. Pub Date: 1993-02-01. DOI: 10.1109/88.219862
G. V. Wilson
Abstract: Terms associated with parallel and distributed computing technology are defined.
Citations: 23
Linear algebra libraries for high-performance computers: a personal perspective
IEEE Parallel & Distributed Technology: Systems & Applications. Pub Date: 1993-02-01. DOI: 10.1109/88.219856
J. Dongarra
Abstract: Reviews the Linpack software, released in 1979, for solving linear algebra problems on high-performance computers. The Linpack benchmark and standards development are discussed. Lapack, a linear algebra library that embodies ideas of locality of reference and data reuse, is described, along with its algorithm design, its advantages, and future developments.
(A brief usage sketch of a Lapack solver follows this entry.)
Citations: 16
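Not part of the article above: a minimal sketch of the kind of dense solve Lapack provides, using the standard DGESV routine (LU factorization with partial pivoting). It assumes a Lapack library is available to link against (for example, gfortran solve.f90 -llapack); the matrix values are arbitrary illustration data.

```fortran
! Minimal Lapack sketch: solve a small dense linear system A x = b with DGESV.
PROGRAM lapack_sketch
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 3
  DOUBLE PRECISION :: a(n, n), b(n)
  INTEGER :: ipiv(n), info

  ! Coefficient matrix (column-major, as Lapack expects) and right-hand side.
  a = RESHAPE([2.0d0, 1.0d0, 0.0d0, &
               1.0d0, 3.0d0, 1.0d0, &
               0.0d0, 1.0d0, 2.0d0], [n, n])
  b = [1.0d0, 2.0d0, 3.0d0]

  ! On exit, a holds the LU factors and b holds the solution x.
  CALL DGESV(n, 1, a, n, ipiv, b, n, info)

  IF (info == 0) THEN
    PRINT *, 'solution x =', b
  ELSE
    PRINT *, 'DGESV failed, info =', info
  END IF
END PROGRAM lapack_sketch
```

The blocked LU factorization underneath DGESV is where the locality-of-reference and data-reuse ideas mentioned in the abstract pay off: most of the work is cast as matrix-matrix operations that keep data in cache.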
Why there won't be apps: The problem with MPPs
IEEE Parallel & Distributed Technology: Systems & Applications. DOI: 10.1109/M-PDT.1994.329785
G. Bell
Abstract: Gordon Bell (gbell@mojave.stanford.edu): In spite of many years of research, massively funded, massively parallel (a.k.a. "scalable") computers aren't yet successful. Nor are they likely to be unless they undergo a massive transformation to leverage developments in the mainstream computer and communications industries. The latest threat comes from standard workstations and fast, low-latency networks based on ATM. Like MPPs, these networks offer size scalability (from fewer to more processors), but they also offer generation scalability (from previous to future generations) and space scalability (from multiple nodes in a box, to computers in multiple rooms, buildings, or geographic regions). Furthermore, these networks offer a critical capability that MPPs sorely lack: application compatibility with workstations and multiprocessor servers. The meager existence to date of special-purpose MPPs stems from four factors.
Citations: 4
Enterprise Wide offers secure remote client/server access [New Products]
IEEE Parallel & Distributed Technology: Systems & Applications. DOI: 10.1109/m-pdt.1995.414851
Dennis Taylor
Abstract: Techsmith's Enterprise Wide allows remote deployment and use of distributed data and services in a client/server environment. Enterprise Wide consists of software for both the remote client and the LAN gateway, as well as an intelligent communications adapter card for the LAN gateway PC. Dial-up telephone lines and standard modems connect remote workstations to the network. Version 2.6 offers two new security options and enhanced third-party security support, and will also support Windows 95 as a remote-access client. The product integrates into an existing security scheme, or it can provide remote-access security for organizations that have not yet adopted a system. If a company's security is based on Novell's NetWare, Enterprise Wide allows remote-access security integration, using the security features of Novell NetWare 3.x and 4.x in bindery emulation mode. This feature reduces training time for users, according to Techsmith: users access their networks as they do when they are directly on the LAN, and system administrators do not have to maintain separate databases for remote access. This security system can validate Enterprise Wide 2.5 workstations, or any workstation running a third-party Point-to-Point Protocol stack that supports Password Authentication Protocol authorization. This lets mixed-protocol environments exploit Novell's security services. Enterprise Wide's Security System lets companies without a LAN-based security system incorporate one for remote access. System administrators can designate particular phone numbers to call back, to enforce secure locations and to consolidate phone bills. The Security System runs on a separate system under Windows 3.x, Windows NT, or Windows 95. It features password protection for the Enterprise Wide user database; the ability to add, delete, or disable users (disabling prevents logins, but does not remove a user from the password file); global administration of user ID and password files for all Enterprise Wide gateways installed at the site; and user-database file encryption to protect against browsing. Enterprise Wide 2.6 supports security technologies using Terminal Access Controller Access Control Systems, such as Security Dynamics' SecurID and ACS/Server and Enigma Logic's SafeWord. Enterprise Wide uses an intelligent software agent, ProtocolAssist, to optimize the conversation over the remote link. ProtocolAssist applies intelligence at the remote workstation and the LAN gateway to allow more processing to be performed at full network speed instead of over the slower remote link, claims the company. It avoids message and response delays by having the gateway acknowledge messages locally, passing through only the data the remote client requires. Enterprise Wide 2.6 will be available during the third quarter of 1995. Prices, including unlimited workstation distribution and four concurrent asynchronous connections, start at $2,495.
Citations: 0
Parallel applications: The next frontier for the computer industry
IEEE Parallel & Distributed Technology: Systems & Applications. DOI: 10.1109/M-PDT.1994.329788
Irving Wladawsky-Berger
Abstract: I believe parallel computing represents a revolution on par with the development of the personal computer. The PC brought power to the people in their offices, their homes, their schools, and even their cars. Similarly, parallel computers will bring the power of the largest computers and their applications to many people. Parallel computers will speed progress in scientific and medical research, allow manufacturers to build all kinds of new products, offer new services along the information highways, and foster more effective education. When our industry was emerging, we needed lots of debate about the differences in designs of parallel computers, and I have a great deal of respect for the innovations the very smart hardware designers produced. Thanks to their efforts, we now have a variety of parallel products that elegantly lash together multiple processors to create computing power that scales almost beyond the imagination. However, I believe computer applications, and not computer architecture, will ultimately drive the market for parallel computing. Now that parallel processing is maturing, people will buy our machines in much the same way they buy cars. While certain auto enthusiasts and race-car drivers might be greatly interested in automotive breakthroughs (a new engine design, for example), most of us are less concerned with innovation under the hood than with how fast, and how far, the car will take us. Likewise, those of us who qualify as computer "nerds" may be fascinated with the latency and bandwidth of the latest switch design. But our potential customers would much rather hear about how fast the machine will run their critical applications or how far ahead of the competition the machine will take them. Of course, car buyers and computer buyers alike want to balance the raw performance of their new machines against their cost.
[Figure 1 caption: Purine Nucleoside Phosphorylase is an enzyme important in T-cell immunity. The structure shown here is the human form solved by Steven Ealick and his coworkers at Cornell. Based on X-ray structures of several molecules thought to be similar to intermediate stages of the reaction, the Ealick group wants to reconstruct and animate the enzyme reaction path. Courtesy Steven Ealick and his coworkers (Cornell University), and Richard Gillilan (Cornell Theory Center) for the scientific visualization.]
Citations: 1
The Prepare HPF Programming Environment
IEEE Parallel & Distributed Technology: Systems & Applications. DOI: 10.1109/M-PDT.1994.329808
A. Veen
Abstract: The European Prepare consortium has constructed an integrated programming environment to develop, analyze, and restructure HPF programs. The consortium consists of three industrial and six academic partners and is coordinated by ACE, Europe's leading compiler manufacturer. It represents most of Europe's expertise in automatic parallelization for distributed-memory computers, making directly available, for instance, the experience gained during the development of the Vienna Fortran Compilation System. The Prepare environment is based on three tightly integrated components. A parallelization engine transforms the source program's original data-parallel form into SPMD form. An interactive engine reports to the programmer the extent to which the system can parallelize the program, indicates the obstacles preventing parallelization, facilitates the removal of such obstacles, and provides performance measures. A compilation system generates highly optimized code that fully exploits the target platform's intraprocessor parallelism. The Prepare project's unique strength is the tight integration of these components. The interactive engine can access the internal representation of the compiler. The compiler and the parallelization engine use each other's analysis information and mutually influence each other's optimization decisions. This integration brings several advantages to the user. Interaction is much more natural, because the communication between the user and the system is always in terms of the original source program. The user does not have to be aware of the elaborate transformations performed by the compiler. Performance is much better, because the parallelizer, vectorizer, optimizer, and code generator all cooperate (rather than compete) to exploit the many performance-enhancing features that high-end massively parallel platforms provide. This is crucial because of the often complicated interaction between these features. Without special tools, this high level of integration is not compatible with the strong modularization required for software as complex as a parallelizing compiler. We adopted the Cosy compilation system developed in the Compare project. In Cosy, a large set of engines (concurrent tasks that each perform one algorithm) access a shared internal representation of the program, gradually transforming it and enriching it with analysis information. Compiling phases do not have to be ordered linearly, which is a great advantage for a compiler that combines vectorization, parallelization, and sophisticated optimizations. Another advantage is that on a (shared-memory) parallel host the engines work in parallel. We have found that the HPF subset is well designed, except for some loose ends concerning subprogram interfaces and the relation between multiple PROCESSOR directives. We question the usefulness of explicit dynamic distributions. To our surprise, much of the complexity of compiling HPF stems from its Fortran 90 base.
(A sketch of what an explicit dynamic distribution looks like follows this entry.)
Citations: 1
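Not part of the article above: an assumed, minimal sketch of the "explicit dynamic distribution" feature the authors question, using the full-HPF DYNAMIC and REDISTRIBUTE directives (these lie outside the HPF subset; the program, processor arrangement, and array are illustrative only).

```fortran
! Illustrative sketch of explicit dynamic distribution in full HPF
! (assumes an HPF-capable compiler; serial Fortran otherwise).
PROGRAM dynamic_dist_sketch
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 1024
  REAL :: a(n)
!HPF$ PROCESSORS p(8)
!HPF$ DYNAMIC a                     ! a may be redistributed at run time
!HPF$ DISTRIBUTE a(BLOCK) ONTO p    ! initial block distribution

  a = 0.0                           ! phase 1: updates use the block mapping

!HPF$ REDISTRIBUTE a(CYCLIC)        ! executable directive: switch mapping
  a = a + 1.0                       ! phase 2: updates use the cyclic mapping

  PRINT *, SUM(a)
END PROGRAM dynamic_dist_sketch
```

Because the mapping of a can change at run time, a parallelizing compiler can no longer resolve data placement statically, which is one plausible reason the Prepare authors question the feature's usefulness.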
Why MPPs?
IEEE Parallel & Distributed Technology: Systems & Applications. DOI: 10.1109/M-PDT.1994.329786
J. Cownie
Abstract: Mainframes are expensive. Workstations are cheap. PCs are cheaper. Conclusion: All computing should be done on PCs (or, at a pinch, workstations). This is, of course, a naive conclusion. Even ignoring the Grand Challenges and other problems that demand high performance, there are many other reasons why this conclusion is questionable, and more particularly why the implicit corollary that all computing is distributed (because that's where the PCs and workstations are) is also wrong. The most important of these are data security, data visibility, performance, and use.
Citations: 0
The testability of distributed real-time systems [Book Reviews]
IEEE Parallel & Distributed Technology: Systems & Applications. DOI: 10.1109/M-PDT.1995.414847
J. Zalewski
Abstract: Review of The Testability of Distributed Real-Time Systems by Werner Schutz. 144 pages, Kluwer Academic Publishers, Boston, 1993. ISBN 0-7923-9386-4. Dewire treats analysis and top-level design in one chapter, as is appropriate, deferring the discussion of detailed design to a separate chapter on "Construction." I find it difficult to understand, however, why the integration issues such as transaction management ended up in a chapter under the heading "Operations" and not in the chapter on detailed design. Also, the book discusses numerous techniques, methods, and tools without providing any references. I think this is a serious omission in a survey that covers such an important and relatively new area of computing. Even though Dewire's text does not give an adequate description of the newest developments in distributed computing, it does provide the reader with an exhaustive overview of major issues, methods, and tools. It is informative and well written. I would not hesitate to recommend it as a practical source of technical information on modern distributed systems.
Citations: 2
Understanding Petri nets
IEEE Parallel & Distributed Technology: Systems & Applications. DOI: 10.1007/978-3-642-33278-4
W. Reisig
Citations: 314