{"title":"Collective buffering: Improving parallel I/O performance","authors":"B. Nitzberg, V. Lo","doi":"10.1109/HPDC.1997.622371","DOIUrl":"https://doi.org/10.1109/HPDC.1997.622371","url":null,"abstract":"\"Parallel I/O\" is the support of a single parallel application run on many nodes; application data is distributed among the nodes, and is read or written to a single logical file, itself spread across nodes and disks. Parallel I/O is a mapping problem from the data layout in node memory to the file layout on disks. Since the mapping can be quite complicated and involve significant data movement, optimizing the mapping is critical for performance. We discuss our general model of the problem, describe four Collective Buffering algorithms we designed, and report experiments testing their performance on an Intel Paragon and an IBM SP2 both housed at NASA Ames Research Center. Our experiments show improvements of up to two order of magnitude over standard techniques and the potential to deliver peak performance with minimal hardware support.","PeriodicalId":243171,"journal":{"name":"Proceedings. The Sixth IEEE International Symposium on High Performance Distributed Computing (Cat. No.97TB100183)","volume":"26 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133882489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel hierarchical global illumination","authors":"Q. Snell, J. Gustafson","doi":"10.1109/HPDC.1997.622358","DOIUrl":"https://doi.org/10.1109/HPDC.1997.622358","url":null,"abstract":"This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like \"progressive radiosity\" methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for \"ray histories.\".","PeriodicalId":243171,"journal":{"name":"Proceedings. The Sixth IEEE International Symposium on High Performance Distributed Computing (Cat. No.97TB100183)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124224832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel programming in multi-paradigm clusters","authors":"J. Leichtl, Phyllis E. Crandall, M. Clement","doi":"10.1109/HPDC.1997.626438","DOIUrl":"https://doi.org/10.1109/HPDC.1997.626438","url":null,"abstract":"An important development in cluster computing is the availability of multiprocessor workstations. These are able to provide additional computational power to the cluster without increasing network overhead and allow multiparadigm parallelism, which we define to be the simultaneous application of both distributed and shared memory parallel processing techniques to a single problem. In this paper we compare execution times and speedup of parallel programs written in a pure message-passing paradigm with those that combine message passing and shared-memory primitives in the same application. We consider three basic applications that are common building blocks for many scientific and engineering problems: numerical integration, matrix multiplication and Jacobi iteration. Our results indicate that the added complexity of combining shared- and distributed-memory programming methods in the same program does not contribute sufficiently to performance to justify the added programming complexity.","PeriodicalId":243171,"journal":{"name":"Proceedings. The Sixth IEEE International Symposium on High Performance Distributed Computing (Cat. No.97TB100183)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132003640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A bus-efficient low-latency network interface for the PDSS multicomputer","authors":"C. Steele, J. Draper, J. Koller, C. LaCour","doi":"10.1109/HPDC.1997.626407","DOIUrl":"https://doi.org/10.1109/HPDC.1997.626407","url":null,"abstract":"The Packaging-Driven Scalable Systems multicomputer (PDSS) project uses several innovative interconnect and routing techniques to construct a low-latency, high-bandwidth (1.3 GB/s) multicomputer network. The PDSS network interface provides a low-latency interface between the network and the processing nodes that allows unprivileged code to initiate network operations while maintaining a high level of protection. The interface design exploits processor-bus cache coherence protocols to deliver very-low-latency cache-to-cache communications between processing nodes. Network operations include a variety of transfers of cache-line-sized packets, including remote read and write, and a distributed barrier-synchronization mechanism. Despite performance-limiting flaws, the initial single-chip implementation of the network router and interface achieves gigabit/s bandwidth and microsecond cache-to-cache latencies between nodes using commodity processor and memory components.","PeriodicalId":243171,"journal":{"name":"Proceedings. The Sixth IEEE International Symposium on High Performance Distributed Computing (Cat. No.97TB100183)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126924571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}