{"title":"Structured Parallel Programming on Multicomputers","authors":"Zhiwei Xu","doi":"10.1109/DMCC.1991.633130","DOIUrl":null,"url":null,"abstract":"Currently, parallel programs for distributed memory multicomputers are difficult to write, understand, test, and reason about. It is observed that these difficulties can be attributed to the lack of a structured style in current parallel programming practice. In this paper, we present a structured methodology to facilitate parallel program development on distributed memory multicomputers. The methodology aims to developing parallel programs that are determinate (the same input always produces the same output, in other words, the result is repeatable), terminating (the program is free of deadlock and other infinite waiting anomalies), and easy to understand and test. It also enables us to take advantage of the conventional, well established techniques of sofhvare engineering. ming to parallel program development. However, some new ideas are added to handle parallelism. The methodology contains three basic principles: (1) Use structured constructs; (2) develop determinate and terminating programs; (3) follow a two-phase design; (4) use a mathematical model to define semantics of parallel programs; and (5) employ computer aided techniques for analyzing and checking programs. Our basic approach is to combine these principles to cope with the complexity of parallel programming. As shown in Fig.1, while the total space of all parallel programs is very large, applying the first three principles drastically reduces the space to a subspace (Class IV). Since this subspace is much smaller, the programming task becomes simpler.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"26 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DMCC.1991.633130","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Currently, parallel programs for distributed memory multicomputers are difficult to write, understand, test, and reason about. We observe that these difficulties can be attributed to the lack of a structured style in current parallel programming practice. In this paper, we present a structured methodology to facilitate parallel program development on distributed memory multicomputers. The methodology aims at developing parallel programs that are determinate (the same input always produces the same output; in other words, the result is repeatable), terminating (the program is free of deadlock and other infinite-waiting anomalies), and easy to understand and test. It also enables us to take advantage of the conventional, well-established techniques of software engineering, extending the ideas of structured programming to parallel program development while adding some new ideas to handle parallelism. The methodology contains five basic principles: (1) use structured constructs; (2) develop determinate and terminating programs; (3) follow a two-phase design; (4) use a mathematical model to define the semantics of parallel programs; and (5) employ computer-aided techniques for analyzing and checking programs. Our basic approach is to combine these principles to cope with the complexity of parallel programming. As shown in Fig. 1, while the total space of all parallel programs is very large, applying the first three principles drastically reduces it to a subspace (Class IV). Since this subspace is much smaller, the programming task becomes simpler.
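To make the determinacy and termination properties concrete, the following is a minimal sketch only: the paper predates MPI, so this uses modern MPI-style C rather than the paper's own notation to illustrate what a determinate, deadlock-free communication structure looks like in practice. The function compute_partial() and the tag value are hypothetical and not taken from the paper.

```c
/*
 * Illustrative sketch, not the paper's method: a fixed, structured
 * send/receive pattern that is determinate (results do not depend on
 * message arrival order) and terminating (every send has exactly one
 * matching receive, so no process waits forever).
 */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical local computation: each process works on its own slice. */
static long compute_partial(int rank) {
    return (long)rank * 100;
}

int main(int argc, char **argv) {
    int rank, size;
    const int TAG = 7;                 /* fixed tag: messages cannot be confused */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long partial = compute_partial(rank);

    if (rank != 0) {
        /* Each worker sends exactly one message; rank 0 posts exactly one
         * matching receive for it below, so no process blocks forever. */
        MPI_Send(&partial, 1, MPI_LONG, 0, TAG, MPI_COMM_WORLD);
    } else {
        long total = partial;
        /* Receive from sources 1..size-1 in a FIXED order, never from a
         * wildcard source.  Network arrival order may vary between runs,
         * but the order in which results are consumed cannot, so the same
         * input always produces the same output. */
        for (int src = 1; src < size; src++) {
            long msg;
            MPI_Recv(&msg, 1, MPI_LONG, src, TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += msg;
        }
        printf("total = %ld\n", total);
    }

    MPI_Finalize();
    return 0;
}
```

Replacing the fixed source with a wildcard receive, or consuming messages in a data-dependent order, is exactly the kind of unstructured nondeterminacy that principles (1) and (2) are intended to rule out.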