Title: Rendering of numerical flow simulations using MPI
Authors: J. Stone, M. Underwood
DOI: 10.1109/MPIDC.1996.534105
Published in: Proceedings. Second MPI Developer's Conference, July 1996
Results are presented from a parallel computational fluid dynamics (CFD) code combined with a ray tracing library for run-time visualization. Several factors make in-place rendering of CFD data preferable to using external rendering packages or dedicated graphics workstations: it avoids significant I/O to disks or to networked graphics workstations, and it allows simulations to be monitored as they progress. Because both codes use MPI (Message Passing Interface), combining them into a single application was straightforward. The use of MPI also allowed the two applications to run on several different parallel architectures, including networks of workstations, the Intel iPSC/860, the Intel Paragon, and the IBM SP2.