{"title":"Application of Multi-block Grid and Parallelization Techniques in Hydrodynamic Modelling","authors":"P. Luong, R. Chapman","doi":"10.1109/HPCMP-UGC.2009.78","DOIUrl":null,"url":null,"abstract":"The Curvilinear Hydrodynamic 3-D (CH3D-WES) model is routinely applied in three-dimensional (3D) hydrodynamic and water quality modeling studies at the Engineering Research and Development Center (ERDC), Mississippi. Recent model improvements include the implementation of multiple grain size class sediment transport, grid wetting/drying, spatially and temporally varying wind and wave radiation stress gradient forcing. The practical application of the original single-block version of CH3D, which include the aforementioned model improvements have been limited to small computational domains and short simulation time periods, due to long computational processing time as well as large memory requirements. Critical to elimination of these restrictions was the implementation of data decomposition and Message Passing Interface (MPI), or a multi-block grid capability. The advantages of the multi-block grid parallel version of CH3D include: 1) the flexibility of site specific horizontal and vertical grid resolution assigned to each grid block, 2) block specific application of the sediment transport, wave radiation stress gradient forcing and computational cell wetting/drying model options, and 3) reduced memory and computational time requirements allowing larger computational domains and longer simulation time periods. To demonstrate the advantages of the multiblock capability, hydrodynamic and salinity transport simulations were performed utilizing the existing Mississippi Sound and Berwick Bay computational domains. A comparison of single block and multi-block predictions of salinity time series is presented. 
CPU wallclock times and load balancing between single-grid and multi-block grid on several high performance computer systems is discussed.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 DoD High Performance Computing Modernization Program Users Group Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPCMP-UGC.2009.78","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
The Curvilinear Hydrodynamic 3-D (CH3D-WES) model is routinely applied in three-dimensional (3D) hydrodynamic and water quality modeling studies at the Engineering Research and Development Center (ERDC), Mississippi. Recent model improvements include the implementation of multiple grain size class sediment transport, grid wetting/drying, and spatially and temporally varying wind and wave radiation stress gradient forcing. Practical application of the original single-block version of CH3D, which includes the aforementioned model improvements, has been limited to small computational domains and short simulation time periods due to long computational processing times and large memory requirements. Critical to eliminating these restrictions was the implementation of data decomposition with the Message Passing Interface (MPI), i.e., a multi-block grid capability. The advantages of the multi-block grid parallel version of CH3D include: 1) the flexibility of site-specific horizontal and vertical grid resolution assigned to each grid block, 2) block-specific application of the sediment transport, wave radiation stress gradient forcing, and computational cell wetting/drying model options, and 3) reduced memory and computational time requirements, allowing larger computational domains and longer simulation time periods. To demonstrate the advantages of the multi-block capability, hydrodynamic and salinity transport simulations were performed utilizing the existing Mississippi Sound and Berwick Bay computational domains. A comparison of single-block and multi-block predictions of salinity time series is presented. CPU wallclock times and load balancing between the single-grid and multi-block grid versions on several high performance computing systems are discussed.
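The load-balancing trade-off described above — blocks of differing horizontal and vertical resolution assigned across MPI ranks — can be illustrated with a minimal sketch. This is not the CH3D-WES code; the block sizes, the greedy assignment heuristic, and the imbalance metric (max rank load over mean rank load) are illustrative assumptions only:

```python
# Illustrative sketch (NOT the actual CH3D-WES implementation):
# estimating load balance when a multi-block grid with per-block
# resolution is distributed over a number of MPI ranks.

def block_cells(blocks):
    """Cell count per block; each block is an (ni, nj, nk) tuple."""
    return [ni * nj * nk for ni, nj, nk in blocks]

def load_imbalance(blocks, n_ranks):
    """Greedy largest-first assignment of blocks to ranks.

    Returns (per-rank cell counts, imbalance ratio = max / mean).
    A ratio near 1.0 means the ranks are well balanced.
    """
    loads = [0] * n_ranks
    for cells in sorted(block_cells(blocks), reverse=True):
        # Place each block on the currently least-loaded rank.
        i = loads.index(min(loads))
        loads[i] += cells
    mean = sum(loads) / n_ranks
    return loads, max(loads) / mean

# Hypothetical resolutions: one fine nearshore block and two coarser
# offshore blocks (sizes are made up for illustration).
blocks = [(200, 150, 20), (100, 80, 10), (120, 90, 10)]
loads, ratio = load_imbalance(blocks, n_ranks=2)
print(loads, ratio)
```

In this sketch the single fine block dominates, so two ranks remain imbalanced; this mirrors the paper's point that block layout, not just rank count, governs wallclock time on a multi-block grid.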