{"title":"使用数十亿个单元网格进行工业级反应流化床模拟的高性能计算挑战与机遇","authors":"Hervé Neau , Renaud Ansart , Cyril Baudry , Yvan Fournier , Nicolas Mérigoux , Chaï Koren , Jérome Laviéville , Nicolas Renon , Olivier Simonin","doi":"10.1016/j.powtec.2024.120018","DOIUrl":null,"url":null,"abstract":"<div><p>Inside fluidized bed reactors, gas–solid flows are very complex: multi-scale, coupled, reactive, turbulent and unsteady. Accounting for them in an Euler-nfluid framework induces significantly expensive numerical simulations at academic scales and even more at industrial scales. 3D numerical simulations of gas–particle fluidized beds at industrial scales are limited by the High Performances Computing (HPC) capabilities of Computational Fluid Dynamics (CFD) software and by available computational power. In recent years, pre-Exascale supercomputers came into operation with better energy efficiency and continuously increasing computational resources.</p><p>The present article is a direct continuation of previous work, Neau et al. (2020) which demonstrated the feasibility of a massively parallel simulation of an industrial-scale polydispersed fluidized-bed reactor with a mesh of 1 billion cells. Since then, we tried to push simulations of these systems to their limits by performing large-scale computations on even more recent and powerful supercomputers, once again using up to the entirety of these supercomputers (up to 286,000 cores). We used the same fluidized bed reactor but with more refined unstructured meshes: 8 and 64 billion cells.</p><p>This article focuses on efficiency and performances of neptune_cfd code (based on Euler-nfluid approach) measured on several supercomputers with meshes of 1, 8 and 64 billion cells. It presents sensitivity studies conducted to improve HPC at these very large scales.</p><p>On the basis of these highly-refined simulations of industrial scale systems using pre-Exascale supercomputers with neptune_cfd, we defined the upper limits of simulations we can manage efficiently in terms of mesh size, count of MPI processes and of simulation time. One billion cells computations are the most refined computation for production. Eight billion cells computations perform well up to 60,000 cores from a HPC point of view with an efficiency <span><math><mo>></mo></math></span>85% but are still very expensive. The size of restart and mesh files is very large, post-processing is complicated and data management becomes near-impossible. 64 billion cells computations go beyond all limits: solver, supercomputer, MPI, file size, post-processing, data management. For these reasons, we barely managed to execute more than a few iterations.</p><p>Over the last 30 years, neptune_cfd HPC capabilities improved exponentially by tracking hardware evolution and by implementing state-of-the-art techniques for parallel and distributed computing. However, our last findings show that currently implemented MPI/Multigrid approaches are not sufficient to fully benefit from pre-Exascale system. 
This work allows us to identify current bottlenecks in neptune_cfd and to formulate guidelines for an upcoming Exascale-ready version of this code that will hopefully be able to manage even the most complex industrial-scale gas–particle systems.</p></div>","PeriodicalId":407,"journal":{"name":"Powder Technology","volume":null,"pages":null},"PeriodicalIF":4.5000,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HPC challenges and opportunities of industrial-scale reactive fluidized bed simulation using meshes of several billion cells on the route of Exascale\",\"authors\":\"Hervé Neau , Renaud Ansart , Cyril Baudry , Yvan Fournier , Nicolas Mérigoux , Chaï Koren , Jérome Laviéville , Nicolas Renon , Olivier Simonin\",\"doi\":\"10.1016/j.powtec.2024.120018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Inside fluidized bed reactors, gas–solid flows are very complex: multi-scale, coupled, reactive, turbulent and unsteady. Accounting for them in an Euler-nfluid framework induces significantly expensive numerical simulations at academic scales and even more at industrial scales. 3D numerical simulations of gas–particle fluidized beds at industrial scales are limited by the High Performances Computing (HPC) capabilities of Computational Fluid Dynamics (CFD) software and by available computational power. In recent years, pre-Exascale supercomputers came into operation with better energy efficiency and continuously increasing computational resources.</p><p>The present article is a direct continuation of previous work, Neau et al. (2020) which demonstrated the feasibility of a massively parallel simulation of an industrial-scale polydispersed fluidized-bed reactor with a mesh of 1 billion cells. Since then, we tried to push simulations of these systems to their limits by performing large-scale computations on even more recent and powerful supercomputers, once again using up to the entirety of these supercomputers (up to 286,000 cores). We used the same fluidized bed reactor but with more refined unstructured meshes: 8 and 64 billion cells.</p><p>This article focuses on efficiency and performances of neptune_cfd code (based on Euler-nfluid approach) measured on several supercomputers with meshes of 1, 8 and 64 billion cells. It presents sensitivity studies conducted to improve HPC at these very large scales.</p><p>On the basis of these highly-refined simulations of industrial scale systems using pre-Exascale supercomputers with neptune_cfd, we defined the upper limits of simulations we can manage efficiently in terms of mesh size, count of MPI processes and of simulation time. One billion cells computations are the most refined computation for production. Eight billion cells computations perform well up to 60,000 cores from a HPC point of view with an efficiency <span><math><mo>></mo></math></span>85% but are still very expensive. The size of restart and mesh files is very large, post-processing is complicated and data management becomes near-impossible. 64 billion cells computations go beyond all limits: solver, supercomputer, MPI, file size, post-processing, data management. For these reasons, we barely managed to execute more than a few iterations.</p><p>Over the last 30 years, neptune_cfd HPC capabilities improved exponentially by tracking hardware evolution and by implementing state-of-the-art techniques for parallel and distributed computing. 
However, our last findings show that currently implemented MPI/Multigrid approaches are not sufficient to fully benefit from pre-Exascale system. This work allows us to identify current bottlenecks in neptune_cfd and to formulate guidelines for an upcoming Exascale-ready version of this code that will hopefully be able to manage even the most complex industrial-scale gas–particle systems.</p></div>\",\"PeriodicalId\":407,\"journal\":{\"name\":\"Powder Technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2024-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Powder Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0032591024006624\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, CHEMICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Powder Technology","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0032591024006624","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, CHEMICAL","Score":null,"Total":0}
HPC challenges and opportunities of industrial-scale reactive fluidized bed simulation using meshes of several billion cells on the route of Exascale
Inside fluidized bed reactors, gas–solid flows are very complex: multi-scale, coupled, reactive, turbulent and unsteady. Accounting for them in an Euler n-fluid framework leads to very expensive numerical simulations at academic scales, and even more so at industrial scales. 3D numerical simulations of gas–particle fluidized beds at industrial scales are limited by the High Performance Computing (HPC) capabilities of Computational Fluid Dynamics (CFD) software and by the available computational power. In recent years, pre-Exascale supercomputers have come into operation, offering better energy efficiency and continuously increasing computational resources.
The present article is a direct continuation of previous work (Neau et al., 2020), which demonstrated the feasibility of a massively parallel simulation of an industrial-scale polydispersed fluidized-bed reactor on a mesh of 1 billion cells. Since then, we have tried to push simulations of these systems to their limits by performing large-scale computations on even more recent and powerful supercomputers, once again using up to the entirety of these machines (up to 286,000 cores). We used the same fluidized bed reactor but with more refined unstructured meshes of 8 and 64 billion cells.
This article focuses on the efficiency and performance of the neptune_cfd code (based on the Euler n-fluid approach) measured on several supercomputers with meshes of 1, 8 and 64 billion cells. It also presents the sensitivity studies conducted to improve HPC performance at these very large scales.
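As background for the efficiency figures discussed next, parallel performance at these scales is commonly reported as strong-scaling efficiency: the measured speedup divided by the ideal speedup expected from the increase in core count. The sketch below illustrates this definition only; the core counts and timings in it are hypothetical placeholders, not measurements from the article.

```python
# Generic strong-scaling efficiency: measured speedup over ideal speedup.
# All values below are illustrative; they are not data from the paper.

def strong_scaling_efficiency(cores_ref, time_ref, cores, time):
    """Return efficiency in [0, 1] relative to a reference run."""
    speedup = time_ref / time          # how much faster the larger run is
    ideal_speedup = cores / cores_ref  # what perfect scaling would give
    return speedup / ideal_speedup

# Hypothetical example: doubling the cores cuts the time per step from 100 s
# to 55 s, giving (100/55) / 2 ~= 0.91, i.e. about 91% efficiency.
print(strong_scaling_efficiency(30_000, 100.0, 60_000, 55.0))
```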
On the basis of these highly refined simulations of industrial-scale systems performed with neptune_cfd on pre-Exascale supercomputers, we defined the upper limits of the simulations we can manage efficiently in terms of mesh size, number of MPI processes and simulation time. One-billion-cell computations are the most refined that can be used for production. Eight-billion-cell computations perform well up to 60,000 cores from an HPC point of view, with an efficiency above 85%, but are still very expensive: restart and mesh files are very large, post-processing is complicated and data management becomes nearly impossible. Sixty-four-billion-cell computations exceed every limit: solver, supercomputer, MPI, file size, post-processing and data management. For these reasons, we were barely able to execute more than a few iterations.
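A back-of-envelope estimate makes the data-management problem concrete. The sketch below assumes, purely for illustration, 30 double-precision values stored per cell in a restart file; the actual count depends on the number of phases and transported variables and is not given in the article.

```python
# Rough storage estimate for a 64-billion-cell restart file.
# FIELDS_PER_CELL is a hypothetical assumption, not a value from the article.

CELLS = 64_000_000_000      # 64 billion cells
BYTES_PER_VALUE = 8         # double precision
FIELDS_PER_CELL = 30        # assumed: velocities, pressures, volume fractions, ...

per_field_gb = CELLS * BYTES_PER_VALUE / 1e9
restart_tb = per_field_gb * FIELDS_PER_CELL / 1e3
print(f"{per_field_gb:.0f} GB per scalar field, ~{restart_tb:.0f} TB per restart file")
# -> 512 GB per scalar field and roughly 15 TB per restart file, before mesh
#    connectivity data is even counted.
```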
Over the last 30 years, the HPC capabilities of neptune_cfd have improved exponentially by tracking hardware evolution and by implementing state-of-the-art techniques for parallel and distributed computing. However, our latest findings show that the currently implemented MPI/multigrid approaches are not sufficient to fully benefit from pre-Exascale systems. This work allows us to identify the current bottlenecks in neptune_cfd and to formulate guidelines for an upcoming Exascale-ready version of the code that will hopefully be able to manage even the most complex industrial-scale gas–particle systems.
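To make the MPI bottleneck more tangible, the sketch below shows the nearest-neighbour halo (ghost-cell) exchange pattern typical of domain-decomposed CFD solvers, whose relative cost grows as the mesh is split over more and more MPI processes. It is a generic illustration using a hypothetical 1D periodic decomposition, not the actual communication layer of neptune_cfd.

```python
# Generic halo exchange between neighbouring MPI ranks (run with mpirun/mpiexec).
# The 1D periodic decomposition and local size are hypothetical illustrations.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_local = 1000                             # cells owned by this rank (assumed)
field = np.full(n_local + 2, float(rank))  # one ghost cell at each end

left = (rank - 1) % size                   # periodic neighbours for simplicity
right = (rank + 1) % size

# Send the first owned cell to the left neighbour, receive the right ghost cell.
comm.Sendrecv(field[1:2], dest=left, sendtag=0,
              recvbuf=field[-1:], source=right, recvtag=0)
# Send the last owned cell to the right neighbour, receive the left ghost cell.
comm.Sendrecv(field[-2:-1], dest=right, sendtag=1,
              recvbuf=field[0:1], source=left, recvtag=1)

if rank == 0:
    print(f"halo update completed on {size} ranks")
```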
Journal introduction:
Powder Technology is an International Journal on the Science and Technology of Wet and Dry Particulate Systems. Powder Technology publishes papers on all aspects of the formation of particles and their characterisation and on the study of systems containing particulate solids. No limitation is imposed on the size of the particles, which may range from nanometre scale, as in pigments or aerosols, to that of mined or quarried materials. The following list of topics is not intended to be comprehensive, but rather to indicate typical subjects which fall within the scope of the journal's interests:
Formation and synthesis of particles by precipitation and other methods.
Modification of particles by agglomeration, coating, comminution and attrition.
Characterisation of the size, shape, surface area, pore structure and strength of particles and agglomerates (including the origins and effects of inter-particle forces).
Packing, failure, flow and permeability of assemblies of particles.
Particle-particle interactions and suspension rheology.
Handling and processing operations such as slurry flow, fluidization, pneumatic conveying.
Interactions between particles and their environment, including delivery of particulate products to the body.
Applications of particle technology in the production of pharmaceuticals, chemicals, foods, pigments, and structural and functional materials, and in environmental and energy-related matters.
For materials-oriented contributions we are looking for articles revealing the effect of particle/powder characteristics (size, morphology and composition, in that order) on material performance or functionality and, ideally, comparison to any industrial standard.