{"title":"Mesh-based Super-Resolution of Fluid Flows with Multiscale Graph Neural Networks","authors":"Shivam Barwey, Pinaki Pal, Saumil Patel, Riccardo Balin, Bethany Lusch, Venkatram Vishwanath, Romit Maulik, Ramesh Balakrishnan","doi":"arxiv-2409.07769","DOIUrl":null,"url":null,"abstract":"A graph neural network (GNN) approach is introduced in this work which\nenables mesh-based three-dimensional super-resolution of fluid flows. In this\nframework, the GNN is designed to operate not on the full mesh-based field at\nonce, but on localized meshes of elements (or cells) directly. To facilitate\nmesh-based GNN representations in a manner similar to spectral (or finite)\nelement discretizations, a baseline GNN layer (termed a message passing layer,\nwhich updates local node properties) is modified to account for synchronization\nof coincident graph nodes, rendering compatibility with commonly used\nelement-based mesh connectivities. The architecture is multiscale in nature,\nand is comprised of a combination of coarse-scale and fine-scale message\npassing layer sequences (termed processors) separated by a graph unpooling\nlayer. 
The coarse-scale processor embeds a query element (alongside a set\nnumber of neighboring coarse elements) into a single latent graph\nrepresentation using coarse-scale synchronized message passing over the element\nneighborhood, and the fine-scale processor leverages additional message passing\noperations on this latent graph to correct for interpolation errors.\nDemonstration studies are performed using hexahedral mesh-based data from\nTaylor-Green Vortex flow simulations at Reynolds numbers of 1600 and 3200.\nThrough analysis of both global and local errors, the results ultimately show\nhow the GNN is able to produce accurate super-resolved fields compared to\ntargets in both coarse-scale and multiscale model configurations.\nReconstruction errors for fixed architectures were found to increase in\nproportion to the Reynolds number, while the inclusion of surrounding coarse\nelement neighbors was found to improve predictions at Re=1600, but not at\nRe=3200.","PeriodicalId":501125,"journal":{"name":"arXiv - PHYS - Fluid Dynamics","volume":"59 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Fluid Dynamics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07769","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This work introduces a graph neural network (GNN) approach that enables
mesh-based three-dimensional super-resolution of fluid flows. In this
framework, the GNN operates not on the full mesh-based field at
once, but directly on localized meshes of elements (or cells). To facilitate
mesh-based GNN representations in a manner similar to spectral (or finite)
element discretizations, a baseline GNN layer (termed a message passing layer,
which updates local node properties) is modified to account for synchronization
of coincident graph nodes, rendering compatibility with commonly used
element-based mesh connectivities. The architecture is multiscale in nature,
comprising coarse-scale and fine-scale message passing layer sequences
(termed processors) separated by a graph unpooling layer. The coarse-scale
processor embeds a query element (alongside a set
number of neighboring coarse elements) into a single latent graph
representation using coarse-scale synchronized message passing over the element
neighborhood, and the fine-scale processor leverages additional message passing
operations on this latent graph to correct for interpolation errors.
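The coincident-node synchronization described above can be sketched in a minimal form: element-based connectivities duplicate graph nodes along shared element faces, so a synchronized layer must reconcile the duplicates after each update. The helper below (a hypothetical illustration, not the paper's implementation; the `node_ids` mapping is an assumed input) averages features over all graph nodes that coincide at the same physical mesh node.

```python
import numpy as np

def synchronize_coincident_nodes(x, node_ids):
    """Average features over graph nodes sharing a physical mesh node.

    x        : (num_graph_nodes, num_features) node feature array
    node_ids : (num_graph_nodes,) integer ID of the physical mesh node each
               graph node coincides with (IDs repeat across element faces)

    Hypothetical sketch of the synchronization idea; in the paper this step
    is folded into the message passing layer itself.
    """
    n_phys = node_ids.max() + 1
    counts = np.zeros(n_phys)
    np.add.at(counts, node_ids, 1.0)          # multiplicity of each physical node
    summed = np.zeros((n_phys, x.shape[1]))
    np.add.at(summed, node_ids, x)            # sum duplicate features
    mean = summed / counts[:, None]
    return mean[node_ids]                     # scatter synchronized values back
```

With this convention, duplicated boundary nodes carry identical features after every layer, mimicking the continuity enforced by element-based discretizations.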
Demonstration studies are performed using hexahedral mesh-based data from
Taylor-Green vortex flow simulations at Reynolds numbers of 1600 and 3200.
Through analysis of both global and local errors, the results ultimately show
how the GNN is able to produce accurate super-resolved fields compared to
targets in both coarse-scale and multiscale model configurations.
Reconstruction errors for fixed architectures were found to increase in
proportion to the Reynolds number, while the inclusion of surrounding coarse
element neighbors was found to improve predictions at Re=1600, but not at
Re=3200.
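The coarse-to-fine pipeline in the abstract (coarse processor, graph unpooling, fine processor) can be outlined as a minimal sketch. All names here are hypothetical, the message passing layer is reduced to a single sum-aggregate-and-transform step without the synchronization above, and the unpooling layer is stood in for by a fixed interpolation matrix; the actual architecture uses learned sequences of synchronized layers.

```python
import numpy as np

def message_passing(x, edges, W):
    """One toy message passing step: sum neighbor features into each
    receiver node, then apply a shared linear map with tanh activation."""
    agg = np.zeros_like(x)
    np.add.at(agg, edges[:, 1], x[edges[:, 0]])  # aggregate sender -> receiver
    return np.tanh((x + agg) @ W)

def super_resolve(x_coarse, edges_coarse, unpool, edges_fine, W_c, W_f):
    """Hypothetical coarse->unpool->fine pipeline mirroring the abstract.

    x_coarse : (n_coarse, d) features on the coarse element neighborhood
    unpool   : (n_fine, n_coarse) interpolation matrix lifting coarse to fine
    """
    h = message_passing(x_coarse, edges_coarse, W_c)   # coarse-scale processor
    h_fine = unpool @ h                                # graph unpooling layer
    return message_passing(h_fine, edges_fine, W_f)    # fine-scale correction
```

The fine-scale processor sees only the unpooled latent graph, which is why, per the abstract, its role is to correct interpolation errors rather than to re-embed the field from scratch.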