Projection-based model-order reduction for unstructured meshes with graph autoencoders

Liam K. Magargal (Department of Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, PA, United States); Parisa Khodabakhshi (Department of Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, PA, United States); Steven N. Rodriguez (Computational Multiphysics Systems Laboratory, United States Naval Research Laboratory, Washington, DC, United States); Justin W. Jaworski (Kevin T. Crofton Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA, United States); John G. Michopoulos (Computational Multiphysics Systems Laboratory, United States Naval Research Laboratory, Washington, DC, United States)
DOI: arxiv-2407.13669 (https://doi.org/arxiv-2407.13669)
Journal: arXiv - CS - Computational Engineering, Finance, and Science
Publication date: 2024-07-18
Publication type: Journal Article
Citation count: 0

Abstract

This paper presents a graph autoencoder architecture capable of performing projection-based model-order reduction (PMOR) on advection-dominated flows modeled by unstructured meshes. The autoencoder is coupled with the time integration scheme from a traditional deep least-squares Petrov-Galerkin projection and provides the first deployment of a graph autoencoder into a PMOR framework. The presented graph autoencoder is constructed with a two-part process that consists of (1) generating a hierarchy of reduced graphs to emulate the compressive abilities of convolutional neural networks (CNNs) and (2) training a message passing operation at each step in the hierarchy of reduced graphs to emulate the filtering process of a CNN. The resulting framework provides improved flexibility over traditional CNN-based autoencoders because it is extendable to unstructured meshes. To highlight the capabilities of the proposed framework, which is named geometric deep least-squares Petrov-Galerkin (GD-LSPG), we benchmark the method on a one-dimensional Burgers' equation problem with a structured mesh and demonstrate the flexibility of GD-LSPG by deploying it to a two-dimensional Euler equations model that uses an unstructured mesh. The proposed framework provides considerable improvement in accuracy for very low-dimensional latent spaces in comparison with traditional affine projections.
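The two-part construction described above (a hierarchy of reduced graphs for compression, plus a learned message-passing filter at each level) can be illustrated with a minimal numpy sketch. Everything here is an assumption for illustration: the ring-graph "mesh", mean-pooling of clusters, the `tanh` activation, and the function names `message_pass` and `coarsen` are stand-ins, not the paper's trained architecture.

```python
import numpy as np

def message_pass(X, A, W):
    # One message-passing layer: aggregate neighbor features through the
    # row-normalized adjacency, then apply a learned linear map + tanh,
    # emulating the filtering step of a CNN on an unstructured mesh.
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    return np.tanh((A / deg) @ X @ W)

def coarsen(X, A, clusters):
    # Pool nodes into clusters to form the next (reduced) graph in the
    # hierarchy, emulating the striding/compression of a CNN.
    n_coarse = clusters.max() + 1
    P = np.zeros((n_coarse, X.shape[0]))
    P[clusters, np.arange(X.shape[0])] = 1.0
    counts = P.sum(axis=1, keepdims=True)
    X_c = (P @ X) / counts                  # mean-pool node features
    A_c = (P @ A @ P.T > 0).astype(float)   # connect touching clusters
    np.fill_diagonal(A_c, 0.0)
    return X_c, A_c

rng = np.random.default_rng(0)
n, f = 8, 4                       # 8 mesh nodes, 4 features per node
X = rng.standard_normal((n, f))   # nodal state (e.g. conserved variables)
A = np.zeros((n, n))              # ring graph as a stand-in for a mesh
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

W = rng.standard_normal((f, f)) * 0.1
clusters = np.arange(n) // 2      # merge node pairs: 8 -> 4 nodes

H = message_pass(X, A, W)         # (1) filter, then
Xc, Ac = coarsen(H, A, clusters)  # (2) compress: one level of the hierarchy
print(Xc.shape, Ac.shape)         # (4, 4) (4, 4)
```

Stacking several such filter-then-coarsen levels, followed by a small dense map to the latent vector, yields an encoder; the decoder mirrors the process with unpooling and further message passing.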
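The LSPG-style time integration the abstract refers to can likewise be sketched under simplifying assumptions: at each time step, the latent state is chosen to minimize the norm of the full-order residual evaluated at the decoded state, via Gauss-Newton. The decoder `g`, the implicit-Euler residual `r` (for the toy ODE du/dt = -u), and all dimensions below are hypothetical stand-ins, not the paper's trained networks.

```python
import numpy as np

def g(z):
    # Toy nonlinear "decoder": latent vector -> full-order state.
    return np.concatenate([z, z**2])

def jac_g(z):
    # Jacobian dg/dz of the toy decoder.
    return np.vstack([np.eye(z.size), np.diag(2 * z)])

def r(u, u_prev, dt):
    # Implicit-Euler residual for du/dt = -u:  r(u) = u - u_prev + dt*u.
    return u - u_prev + dt * u

def lspg_step(z, u_prev, dt, iters=20):
    # Gauss-Newton on min_z || r(g(z)) ||: the test basis is the residual
    # Jacobian composed with the decoder Jacobian (here dr/du = (1+dt) I).
    for _ in range(iters):
        Jr = (1 + dt) * jac_g(z)
        dz = np.linalg.lstsq(Jr, r(g(z), u_prev, dt), rcond=None)[0]
        z = z - dz
    return z

z0 = np.array([1.0, 0.5])
u0 = g(z0)                        # previous full-order state
z1 = lspg_step(z0.copy(), u0, dt=0.1)
# Residual norm after the solve (reduced from the initial guess):
print(np.linalg.norm(r(g(z1), u0, 0.1)))
```

In GD-LSPG the decoder would be the trained graph autoencoder's decoder and `r` the discretized PDE residual; the least-squares structure of the update is what the "least-squares Petrov-Galerkin" name refers to.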