Tensor decomposition-based neural operator with dynamic mode decomposition for parameterized time-dependent problems

IF 3.8 | Zone 2, Physics and Astronomy | Q2, Computer Science, Interdisciplinary Applications
Yuanhong Chen, Yifan Lin, Xiang Sun, Chunxin Yuan, Zhen Gao
{"title":"基于张量分解的动态模态分解神经算子用于参数化时相关问题","authors":"Yuanhong Chen ,&nbsp;Yifan Lin ,&nbsp;Xiang Sun ,&nbsp;Chunxin Yuan ,&nbsp;Zhen Gao","doi":"10.1016/j.jcp.2025.113996","DOIUrl":null,"url":null,"abstract":"<div><div>Deep operator networks (DeepONets), as a powerful tool to approximate nonlinear mappings between different function spaces, have gained significant attention recently for applications in modeling parameterized partial differential equations. However, limited by the poor extrapolation ability of purely data-driven neural operators, these models tend to fail in predicting solutions with high accuracy outside the training time interval. To address this issue, a novel operator learning framework, TDMD-DeepONet, is proposed in this work, based on tensor train decomposition (TTD) and dynamic mode decomposition (DMD). We first demonstrate the mathematical agreement of the representation of TTD and DeepONet. Then the TTD is performed on a higher-order tensor consisting of given spatial-temporal snapshots collected under a set of parameter values to generate the parameter-, space- and time-dependent cores. DMD is then utilized to model the evolution of the time-dependent core, which is combined with the space-dependent cores to represent the trunk net. Similar to DeepONet, the branch net employs a neural network, with the parameters as inputs and outputs merged with the trunk net for prediction. Furthermore, the feature-enhanced TDMD-DeepONet (ETDMD-DeepONet) is proposed to improve the accuracy, in which an additional linear layer is incorporated into the branch network compared with TDMD-DeepONet. The input to the linear layer is obtained by projecting the initial conditions onto the trunk network. The proposed methods' good performance is demonstrated through several classical examples, in which the results demonstrate that the new methods are more accurate in forecasting solutions than the standard DeepONet.</div></div>","PeriodicalId":352,"journal":{"name":"Journal of Computational Physics","volume":"533 ","pages":"Article 113996"},"PeriodicalIF":3.8000,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Tensor decomposition-based neural operator with dynamic mode decomposition for parameterized time-dependent problems\",\"authors\":\"Yuanhong Chen ,&nbsp;Yifan Lin ,&nbsp;Xiang Sun ,&nbsp;Chunxin Yuan ,&nbsp;Zhen Gao\",\"doi\":\"10.1016/j.jcp.2025.113996\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Deep operator networks (DeepONets), as a powerful tool to approximate nonlinear mappings between different function spaces, have gained significant attention recently for applications in modeling parameterized partial differential equations. However, limited by the poor extrapolation ability of purely data-driven neural operators, these models tend to fail in predicting solutions with high accuracy outside the training time interval. To address this issue, a novel operator learning framework, TDMD-DeepONet, is proposed in this work, based on tensor train decomposition (TTD) and dynamic mode decomposition (DMD). We first demonstrate the mathematical agreement of the representation of TTD and DeepONet. Then the TTD is performed on a higher-order tensor consisting of given spatial-temporal snapshots collected under a set of parameter values to generate the parameter-, space- and time-dependent cores. 
DMD is then utilized to model the evolution of the time-dependent core, which is combined with the space-dependent cores to represent the trunk net. Similar to DeepONet, the branch net employs a neural network, with the parameters as inputs and outputs merged with the trunk net for prediction. Furthermore, the feature-enhanced TDMD-DeepONet (ETDMD-DeepONet) is proposed to improve the accuracy, in which an additional linear layer is incorporated into the branch network compared with TDMD-DeepONet. The input to the linear layer is obtained by projecting the initial conditions onto the trunk network. The proposed methods' good performance is demonstrated through several classical examples, in which the results demonstrate that the new methods are more accurate in forecasting solutions than the standard DeepONet.</div></div>\",\"PeriodicalId\":352,\"journal\":{\"name\":\"Journal of Computational Physics\",\"volume\":\"533 \",\"pages\":\"Article 113996\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2025-04-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computational Physics\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0021999125002797\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Physics","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0021999125002797","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Deep operator networks (DeepONets), as a powerful tool for approximating nonlinear mappings between function spaces, have recently gained significant attention for modeling parameterized partial differential equations. However, limited by the poor extrapolation ability of purely data-driven neural operators, these models tend to fail to predict solutions accurately outside the training time interval. To address this issue, this work proposes a novel operator learning framework, TDMD-DeepONet, based on tensor train decomposition (TTD) and dynamic mode decomposition (DMD). We first demonstrate the mathematical agreement between the representations of TTD and DeepONet. TTD is then performed on a higher-order tensor of spatial-temporal snapshots collected under a set of parameter values, producing parameter-, space-, and time-dependent cores. DMD models the evolution of the time-dependent core, which is combined with the space-dependent cores to represent the trunk net. As in DeepONet, the branch net is a neural network that takes the parameters as inputs; its outputs are merged with the trunk net for prediction. Furthermore, a feature-enhanced TDMD-DeepONet (ETDMD-DeepONet) is proposed to improve accuracy: compared with TDMD-DeepONet, an additional linear layer is incorporated into the branch network, whose input is obtained by projecting the initial conditions onto the trunk network. The good performance of the proposed methods is demonstrated on several classical examples; the results show that the new methods forecast solutions more accurately than the standard DeepONet.
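To make the pipeline concrete, below is a minimal NumPy sketch of the two linear-algebra building blocks the abstract describes: a truncated TT-SVD that factors a third-order snapshot tensor (parameter x space x time) into parameter-, space-, and time-dependent cores, and a plain DMD fit that advances the time-dependent core beyond the training window. The function names, the rank choices, and the toy snapshot field are illustrative assumptions; the paper's trunk/branch networks, the ETDMD projection layer, and the training procedure are not reproduced here.

```python
import numpy as np

def tt_svd(tensor, ranks):
    """Truncated TT-SVD: factor an order-d tensor into d TT cores
    G_k of shape (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1."""
    d, shape = tensor.ndim, tensor.shape
    cores, r_prev, C = [], 1, tensor.copy()
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)       # mode-k unfolding
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(ranks[k], S.size)                  # truncate to the target rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = S[:r, None] * Vt[:r]                   # carry remainder to next core
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def dmd_extrapolate(core_t, n_future):
    """Fit a one-step linear map A ~ Y X^+ on the time-dependent core
    (columns = uniformly spaced time steps) and roll it past the data."""
    X, Y = core_t[:, :-1], core_t[:, 1:]
    A = Y @ np.linalg.pinv(X)                      # small (r x r) DMD operator
    g, preds = core_t[:, -1], []
    for _ in range(n_future):
        g = A @ g
        preds.append(g)
    return np.stack(preds, axis=1)                 # shape (r, n_future)

# Toy snapshot tensor: P parameters x N grid points x T time steps.
P, N, T = 5, 64, 40
p = np.linspace(0.5, 1.5, P)[:, None, None]
x = np.linspace(0.0, 2 * np.pi, N)[None, :, None]
t = np.linspace(0.0, 4.0, T)[None, None, :]
u = (1 + p) * np.sin(x) * np.cos(t) + p**2 * np.cos(x) * np.sin(t)

G1, G2, G3 = tt_svd(u, ranks=[2, 2])               # parameter, space, time cores
future = dmd_extrapolate(G3[:, :, 0], n_future=10) # forecast past the window

# Recombine cores into the predicted field for the 10 extrapolated steps.
u_hat = np.einsum('ib,bjc,ck->ijk', G1[0], G2, future)
print(u_hat.shape)                                 # (5, 64, 10)
```

In the paper's framework, the extrapolated time core recombined with the space core plays the role of the trunk net, while a separate branch network maps the PDE parameters; the sketch stops at the decomposition-and-forecast stage that gives the method its extrapolation ability.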
Source journal
Journal of Computational Physics (Physics; Computer Science, Interdisciplinary Applications)
CiteScore: 7.60
Self-citation rate: 14.60%
Articles published: 763
Review time: 5.8 months
Journal description: Journal of Computational Physics thoroughly treats the computational aspects of physical problems, presenting techniques for the numerical solution of mathematical equations arising in all areas of physics. The journal seeks to emphasize methods that cross disciplinary boundaries. The Journal of Computational Physics also publishes short notes of 4 pages or less (including figures, tables, and references but excluding title pages). Letters to the Editor commenting on articles already published in this Journal will also be considered. Neither notes nor letters should have an abstract.