Dynamical system response under Gaussian and Poisson white noises solved by deep neural network method with adaptive task decomposition and progressive learning strategy.

Impact factor 2.4; CAS Tier 3 (Physics and Astronomy); JCR Q1 (Mathematics)
Wantao Jia, Xiaotong Feng, Yifan Zhao, Wanrong Zan
{"title":"Dynamical system response under Gaussian and Poisson white noises solved by deep neural network method with adaptive task decomposition and progressive learning strategy.","authors":"Wantao Jia, Xiaotong Feng, Yifan Zhao, Wanrong Zan","doi":"10.1103/PhysRevE.111.045309","DOIUrl":null,"url":null,"abstract":"<p><p>The forward Kolmogorov equation corresponding to a system under the combined excitation of Gaussian and Poisson white noises is an integrodifferential equation (IDE). In our recent study, we introduced GL-PINNs, which integrates the Gauss-Legendre (GL) quadrature with the physics-informed neural networks (PINNs) framework for solving time-dependent IDEs. However, we observed that in scenarios such as insufficient learning of initial conditions or dynamical systems with strong temporal dependencies, GL-PINNs produced inaccurate solutions despite achieving low training loss values. This issue primarily stems from the GL-PINNs framework's failure to account for temporal causality. To address this limitation, we develop a deep neural network method called ATD-GLPINNs, which integrates adaptive task decomposition and progressive learning strategy. This approach decomposes the complex task along the time axis into an initial subtask and several extra tasks, which enable progressive learning through adaptive adjustment of task parameters. As an extension of the GL-PINNs, this algorithm adheres to temporal causality by prioritizing the training of early subtasks and dynamically allocating additional computational resources to subsequent ones. Numerical experiments demonstrate that our suggested method converges with significantly fewer training epochs compared to GL-PINNs, making it not only more efficient and robust, but also capable of reducing computational costs while improving prediction accuracy.</p>","PeriodicalId":20085,"journal":{"name":"Physical review. E","volume":"111 4-2","pages":"045309"},"PeriodicalIF":2.4000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physical review. E","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1103/PhysRevE.111.045309","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Mathematics","Score":null,"Total":0}
Citations: 0

Abstract

The forward Kolmogorov equation corresponding to a system under the combined excitation of Gaussian and Poisson white noises is an integrodifferential equation (IDE). In our recent study, we introduced GL-PINNs, which integrate Gauss-Legendre (GL) quadrature with the physics-informed neural network (PINN) framework for solving time-dependent IDEs. However, we observed that in scenarios such as insufficient learning of initial conditions or dynamical systems with strong temporal dependencies, GL-PINNs produced inaccurate solutions despite achieving low training loss values. This issue stems primarily from the GL-PINNs framework's failure to account for temporal causality. To address this limitation, we develop a deep neural network method called ATD-GLPINNs, which combines adaptive task decomposition with a progressive learning strategy. This approach decomposes the complex task along the time axis into an initial subtask and several additional subtasks, enabling progressive learning through adaptive adjustment of task parameters. As an extension of GL-PINNs, the algorithm respects temporal causality by prioritizing the training of early subtasks and dynamically allocating additional computational resources to subsequent ones. Numerical experiments demonstrate that the proposed method converges in significantly fewer training epochs than GL-PINNs, making it not only more efficient and robust but also capable of reducing computational cost while improving prediction accuracy.
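For orientation, the integrodifferential structure referred to above can be written schematically, for a scalar state, as a Kolmogorov-Feller-type equation. The drift m(x), diffusion coefficient sigma^2(x), Poisson arrival rate lambda, and impulse-amplitude density phi(y) below are generic symbols used for illustration; the exact system treated in the paper may differ.

```latex
% Schematic scalar forward Kolmogorov (Kolmogorov-Feller) equation for a state
% driven by Gaussian white noise and Poisson white noise; all symbols are generic.
\begin{equation}
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\!\left[m(x)\,p(x,t)\right]
  + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\!\left[\sigma^{2}(x)\,p(x,t)\right]
  + \lambda \int_{-\infty}^{\infty} p(x-y,t)\,\varphi(y)\,\mathrm{d}y
  - \lambda\, p(x,t).
\end{equation}
```

The convolution-type integral is the term that GL-PINNs approximate with Gauss-Legendre quadrature inside the physics-informed loss.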

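The sketch below illustrates, under stated assumptions, the two ingredients the abstract describes: Gauss-Legendre quadrature applied to the jump integral of the IDE, and a time-axis decomposition in which the earliest sub-window is trained first while later sub-windows receive more epochs. All functions, coefficients, and hyperparameters (m, sigma2, lam, phi, the network size, and the epoch schedule) are illustrative assumptions, not the authors' implementation, and the fixed epoch schedule is only a crude stand-in for the adaptive allocation of the actual method.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical problem data: drift m(x), diffusion sigma^2(x), jump rate lam,
# and a Gaussian impulse-amplitude density phi(y) truncated to [-L, L].
m = lambda x: -x
sigma2 = lambda x: 0.2 * torch.ones_like(x)
lam, L = 0.5, 4.0
phi = lambda y: torch.exp(-y ** 2 / 0.5) / float(np.sqrt(0.5 * np.pi))

# Gauss-Legendre nodes/weights on [-1, 1], rescaled to the truncated interval [-L, L].
nodes, weights = np.polynomial.legendre.leggauss(32)
y_q = torch.tensor(nodes * L, dtype=torch.float32)
w_q = torch.tensor(weights * L, dtype=torch.float32)

# Small MLP surrogate for the probability density p(x, t).
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
p = lambda x, t: net(torch.cat([x, t], dim=1))


def pde_residual(x, t):
    """Residual of a schematic forward Kolmogorov (Kolmogorov-Feller) IDE."""
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = p(x, t)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    drift_x = torch.autograd.grad(m(x) * u, x, ones, create_graph=True)[0]
    diff_x = torch.autograd.grad(0.5 * sigma2(x) * u, x, ones, create_graph=True)[0]
    diff_xx = torch.autograd.grad(diff_x, x, ones, create_graph=True)[0]
    # GL quadrature of the jump integral  int p(x - y, t) phi(y) dy.
    x_shift = (x - y_q.view(1, -1)).reshape(-1, 1)        # (N*Q, 1)
    t_rep = t.expand(-1, y_q.numel()).reshape(-1, 1)      # (N*Q, 1)
    p_shift = p(x_shift, t_rep).reshape(x.shape[0], -1)   # (N, Q)
    jump = (w_q * phi(y_q) * p_shift).sum(dim=1, keepdim=True)
    return u_t + drift_x - diff_xx - lam * (jump - u)


# Assumed Gaussian initial condition, crudely normalized on the grid.
x0 = torch.linspace(-L, L, 256).view(-1, 1)
p0 = torch.exp(-x0 ** 2 / 0.2)
p0 = p0 / (p0.sum() * (2 * L / 255))

# Progressive learning over time sub-windows: earliest window first, later
# windows get extra epochs as a proxy for dynamically allocated resources.
T, n_tasks = 1.0, 4
edges = np.linspace(0.0, T, n_tasks + 1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for k in range(n_tasks):
    epochs = 500 + 250 * k
    for _ in range(epochs):
        x_c = (torch.rand(512, 1) * 2 - 1) * L
        t_c = torch.rand(512, 1) * float(edges[k + 1])   # includes earlier windows
        loss = (pde_residual(x_c, t_c) ** 2).mean() \
             + ((p(x0, torch.zeros_like(x0)) - p0) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Training the earliest sub-window to convergence before widening the collocation window is one simple way to enforce temporal causality; the paper's adaptive task decomposition adjusts the task parameters themselves, which this fixed schedule does not attempt to reproduce.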
Source journal: Physical Review E (Physics: Fluids and Plasmas)
CiteScore: 4.60
Self-citation rate: 16.70%
Average review time: 3.3 months

Journal description: Physical Review E (PRE), broad and interdisciplinary in scope, focuses on collective phenomena of many-body systems, with statistical physics and nonlinear dynamics as the central themes of the journal. Physical Review E publishes recent developments in biological and soft matter physics including granular materials, colloids, complex fluids, liquid crystals, and polymers. The journal covers fluid dynamics and plasma physics and includes sections on computational and interdisciplinary physics, for example, complex networks.