Automatic Differentiation is Essential in Training Neural Networks for Solving Differential Equations.

IF 3.3 · CAS Zone 2 (Mathematics) · JCR Q1 (MATHEMATICS, APPLIED)
Journal of Scientific Computing · Pub Date: 2025-08-01 · Epub Date: 2025-06-24 · DOI: 10.1007/s10915-025-02965-3
Chuqi Chen, Yahong Yang, Yang Xiang, Wenrui Hao
{"title":"Automatic Differentiation is Essential in Training Neural Networks for Solving Differential Equations.","authors":"Chuqi Chen, Yahong Yang, Yang Xiang, Wenrui Hao","doi":"10.1007/s10915-025-02965-3","DOIUrl":null,"url":null,"abstract":"<p><p>Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering, especially in scenarios featuring complex domains or incorporation of empirical data. One advantage of the neural network methods for PDEs lies in its automatic differentiation (AD), which necessitates only the sample points themselves, unlike traditional finite difference (FD) approximations that require nearby local points to compute derivatives. In this paper, we quantitatively demonstrate the advantage of AD in training neural networks. The concept of truncated entropy is introduced to characterize the training property. Specifically, through comprehensive experimental and theoretical analyses conducted on random feature models and two-layer neural networks, we discover that the defined truncated entropy serves as a reliable metric for quantifying the residual loss of random feature models and the training speed of neural networks for both AD and FD methods. Our experimental and theoretical analyses demonstrate that, from a training perspective, AD outperforms FD in solving PDEs.</p>","PeriodicalId":50055,"journal":{"name":"Journal of Scientific Computing","volume":"104 2","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12407148/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Scientific Computing","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s10915-025-02965-3","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/6/24 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering, especially in scenarios featuring complex domains or the incorporation of empirical data. One advantage of neural network methods for PDEs lies in their use of automatic differentiation (AD), which requires only the sample points themselves, unlike traditional finite difference (FD) approximations, which need nearby local points to compute derivatives. In this paper, we quantitatively demonstrate the advantage of AD in training neural networks. The concept of truncated entropy is introduced to characterize the training property. Specifically, through comprehensive experimental and theoretical analyses conducted on random feature models and two-layer neural networks, we discover that the defined truncated entropy serves as a reliable metric for quantifying the residual loss of random feature models and the training speed of neural networks for both AD and FD methods. Our experimental and theoretical analyses demonstrate that, from a training perspective, AD outperforms FD in solving PDEs.
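
The distinction the abstract draws can be made concrete with a minimal sketch, not taken from the paper: when a network u(x; θ) is trained to satisfy a PDE residual such as u_xx = f, AD differentiates the network exactly at each sample point, while an FD stencil evaluates the network at neighbouring points x ± h and incurs a truncation error. The tiny two-layer model, its random parameters, and the step size h below are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def u(params, x):
    # Tiny two-layer network u(x) = w2 · tanh(w1 * x + b1) + b2 (illustrative only).
    w1, b1, w2, b2 = params
    return jnp.dot(w2, jnp.tanh(w1 * x + b1)) + b2

# Automatic differentiation: derivatives are taken at the sample point itself.
u_x = jax.grad(u, argnums=1)        # du/dx
u_xx_ad = jax.grad(u_x, argnums=1)  # d^2u/dx^2

# Finite differences: the second derivative needs neighbouring points x ± h
# and carries an O(h^2) truncation error.
def u_xx_fd(params, x, h=1e-3):
    return (u(params, x + h) - 2.0 * u(params, x) + u(params, x - h)) / h**2

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (0.5 * jax.random.normal(k1, (8,)),  # w1
          0.5 * jax.random.normal(k2, (8,)),  # b1
          0.5 * jax.random.normal(k3, (8,)),  # w2
          0.0)                                # b2

x0 = 0.3
print("AD:", u_xx_ad(params, x0), " FD:", u_xx_fd(params, x0))
```

In a PINN-style loss, either u_xx_ad or u_xx_fd would be evaluated at collocation points and driven toward f; the paper's analysis concerns how this choice affects the training dynamics, not the implementation details assumed here.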

Source Journal
Journal of Scientific Computing (Mathematics – Applied Mathematics)
CiteScore: 4.00
Self-citation rate: 12.00%
Articles per year: 302
Review time: 4-8 weeks
Journal Description: Journal of Scientific Computing is an international interdisciplinary forum for the publication of papers on state-of-the-art developments in scientific computing and its applications in science and engineering. The journal publishes high-quality, peer-reviewed original papers, review papers, and short communications on scientific computing.