A simple remedy for failure modes in physics informed neural networks
Ghazal Farhani, Nima Hosseini Dashtbayaz, Alexander Kazachek, Boyu Wang
Neural Networks, Volume 183, Article 106963. Published 2024-12-10.
DOI: 10.1016/j.neunet.2024.106963
Citations: 0
Abstract
Physics-informed neural networks (PINNs) have shown promising results in solving a wide range of problems involving partial differential equations (PDEs). Nevertheless, there are several instances where PINNs fail as PDEs become more complex. In particular, when PDE coefficients grow larger or PDEs become increasingly nonlinear, PINNs struggle to converge to the true solution. A noticeable discrepancy emerges between the convergence speed of the PDE loss and that of the initial/boundary conditions loss, preventing PINNs from effectively learning the true solutions to these PDEs. In the present work, leveraging neural tangent kernels (NTKs), we investigate the training dynamics of PINNs. Our theoretical analysis reveals that when PINNs are trained using gradient descent with momentum (GDM), the gap in convergence rates between the two loss terms is significantly reduced, thereby enabling the learning of the exact solution. We also examine why training a model with the Adam optimizer can accelerate convergence and reduce the effect of this discrepancy. Our numerical experiments validate that sufficiently wide networks trained with GDM and Adam yield desirable solutions for more complex PDEs.
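The convergence-rate gap described above can be illustrated on a toy model. Below is a minimal sketch (not the paper's analysis) of a 2-D quadratic whose two eigenvalues stand in for an ill-conditioned NTK spectrum: a stiff mode (like the PDE residual loss) and a flat mode (like the initial/boundary loss). The eigenvalues, step sizes, and momentum coefficient are illustrative choices using the classical heavy-ball tuning for quadratics, not values from the paper.

```python
import math

# Hypothetical NTK eigenvalues: one stiff direction, one flat direction.
LAM = [100.0, 1.0]
X0 = [1.0, 1.0]  # initial error along each eigendirection


def loss(x):
    return 0.5 * sum(l * xi * xi for l, xi in zip(LAM, x))


def run_gd(steps, lr):
    """Plain gradient descent: the step size is capped by the stiff mode,
    so the flat mode converges slowly."""
    x = list(X0)
    for _ in range(steps):
        x = [xi - lr * l * xi for l, xi in zip(LAM, x)]
    return x


def run_gdm(steps, lr, beta):
    """Heavy-ball momentum (GDM): momentum narrows the gap between the
    fast and slow convergence rates of the two modes."""
    x, v = list(X0), [0.0, 0.0]
    for _ in range(steps):
        v = [beta * vi - lr * l * xi for l, xi, vi in zip(LAM, x, v)]
        x = [xi + vi for xi, vi in zip(x, v)]
    return x


kappa = max(LAM) / min(LAM)                      # condition number = 100
lr_gd = 2.0 / (max(LAM) + min(LAM))              # optimal GD step size
lr_mom = 4.0 / (math.sqrt(max(LAM)) + math.sqrt(min(LAM))) ** 2
beta = ((math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)) ** 2

l_gd = loss(run_gd(100, lr_gd))
l_gdm = loss(run_gdm(100, lr_mom, beta))
print(f"GD  loss after 100 steps: {l_gd:.3e}")
print(f"GDM loss after 100 steps: {l_gdm:.3e}")
```

With both optimizers optimally tuned, plain GD contracts at roughly (kappa - 1)/(kappa + 1) per step, while heavy-ball momentum contracts at roughly (sqrt(kappa) - 1)/(sqrt(kappa) + 1), so after the same number of steps the momentum run is many orders of magnitude closer to the solution, consistent with the abstract's claim that GDM mitigates the rate discrepancy.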
Journal description:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.