{"title":"LatentPINNs: Generative physics-informed neural networks via a latent representation learning","authors":"Mohammad H. Taufik, Tariq Alkhalifah","doi":"10.1016/j.aiig.2025.100115","DOIUrl":null,"url":null,"abstract":"<div><div>Physics-informed neural networks (PINNs) are promising to replace conventional mesh-based partial differential equation (PDE) solvers by offering more accurate and flexible PDE solutions. However, PINNs are hampered by the relatively slow convergence and the need to perform additional, potentially expensive training for new PDE parameters. To solve this limitation, we introduce LatentPINN, a framework that utilizes latent representations of the PDE parameters as additional (to the coordinates) inputs into PINNs and allows for training over the distribution of these parameters. Motivated by the recent progress on generative models, we promote using latent diffusion models to learn compressed latent representations of the distribution of PDE parameters as they act as input parameters for NN functional solutions. We use a two-stage training scheme in which, in the first stage, we learn the latent representations for the distribution of PDE parameters. In the second stage, we train a physics-informed neural network over inputs given by randomly drawn samples from the coordinate space within the solution domain and samples from the learned latent representation of the PDE parameters. Considering their importance in capturing evolving interfaces and fronts in various fields, we test the approach on a class of level set equations given, for example, by the nonlinear Eikonal equation. We share results corresponding to three Eikonal parameters (velocity models) sets. The proposed method performs well on new phase velocity models without the need for any additional training.</div></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"6 1","pages":"Article 100115"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence in Geosciences","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666544125000115","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Physics-informed neural networks (PINNs) show promise in replacing conventional mesh-based partial differential equation (PDE) solvers by offering more accurate and flexible PDE solutions. However, PINNs are hampered by relatively slow convergence and the need to perform additional, potentially expensive, training for new PDE parameters. To address this limitation, we introduce LatentPINN, a framework that uses latent representations of the PDE parameters as additional inputs (alongside the coordinates) to PINNs and allows training over the distribution of these parameters. Motivated by recent progress in generative models, we advocate using latent diffusion models to learn compressed latent representations of the distribution of PDE parameters, which then serve as input parameters for the neural network (NN) functional solutions. We use a two-stage training scheme: in the first stage, we learn the latent representations of the distribution of PDE parameters; in the second stage, we train a physics-informed neural network on inputs given by samples drawn randomly from the coordinate space within the solution domain and samples from the learned latent representation of the PDE parameters. Considering their importance in capturing evolving interfaces and fronts in various fields, we test the approach on a class of level set equations, represented here by the nonlinear Eikonal equation. We share results for three sets of Eikonal parameters (velocity models). The proposed method performs well on new phase velocity models without any additional training.
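For reference, the Eikonal equation mentioned in the abstract relates a traveltime field $T(\mathbf{x})$ to the phase velocity $v(\mathbf{x})$; in its standard isotropic form,

$$|\nabla T(\mathbf{x})| = \frac{1}{v(\mathbf{x})}, \qquad T(\mathbf{x}_s) = 0,$$

where $\mathbf{x}_s$ is the source location. The velocity model $v$ plays the role of the PDE parameter whose latent representation is learned in the first training stage.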
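The following is a minimal sketch of what the second training stage could look like, under the assumptions that a latent code z has already been produced by the pretrained (and frozen) latent diffusion autoencoder from stage one, and that the velocity at each collocation point is available (in the paper's setting it would come from the PDE parameter the latent code represents). All function names, network sizes, and the random stand-in data below are illustrative, not the authors' implementation.

```python
# Sketch of stage two of LatentPINN-style training: a PINN that takes
# coordinates concatenated with a latent code and minimizes the Eikonal
# residual |grad T| - 1/v over a batch of collocation points.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a fully connected network as a list of (weights, biases)."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (din, dout)) * jnp.sqrt(2.0 / din)
        params.append((w, jnp.zeros(dout)))
    return params

def traveltime(params, x, z):
    """PINN ansatz T(x; z): coordinates x concatenated with latent code z."""
    h = jnp.concatenate([x, z])
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]  # scalar traveltime

def eikonal_residual(params, x, z, v):
    """Eikonal residual |grad_x T| - 1/v at a single collocation point."""
    grad_T = jax.grad(traveltime, argnums=1)(params, x, z)
    return jnp.linalg.norm(grad_T) - 1.0 / v

def loss_fn(params, xs, zs, vs):
    """Mean squared Eikonal residual over a batch of samples."""
    res = jax.vmap(eikonal_residual, in_axes=(None, 0, 0, 0))(params, xs, zs, vs)
    return jnp.mean(res ** 2)

# Example: 2-D coordinates, 8-dimensional latent code (sizes are arbitrary).
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = init_mlp(k1, [2 + 8, 64, 64, 1])
xs = jax.random.uniform(k2, (128, 2))   # collocation points in the domain
zs = jax.random.normal(k3, (128, 8))    # stand-in for encoder latent codes
vs = jnp.full(128, 2.0)                 # stand-in velocity at each point
loss, grads = jax.value_and_grad(loss_fn)(params, xs, zs, vs)
```

Because the latent code enters the network only as an extra input, evaluating the trained model on a new velocity model reduces to encoding it once and querying the same network, which is the mechanism behind the "no additional training" claim in the abstract.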