Physics-Guided, Physics-Informed, and Physics-Encoded Neural Networks and Operators in Scientific Computing: Fluid and Solid Mechanics
S. A. Faroughi, Nikhil M. Pawar, Célio Fernandes, Maziar Raissi, Subasish Das, Nima K. Kalantari, S. K. Mahjour
Journal of Computing and Information Science in Engineering, published 2024-01-08
DOI: 10.1115/1.4064449 (https://doi.org/10.1115/1.4064449)
Citations: 2
Abstract
Advancements in computing power have recently made it possible to use machine learning and deep learning to push scientific computing forward in a range of disciplines, such as fluid mechanics, solid mechanics, and materials science. The incorporation of neural networks is particularly crucial in this hybridization process. Due to their intrinsic architecture, conventional neural networks cannot be successfully trained and scoped when data are sparse, which is the case in many scientific and engineering domains. Nonetheless, neural networks provide a solid foundation for respecting physics-driven or knowledge-based constraints during training. Generally speaking, there are three distinct neural network frameworks for enforcing the underlying physics: (i) physics-guided neural networks (PgNNs), (ii) physics-informed neural networks (PiNNs), and (iii) physics-encoded neural networks (PeNNs). These methods offer distinct advantages for accelerating the numerical modeling of complex multiscale, multi-physics phenomena. In addition, recent developments in neural operators (NOs) add another dimension to these new simulation paradigms, especially when real-time prediction of complex multi-physics systems is required. All of these models also come with their own drawbacks and limitations that call for further fundamental research. This study presents a review of the four neural network frameworks (i.e., PgNNs, PiNNs, PeNNs, and NOs) used in scientific computing research. The state-of-the-art architectures and their applications are reviewed, limitations are discussed, and future research opportunities are presented in terms of improving algorithms, considering causalities, expanding applications, and coupling scientific and deep learning solvers.
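To make the PiNN idea mentioned in the abstract concrete, the following minimal PyTorch sketch (not taken from the paper; the network size, the toy ODE du/dx + u = 0 with u(0) = 1, and the training settings are illustrative assumptions) trains a small network by jointly penalizing the residual of the governing equation and the initial condition instead of fitting labeled data.

```python
# Minimal, hypothetical PiNN sketch: enforce du/dx + u = 0 with u(0) = 1
# (analytical solution u(x) = exp(-x)) through the loss function.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fully connected network u_theta(x) approximating the solution u(x).
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

x_colloc = torch.linspace(0.0, 2.0, 100).reshape(-1, 1)  # collocation points
x0 = torch.zeros(1, 1)                                   # initial-condition point

for step in range(5000):
    optimizer.zero_grad()

    # Physics (residual) loss: enforce du/dx + u = 0 at the collocation points,
    # using automatic differentiation to obtain du/dx.
    x = x_colloc.clone().requires_grad_(True)
    u = net(x)
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    loss_physics = ((du_dx + u) ** 2).mean()

    # Initial-condition loss: u(0) = 1.
    loss_ic = ((net(x0) - 1.0) ** 2).mean()

    loss = loss_physics + loss_ic
    loss.backward()
    optimizer.step()

# The trained network should approximate u(x) = exp(-x).
print(net(torch.tensor([[1.0]])).item())  # expect roughly exp(-1) ≈ 0.368
```

PgNNs and PeNNs differ in where the physics enters (pre-processed training data versus the network architecture itself), but the loss-based constraint above is the mechanism most commonly associated with PiNNs.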
Journal description:
The ASME Journal of Computing and Information Science in Engineering (JCISE) publishes articles related to Algorithms, Computational Methods, Computing Infrastructure, Computer-Interpretable Representations, Human-Computer Interfaces, Information Science, and/or System Architectures that aim to improve some aspect of the product and system lifecycle (e.g., design, manufacturing, operation, maintenance, disposal, recycling). Applications considered in JCISE manuscripts should be relevant to the mechanical engineering discipline. Papers can focus on fundamental research leading to new methods or on the adaptation of existing methods to new applications.
Scope: Advanced Computing Infrastructure; Artificial Intelligence; Big Data and Analytics; Collaborative Design; Computer Aided Design; Computer Aided Engineering; Computer Aided Manufacturing; Computational Foundations for Additive Manufacturing; Computational Foundations for Engineering Optimization; Computational Geometry; Computational Metrology; Computational Synthesis; Conceptual Design; Cybermanufacturing; Cyber Physical Security for Factories; Cyber Physical System Design and Operation; Data-Driven Engineering Applications; Engineering Informatics; Geometric Reasoning; GPU Computing for Design and Manufacturing; Human Computer Interfaces/Interactions; Industrial Internet of Things; Knowledge Engineering; Information Management; Inverse Methods for Engineering Applications; Machine Learning for Engineering Applications; Manufacturing Planning; Manufacturing Automation; Model-based Systems Engineering; Multiphysics Modeling and Simulation; Multiscale Modeling and Simulation; Multidisciplinary Optimization; Physics-Based Simulations; Process Modeling for Engineering Applications; Qualification, Verification and Validation of Computational Models; Symbolic Computing for Engineering Applications; Tolerance Modeling; Topology and Shape Optimization; Virtual and Augmented Reality Environments; Virtual Prototyping