{"title":"Multi-Fidelity Physics-Constrained Neural Networks With Minimax Architecture for Materials Modeling","authors":"Dehao Liu, Pranav Pusarla, Yan Wang","doi":"10.1115/detc2022-91219","DOIUrl":null,"url":null,"abstract":"\n Data sparsity is still the main challenge to apply machine learning models to solve complex scientific and engineering problems. The root cause is the “curse of dimensionality” in training these models. Training algorithms need to explore and exploit in a very high dimensional parameter space to search the optimal parameters for complex models. In this work, a new scheme of multi-fidelity physics-constrained neural networks with minimax architecture is proposed to improve the data efficiency of training neural networks by incorporating physical knowledge as constraints and sampling data with various fidelities. In this new framework, fully-connected neural networks with two levels of fidelities are combined to improve the prediction accuracy. The low-fidelity neural network is used to approximate the low-fidelity data, whereas the high-fidelity neural network is adopted to approximate the correlation function between the low-fidelity and high-fidelity data. To systematically search the optimal weights of various losses for reducing the training time, the Dual-Dimer algorithm is adopted to search high-order saddle points of the minimax optimization problem. The proposed framework is demonstrated with two-dimensional heat transfer, phase transition, and dendritic growth problems, which are fundamental in materials modeling. With the same set of training data, the prediction error of the multi-fidelity physics-constrained neural network with minimax architecture can be two orders of magnitude lower than that of the multi-fidelity neural network with minimax architecture.","PeriodicalId":382970,"journal":{"name":"Volume 2: 42nd Computers and Information in Engineering Conference (CIE)","volume":"185 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Volume 2: 42nd Computers and Information in Engineering Conference (CIE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/detc2022-91219","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Data sparsity remains the main challenge in applying machine learning models to complex scientific and engineering problems. The root cause is the “curse of dimensionality” in training these models: training algorithms must explore and exploit a very high-dimensional parameter space to find the optimal parameters of complex models. In this work, a new scheme of multi-fidelity physics-constrained neural networks with minimax architecture is proposed to improve the data efficiency of training neural networks by incorporating physical knowledge as constraints and sampling data at various fidelities. In this framework, fully connected neural networks at two levels of fidelity are combined to improve prediction accuracy. The low-fidelity neural network approximates the low-fidelity data, whereas the high-fidelity neural network approximates the correlation function between the low-fidelity and high-fidelity data. To systematically search for the optimal weights of the various loss terms and thereby reduce training time, the Dual-Dimer algorithm is adopted to search for high-order saddle points of the resulting minimax optimization problem. The proposed framework is demonstrated on two-dimensional heat transfer, phase transition, and dendritic growth problems, which are fundamental in materials modeling. With the same set of training data, the prediction error of the multi-fidelity physics-constrained neural network with minimax architecture can be two orders of magnitude lower than that of a multi-fidelity neural network with the same minimax architecture but without physics constraints.
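To make the two-network structure and the minimax loss weighting concrete, the sketch below shows one plausible realization in PyTorch. It is not the authors' implementation: the network sizes, the 2-D Laplace equation u_xx + u_yy = 0 standing in for the physics constraint, the synthetic low- and high-fidelity data, the softmax normalization of the loss weights, and the plain alternating gradient ascent/descent used in place of the Dual-Dimer saddle-point search are all illustrative assumptions.

```
# Minimal sketch of a multi-fidelity physics-constrained network with
# minimax loss weighting (PyTorch). All specifics below are assumptions
# for illustration, not the paper's implementation.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, width=32, depth=3):
    """Small fully connected network with tanh activations."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

net_L = mlp(2, 1)  # low-fidelity network y_L(x), fit to cheap data
net_H = mlp(3, 1)  # high-fidelity network models y_H = F(x, y_L(x))

def predict_H(x):
    # High-fidelity prediction via the correlation network.
    return net_H(torch.cat([x, net_L(x)], dim=1))

def pde_residual(x):
    """Residual of the Laplace equation u_xx + u_yy = 0 at points x."""
    x = x.clone().requires_grad_(True)
    u = predict_H(x)
    g = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(g[:, 0].sum(), x, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(g[:, 1].sum(), x, create_graph=True)[0][:, 1]
    return u_xx + u_yy

def truth(x):  # harmonic function used as synthetic high-fidelity truth
    return x[:, :1] ** 2 - x[:, 1:] ** 2

x_L = torch.rand(200, 2); y_L = 0.8 * truth(x_L) + 0.1  # abundant, biased
x_H = torch.rand(20, 2);  y_H = truth(x_H)              # sparse, accurate
x_C = torch.rand(500, 2)                                # collocation points

# Trainable loss weights: the minimax problem maximizes the weighted loss
# over the weights while minimizing it over the network parameters.
log_w = torch.zeros(3, requires_grad=True)
opt_min = torch.optim.Adam(
    list(net_L.parameters()) + list(net_H.parameters()), lr=1e-3)
opt_max = torch.optim.Adam([log_w], lr=1e-2)

for step in range(2000):
    # Softmax keeps the weights positive and bounded, so the inner
    # maximization cannot diverge (a simplification for this sketch).
    w = torch.softmax(log_w, dim=0)
    loss = (w[0] * ((net_L(x_L) - y_L) ** 2).mean()        # LF data loss
            + w[1] * ((predict_H(x_H) - y_H) ** 2).mean()  # HF data loss
            + w[2] * (pde_residual(x_C) ** 2).mean())      # physics loss
    opt_min.zero_grad(); opt_max.zero_grad()
    loss.backward()
    opt_min.step()       # gradient descent in the network parameters
    log_w.grad.neg_()    # flip sign: gradient ascent in the loss weights
    opt_max.step()
```

Note that the paper solves this minimax problem by locating high-order saddle points with the Dual-Dimer algorithm; the alternating Adam updates above are only a crude stand-in that conveys the descend-in-parameters, ascend-in-weights structure.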