Evolutionary multi-objective physics-informed neural networks: The MOPINNs approach
Hugo Carrillo, Taco de Wolff, Luis Martí, Nayat Sanchez-Pi
AI Communications, published 2023-12-15
DOI: 10.3233/aic-230073 (https://doi.org/10.3233/aic-230073)
Citations: 0
Abstract
The physics-informed neural network (PINN) formulation allows a neural network to be trained on both the training data and prior domain knowledge about the physical system underlying the data. In particular, it has one loss term for the data and one for the physics, where the latter measures the deviation from a partial differential equation describing the system. Conventionally, the two loss terms are combined in a weighted sum whose weights are usually chosen manually. It is known that balancing the different loss terms can make the training process more efficient. In addition, it is necessary to find a suitable neural network architecture in order to obtain a hypothesis set in which the PINN is easier to train. In our work, we propose a multi-objective optimization approach to find the optimal value of the loss-function weighting, as well as the optimal activation function, number of layers, and number of neurons per layer. We validate our results on the Poisson, Burgers, and advection-diffusion equations and show that we are able to find accurate approximations of the solutions using optimal hyperparameters.
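The weighted-sum loss that the abstract refers to can be made concrete with a short sketch. The following PyTorch snippet is not the authors' implementation; the network architecture, the Poisson forcing term, and the fixed weight w are illustrative assumptions. It shows the conventional single-objective PINN training that MOPINNs replaces with a multi-objective search over the weight and the architecture.

```python
# Minimal sketch of a PINN composite loss for the 1D Poisson problem u''(x) = f(x).
# Assumptions (not from the paper): 2 hidden layers of 20 tanh units, the forcing
# term f, and a fixed weight w in the weighted-sum loss.
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def f(x):
    # Assumed forcing term: f(x) = -pi^2 sin(pi x), whose solution is u(x) = sin(pi x).
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

def pinn_loss(x_data, u_data, x_coll, w=0.5):
    # Data term: mismatch with observed/boundary values.
    loss_data = torch.mean((net(x_data) - u_data) ** 2)

    # Physics term: residual u''(x) - f(x) at collocation points,
    # computed with automatic differentiation.
    x = x_coll.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    loss_pde = torch.mean((d2u - f(x)) ** 2)

    # Conventional single-objective PINN training: fixed weighted sum.
    return w * loss_data + (1.0 - w) * loss_pde

# Example usage with boundary data u(0) = u(1) = 0 and random collocation points.
x_data = torch.tensor([[0.0], [1.0]])
u_data = torch.zeros_like(x_data)
x_coll = torch.rand(64, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = pinn_loss(x_data, u_data, x_coll)
    loss.backward()
    opt.step()
```

In the MOPINNs setting described in the abstract, the weight w, the activation function, the number of layers, and the neurons per layer would be decision variables of the evolutionary multi-objective search rather than constants fixed by hand.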
About the journal:
AI Communications is a journal on artificial intelligence (AI) which has a close relationship to EurAI (European Association for Artificial Intelligence, formerly ECCAI). It covers the whole AI community: scientific institutions as well as commercial and industrial companies.
AI Communications aims to enhance contacts and information exchange between AI researchers and developers, and to provide supranational information to those concerned with AI and advanced information processing. AI Communications publishes refereed articles concerning scientific and technical AI procedures, provided they are of sufficient interest to a large readership of both scientific and practical background. In addition, it contains high-level background material, both at the technical level and at the level of opinions, policies, and news.