Simulation Approximators Using Linear and Nonlinear Integration Neural Networks

Yoshiharu Iwata, Kouji Fujishiro, Hidefumi Wakamatsu
Shisutemu Seigyo Jōhō Gakkai ronbunshi
DOI: 10.5687/iscie.36.243 · Published: 2023-08-15 · Citations: 1

Abstract

Constructing machine-learning approximators for simulations, such as the finite element method, faces the twin problems of reducing training-data generation time and achieving approximation accuracy. Hybrid neural networks have been proposed as fast simulation approximators that address this problem. Even when an approximator is built as a simple perceptron with a linear activation function, derived from deductive knowledge via conventional approximation techniques such as multiple regression analysis, the range of phenomena that deductive knowledge can model is limited in simulations of complex structures. As a result, the approximator's predictions contain errors. Hybrid neural networks address this by letting a neural network learn the prediction errors and act as a correction approximator, so that the combined approximator can account for effects that multiple regression analysis cannot express. This paper proposes a neural network with a structure that integrates these approximators. The first proposed Hybrid Neural Network (HNN) approximator trains a linear approximator first; a nonlinear approximator then learns the error part. In contrast, the Integration Neural Network (INN) learns the linear and nonlinear approximators simultaneously, optimizing the ratio between them through training. This method allows INNs to improve the accuracy of the approximators and to reduce the conflict between the amount of training data and accuracy.
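The two-stage HNN idea described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the simulated response, the network size, and the training loop are all assumptions. A linear approximator (multiple regression) is fitted first, and a small one-hidden-layer network then learns the residual, so the combined prediction is linear part plus learned correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulation" response: a linear trend plus a nonlinear effect
# that multiple regression alone cannot capture.
X = rng.uniform(-1, 1, size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * np.sin(3 * X[:, 0] * X[:, 1])

# Stage 1 (HNN): linear approximator, as in multiple regression analysis.
A = np.hstack([X, np.ones((len(X), 1))])        # add intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
y_lin = A @ w

# Stage 2 (HNN): a small one-hidden-layer network learns the residual.
residual = y - y_lin
H = 16                                          # hidden units (arbitrary choice)
W1 = rng.normal(0, 0.1, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, H);      b2 = 0.0
lr = 0.05
for _ in range(2000):                           # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - residual
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)       # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Combined HNN prediction: linear part + learned nonlinear correction.
y_hnn = y_lin + np.tanh(X @ W1 + b1) @ W2 + b2
mse_lin = np.mean((y - y_lin) ** 2)
mse_hnn = np.mean((y - y_hnn) ** 2)
```

The INN variant described in the abstract would instead update the linear weights and the network weights jointly in one training loop, letting training itself balance how much each part contributes, rather than freezing the linear stage before fitting the correction.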