Zhang neural networks: an introduction to predictive computations for discretized time-varying matrix problems

IF 2.1 · CAS Zone 2 (Mathematics) · JCR Q1 (Mathematics, Applied)
{"title":"Zhang neural networks: an introduction to predictive computations for discretized time-varying matrix problems","authors":"","doi":"10.1007/s00211-023-01393-5","DOIUrl":null,"url":null,"abstract":"<h3>Abstract</h3> <p>This paper wants to increase our understanding and computational know-how for time-varying matrix problems and Zhang Neural Networks. These neural networks were invented for time or single parameter-varying matrix problems around 2001 in China and almost all of their advances have been made in and most still come from its birthplace. Zhang Neural Network methods have become a backbone for solving discretized sensor driven time-varying matrix problems in real-time, in theory and in on-chip applications for robots, in control theory and other engineering applications in China. They have become the method of choice for many time-varying matrix problems that benefit from or require efficient, accurate and predictive real-time computations. A typical discretized Zhang Neural Network algorithm needs seven distinct steps in its initial set-up. The construction of discretized Zhang Neural Network algorithms starts from a model equation with its associated error equation and the stipulation that the error function decrease exponentially fast. The error function differential equation is then mated with a convergent look-ahead finite difference formula to create a distinctly new multi-step style solver that predicts the future state of the system reliably from current and earlier state and solution data. Matlab codes of discretized Zhang Neural Network algorithms for time varying matrix problems typically consist of one linear equations solve and one recursion of already available data per time step. This makes discretized Zhang Neural network based algorithms highly competitive with ordinary differential equation initial value analytic continuation methods for function given data that are designed to work adaptively. Discretized Zhang Neural Network methods have different characteristics and applicabilities than multi-step ordinary differential equations (ODEs) initial value solvers. These new time-varying matrix methods can solve matrix-given problems from sensor data with constant sampling gaps or from functional equations. To illustrate the adaptability of discretized Zhang Neural Networks and further the understanding of this method, this paper details the seven step set-up process for Zhang Neural Networks and twelve separate time-varying matrix models. It supplies new codes for seven of these. Open problems are mentioned as well as detailed references to recent work on discretized Zhang Neural Networks and time-varying matrix computations. Comparisons are given to standard non-predictive multi-step methods that use initial value problems ODE solvers and analytic continuation methods.</p>","PeriodicalId":49733,"journal":{"name":"Numerische Mathematik","volume":null,"pages":null},"PeriodicalIF":2.1000,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Numerische Mathematik","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s00211-023-01393-5","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

This paper aims to increase our understanding of, and computational know-how for, time-varying matrix problems and Zhang Neural Networks (ZNN). These networks were invented around 2001 in China for time-varying or single-parameter-varying matrix problems, and nearly all advances in the field have come, and still come, from their birthplace. Zhang Neural Network methods have become a backbone for solving discretized, sensor-driven, time-varying matrix problems in real time, in theory, in on-chip applications for robots, and in control theory and other engineering applications in China. They have become the method of choice for many time-varying matrix problems that benefit from or require efficient, accurate, and predictive real-time computations. A typical discretized Zhang Neural Network algorithm needs seven distinct steps in its initial set-up. The construction starts from a model equation with its associated error equation and the stipulation that the error function decrease exponentially fast. The error-function differential equation is then mated with a convergent look-ahead finite difference formula to create a distinctly new multistep-style solver that reliably predicts the future state of the system from current and earlier state and solution data. Matlab codes of discretized Zhang Neural Network algorithms for time-varying matrix problems typically consist of one linear system solve and one recursion on already available data per time step. This makes discretized Zhang Neural Network based algorithms highly competitive with adaptive ordinary differential equation initial-value and analytic continuation methods for function-given data. Discretized Zhang Neural Network methods have different characteristics and applicabilities than multistep ordinary differential equation (ODE) initial-value solvers. These new time-varying matrix methods can solve matrix-given problems from sensor data with constant sampling gaps or from functional equations. To illustrate the adaptability of discretized Zhang Neural Networks and to further the understanding of the method, this paper details the seven-step set-up process for Zhang Neural Networks and twelve separate time-varying matrix models, and it supplies new codes for seven of them. Open problems are mentioned, together with detailed references to recent work on discretized Zhang Neural Networks and time-varying matrix computations. Comparisons are made with standard non-predictive multistep methods that use initial-value problem ODE solvers and analytic continuation methods.
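To make the generic recipe concrete, here is a minimal sketch in Matlab (the language the paper's codes use) for the classical test case of time-varying matrix inversion: the model equation is E(t) = A(t)X(t) - I, the stipulation that the error decay exponentially, Edot(t) = -lambda*E(t), turns into one linear system solve per time step, and a look-ahead recursion predicts the next state from data already in hand. The test matrices, the decay rate lambda = 10, and the simple forward-Euler look-ahead rule are illustrative assumptions only, not the paper's method; the paper's solvers employ higher-order convergent look-ahead difference formulas.

```matlab
% Minimal sketch of a discretized ZNN for time-varying matrix inversion,
% i.e. tracking X(t) with A(t)*X(t) = I. Hypothetical test problem; the
% forward-Euler look-ahead step stands in for the paper's higher-order
% convergent look-ahead difference formulas.

lambda = 10;                 % decay rate in Edot = -lambda*E
tau    = 1e-3;               % constant sampling gap
T      = 2;                  % final time

A    = @(s) [3 + sin(s), 1; 1, 3 + cos(s)];   % time-varying matrix
Adot = @(s) [cos(s), 0; 0, -sin(s)];          % its time derivative

t = 0;
X = inv(A(0));               % start from the exact inverse at t = 0

while t < T - tau/2
    Ak = A(t);  Adk = Adot(t);
    % Model: E(t) = A(t)*X(t) - I.  Stipulating Edot = -lambda*E and
    % substituting Edot = Adot*X + A*Xdot gives one linear system per step:
    %     A*Xdot = -(Adot*X + lambda*(A*X - I)).
    Xdot = Ak \ (-(Adk*X + lambda*(Ak*X - eye(2))));
    % One recursion on already available data predicts the future state:
    X = X + tau*Xdot;        % forward-Euler look-ahead (illustration only)
    t = t + tau;
end

disp(norm(A(t)*X - eye(2)))  % residual of the predicted inverse at time t
```

Even this crude one-step look-ahead exhibits the per-step cost profile the abstract describes: a single linear solve followed by a single recursion on stored data.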

Source journal
Numerische Mathematik (Mathematics, Applied)
CiteScore: 4.10
Self-citation rate: 4.80%
Articles per year: 72
Review time: 6-12 weeks
Journal description: Numerische Mathematik publishes papers of the very highest quality presenting significantly new and important developments in all areas of Numerical Analysis. "Numerical Analysis" is here understood in its most general sense, as that part of Mathematics that covers:
1. The conception and mathematical analysis of efficient numerical schemes actually used on computers (the "core" of Numerical Analysis)
2. Optimization and Control Theory
3. Mathematical Modeling
4. The mathematical aspects of Scientific Computing