Yunong Zhang, Ying Fang, Bolin Liao, Tianjian Qiao, Hongzhou Tan
2015 Sixth International Conference on Intelligent Control and Information Processing (ICICIP), November 2015. DOI: 10.1109/ICICIP.2015.7388156
New DTZNN model for future minimization with cube steady-state error pattern using Taylor finite-difference formula
In this paper, a discrete-time Zhang neural network (DTZNN) model, discretized from the continuous-time Zhang neural network, is proposed and investigated for performing online future minimization (OFM). To approximate the first-order derivative more accurately and to discretize the continuous-time Zhang neural network more effectively, a new Taylor-type numerical differentiation formula, together with the optimal sampling-gap rule, is presented and used to obtain the Taylor-type DTZNN model. For comparison, the Euler-type DTZNN model and Newton iteration are also presented, and an interesting link between them is found. Moreover, theoretical results on stability and convergence are presented, which show that the steady-state residual errors of the presented Taylor-type DTZNN model, Euler-type DTZNN model, and Newton iteration follow patterns of O(t³), O(t²), and O(t), respectively, with t denoting the sampling gap. Numerical experimental results further substantiate the effectiveness and advantages of the Taylor-type DTZNN model for solving the OFM problem.
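The abstract states the error-order pattern but not the model equations themselves. As an illustrative sketch only (not the paper's actual formulas), the toy scalar problem below contrasts a Euler-type discrete ZNN update with plain Newton iteration on the time-varying minimization f(x, t) = (x − sin t)²/2, whose exact minimizer is x*(t) = sin t. The update form, gain h, and step schedule are assumptions drawn from the general ZNN literature, chosen to exhibit the O(t²)-vs-O(t) steady-state error gap claimed for Euler-type DTZNN versus Newton iteration.

```python
import math

# Toy time-varying minimization: f(x, t) = (x - sin t)^2 / 2,
# so grad f = x - sin t, the Hessian is 1, and the minimizer is x*(t) = sin t.
# Euler-type discrete ZNN update (illustrative, assumed from general ZNN theory):
#   x_{k+1} = x_k + g*cos(t_k) - h*(x_k - sin(t_k)),   with fixed gain h.
# Newton iteration jumps to the current minimizer, x_{k+1} = sin(t_k),
# and therefore lags one sampling gap behind the moving target.

def run(g, steps=2000, h=0.3):
    """Return steady-state residual errors (euler, newton) for sampling gap g."""
    t, x_euler, x_newton = 0.0, 0.0, 0.0
    err_euler = err_newton = 0.0
    for k in range(steps):
        x_euler = x_euler + g * math.cos(t) - h * (x_euler - math.sin(t))
        x_newton = math.sin(t)          # Newton step computed at time t_k
        t += g
        if k > steps // 2:              # measure only after transients decay
            # residual at the *next* time instant: this is the "future" error
            err_euler = max(err_euler, abs(x_euler - math.sin(t)))
            err_newton = max(err_newton, abs(x_newton - math.sin(t)))
    return err_euler, err_newton

e1, n1 = run(0.01)
e2, n2 = run(0.005)
# Halving g should shrink the Euler-type error ~4x (O(g^2) pattern)
# but the Newton-iteration error only ~2x (O(g) pattern).
print(e1 / e2, n1 / n2)
```

Running this shows the Euler-type error ratio near 4 and the Newton ratio near 2 when the gap is halved, matching the quadratic-versus-linear steady-state pattern; the paper's Taylor-type model would analogously exhibit a cubic (ratio ≈ 8) pattern, which this sketch does not attempt to reproduce since its differentiation formula is not given in the abstract.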