Deep Subspace Encoders for Nonlinear System Identification

G. Beintema, M. Schoukens, R. Tóth
{"title":"Deep Subspace Encoders for Nonlinear System Identification","authors":"G. Beintema, M. Schoukens, R. T'oth","doi":"10.48550/arXiv.2210.14816","DOIUrl":null,"url":null,"abstract":"Using Artificial Neural Networks (ANN) for nonlinear system identification has proven to be a promising approach, but despite of all recent research efforts, many practical and theoretical problems still remain open. Specifically, noise handling and models, issues of consistency and reliable estimation under minimisation of the prediction error are the most severe problems. The latter comes with numerous practical challenges such as explosion of the computational cost in terms of the number of data samples and the occurrence of instabilities during optimization. In this paper, we aim to overcome these issues by proposing a method which uses a truncated prediction loss and a subspace encoder for state estimation. The truncated prediction loss is computed by selecting multiple truncated subsections from the time series and computing the average prediction loss. To obtain a computationally efficient estimation method that minimizes the truncated prediction loss, a subspace encoder represented by an artificial neural network is introduced. This encoder aims to approximate the state reconstructability map of the estimated model to provide an initial state for each truncated subsection given past inputs and outputs. By theoretical analysis, we show that, under mild conditions, the proposed method is locally consistent, increases optimization stability, and achieves increased data efficiency by allowing for overlap between the subsections. Lastly, we provide practical insights and user guidelines employing a numerical example and state-of-the-art benchmark results.","PeriodicalId":13196,"journal":{"name":"IEEE Robotics Autom. Mag.","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Robotics Autom. Mag.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2210.14816","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Using Artificial Neural Networks (ANN) for nonlinear system identification has proven to be a promising approach, but despite all recent research efforts, many practical and theoretical problems remain open. Specifically, the handling and modeling of noise, together with issues of consistency and reliable estimation under minimization of the prediction error, are the most severe problems. The latter comes with numerous practical challenges, such as the explosion of the computational cost with the number of data samples and the occurrence of instabilities during optimization. In this paper, we aim to overcome these issues by proposing a method that uses a truncated prediction loss and a subspace encoder for state estimation. The truncated prediction loss is computed by selecting multiple truncated subsections from the time series and computing the average prediction loss over them. To obtain a computationally efficient estimation method that minimizes the truncated prediction loss, a subspace encoder represented by an artificial neural network is introduced. This encoder aims to approximate the state reconstructability map of the estimated model, providing an initial state for each truncated subsection given past inputs and outputs. Through theoretical analysis, we show that, under mild conditions, the proposed method is locally consistent, increases optimization stability, and achieves increased data efficiency by allowing overlap between the subsections. Lastly, we provide practical insights and user guidelines using a numerical example and state-of-the-art benchmark results.
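
To make the described scheme concrete, the following is a minimal, illustrative PyTorch sketch of a truncated prediction loss combined with a neural subspace encoder, in the spirit of the abstract above. It is not the authors' implementation: the class name SubspaceEncoderModel, the network sizes, the past-window length n_past, the truncation length T, and the number of sampled subsections are assumptions made for illustration, and the data are assumed to be single-input single-output float tensors u and y of length N.

# Minimal sketch (illustrative, not the authors' released code) of a
# truncated prediction loss with a neural subspace encoder.
import torch
import torch.nn as nn

class SubspaceEncoderModel(nn.Module):
    def __init__(self, nx=4, n_past=10, hidden=64):
        super().__init__()
        self.nx, self.n_past = nx, n_past
        # Encoder: maps a window of past inputs/outputs to an initial state
        # estimate, approximating the reconstructability map of the model.
        self.encoder = nn.Sequential(
            nn.Linear(2 * n_past, hidden), nn.Tanh(), nn.Linear(hidden, nx))
        # State transition f(x, u) and output map h(x) of the estimated model.
        self.f = nn.Sequential(nn.Linear(nx + 1, hidden), nn.Tanh(),
                               nn.Linear(hidden, nx))
        self.h = nn.Linear(nx, 1)

    def truncated_loss(self, u, y, T=50, n_sections=256):
        N = len(y)
        # Sample (possibly overlapping) start indices of the subsections.
        starts = torch.randint(self.n_past, N - T, (n_sections,))
        # Encoder input: the n_past past inputs and outputs of each subsection.
        past = torch.stack([torch.cat((u[s - self.n_past:s],
                                       y[s - self.n_past:s]))
                            for s in starts.tolist()])
        x = self.encoder(past)              # initial states, (n_sections, nx)
        loss = 0.0
        for k in range(T):                  # simulate T steps ahead
            y_hat = self.h(x).squeeze(-1)
            loss = loss + ((y_hat - y[starts + k]) ** 2).mean()
            u_k = u[starts + k].unsqueeze(-1)
            x = self.f(torch.cat((x, u_k), dim=-1))
        return loss / T

In training, one would typically draw a fresh batch of subsections at every iteration, call loss.backward() and take an optimizer step; allowing the subsections to overlap is what yields the data-efficiency gain mentioned in the abstract, while the short truncation length keeps the simulation cost and optimization instabilities in check.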