Deep Learning with Domain Randomization for Optimal Filtering

Matthew L. Weiss, R. Paffenroth, J. Whitehill, J. Uzarski
{"title":"Deep Learning with Domain Randomization for Optimal Filtering","authors":"Matthew L. Weiss, R. Paffenroth, J. Whitehill, J. Uzarski","doi":"10.1109/ICMLA.2019.00288","DOIUrl":null,"url":null,"abstract":"Filtering is the process of recovering a signal, x(t), from noisy measurements z(t). One common filter is the Kalman Filter, which is proven to be the conditional minimum variance estimator of x(t) when the measurements are a Gaussian random processes. However, in practice (a) the measurements are not necessarily Gaussian and (b) an estimation of the measurement covariance is problematic and often tuned using cross-validation and domain knowledge. In order to address the Kalman Filter's suboptimal performance in situations where non-Gaussian noise is present, we train a deep autoencoder to learn a mapping of the noisy measurements to a \"learned\" noise process covariance R(t) and measurements z(t), which are passed to a Kalman Filter. The Kalman Filter's output is then mapped to the original measurement space via the decoder portion of the autoencoder, where this output and original measurements are compared. Training this autoencoder-Kalman Filter (AEKF) via domain randomization on simulated noisy sensor responses, we show the AEKF's estimate of x(t), in the majority of cases, achieves a lower MSE than both a standard Kalman Filter and a Long Short-Term Memory (LSTM) recurrent deep neural network. 
Most significantly, the AEKF outperforms all methods in the case of simulated Cauchy noise.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2019.00288","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Filtering is the process of recovering a signal, x(t), from noisy measurements z(t). One common filter is the Kalman Filter, which is proven to be the conditional minimum variance estimator of x(t) when the measurements are a Gaussian random process. However, in practice (a) the measurements are not necessarily Gaussian and (b) estimating the measurement covariance is problematic and often requires tuning via cross-validation and domain knowledge. To address the Kalman Filter's suboptimal performance in the presence of non-Gaussian noise, we train a deep autoencoder to map the noisy measurements to a "learned" noise process covariance R(t) and measurements z(t), which are passed to a Kalman Filter. The Kalman Filter's output is then mapped back to the original measurement space via the decoder portion of the autoencoder, where this output and the original measurements are compared. Training this autoencoder-Kalman Filter (AEKF) via domain randomization on simulated noisy sensor responses, we show that the AEKF's estimate of x(t) achieves, in the majority of cases, a lower MSE than both a standard Kalman Filter and a Long Short-Term Memory (LSTM) recurrent deep neural network. Most significantly, the AEKF outperforms all methods in the case of simulated Cauchy noise.
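To make the Kalman Filter component concrete, the following is a minimal sketch (not the authors' implementation) of a scalar Kalman filter that accepts a per-step measurement variance r[t]. In the AEKF, this per-step R(t) would be supplied by the encoder rather than hand-tuned; here it is simply fixed, and the state model, process noise q, and all variable names are illustrative assumptions.

```python
import numpy as np

def kalman_filter_1d(z, r, q=1e-4, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a per-step measurement variance r[t].

    z  : measurements z(t)
    r  : measurement-noise variance at each step (in the AEKF this role
         is played by the "learned" covariance R(t) from the encoder)
    q  : process-noise variance (assumed constant here)
    """
    x, p = x0, p0
    estimates = []
    for zt, rt in zip(z, r):
        # Predict step (random-walk state model, F = 1)
        p = p + q
        # Update step
        k = p / (p + rt)        # Kalman gain
        x = x + k * (zt - x)    # state estimate
        p = (1.0 - k) * p       # error covariance
        estimates.append(x)
    return np.array(estimates)

# Constant signal x(t) = 1 corrupted by Gaussian measurement noise
rng = np.random.default_rng(0)
true_x = np.ones(500)
z = true_x + rng.normal(0.0, 0.5, size=500)
x_hat = kalman_filter_1d(z, r=np.full(500, 0.25))
```

On this toy signal the filtered estimate has a lower MSE against the true signal than the raw measurements, which is the baseline behavior the AEKF aims to preserve while additionally learning R(t) under non-Gaussian noise.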