Deep Learning with Domain Randomization for Optimal Filtering
Matthew L. Weiss, R. Paffenroth, J. Whitehill, J. Uzarski
2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), December 2019
DOI: 10.1109/ICMLA.2019.00288 (https://doi.org/10.1109/ICMLA.2019.00288)
Citations: 8
Abstract
Filtering is the process of recovering a signal x(t) from noisy measurements z(t). One common filter is the Kalman Filter, which is provably the conditional minimum-variance estimator of x(t) when the measurements are a Gaussian random process. In practice, however, (a) the measurements are not necessarily Gaussian, and (b) estimating the measurement covariance is problematic and often requires tuning via cross-validation and domain knowledge. To address the Kalman Filter's suboptimal performance when non-Gaussian noise is present, we train a deep autoencoder to map the noisy measurements to a "learned" noise-process covariance R(t) and measurements z(t), which are passed to a Kalman Filter. The Kalman Filter's output is then mapped back to the original measurement space via the decoder portion of the autoencoder, where it is compared with the original measurements. Training this autoencoder-Kalman Filter (AEKF) via domain randomization on simulated noisy sensor responses, we show that the AEKF's estimate of x(t) achieves, in the majority of cases, a lower MSE than both a standard Kalman Filter and a Long Short-Term Memory (LSTM) recurrent neural network. Most significantly, the AEKF outperforms all methods in the case of simulated Cauchy noise.
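The pipeline the abstract describes lends itself to a compact sketch. Below is a minimal, illustrative PyTorch implementation, not the authors' code: it assumes a scalar random-walk state model, a window-based MLP encoder/decoder, and a fixed process-noise variance, and the layer sizes, window length, and training signal are all assumptions for illustration. The encoder emits both cleaned measurements and a time-varying log-variance for R(t), a differentiable scalar Kalman Filter runs on those outputs, and the decoder maps the filtered sequence back to measurement space so the reconstruction can be compared with the original measurements.

```python
# Minimal AEKF sketch (illustrative; dimensions, layer sizes, and training
# details are assumptions, not the paper's configuration).
import torch
import torch.nn as nn

class AEKF(nn.Module):
    def __init__(self, window=32, hidden=64):
        super().__init__()
        # Encoder maps a window of noisy measurements to per-step "cleaned"
        # measurements z_hat and a learned per-step log-variance log R(t).
        self.encoder = nn.Sequential(
            nn.Linear(window, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * window),
        )
        # Decoder maps the Kalman Filter output back to measurement space.
        self.decoder = nn.Sequential(
            nn.Linear(window, hidden), nn.ReLU(),
            nn.Linear(hidden, window),
        )
        self.q = 1e-3  # assumed process-noise variance (scalar state)

    def kalman(self, z, R):
        # Scalar random-walk Kalman Filter with the time-varying measurement
        # variance R[t] supplied by the encoder; differentiable end to end.
        x = z[..., :1]                  # initialize state at first measurement
        p = torch.ones_like(x)          # initial state variance
        out = []
        for t in range(z.shape[-1]):
            p = p + self.q                       # predict
            k = p / (p + R[..., t:t + 1])        # Kalman gain
            x = x + k * (z[..., t:t + 1] - x)    # update
            p = (1 - k) * p
            out.append(x)
        return torch.cat(out, dim=-1)

    def forward(self, z_noisy):
        z_hat, log_R = self.encoder(z_noisy).chunk(2, dim=-1)
        x_hat = self.kalman(z_hat, log_R.exp())  # filtered state sequence
        return self.decoder(x_hat)               # back to measurement space

# Domain-randomized training loop: noise parameters are redrawn each batch.
model = AEKF()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    clean = torch.sin(torch.linspace(0.0, 6.28, 32)).repeat(16, 1)
    scale = torch.empty(16, 1).uniform_(0.05, 0.5)   # randomized noise level
    noisy = clean + scale * torch.randn_like(clean)  # Gaussian here; the paper
    recon = model(noisy)                             # also simulates Cauchy noise
    loss = nn.functional.mse_loss(recon, noisy)      # compare decoded output
    opt.zero_grad(); loss.backward(); opt.step()     # with original measurements
```

In this domain-randomization loop, the noise scale is redrawn every batch (the paper additionally simulates heavier-tailed noise such as Cauchy), so the encoder is pushed to infer a measurement variance R(t) that generalizes across noise regimes rather than being hand-tuned.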