Continuous Inference of Time Recurrent Neural Networks for Field Oriented Control
Felix Kreutz, Daniel Scholz, Julian Hille, Huang Jiaxin, Florian Hauer, Klaus Knobloch, C. Mayr
2023 IEEE Conference on Artificial Intelligence (CAI), June 2023
DOI: 10.1109/CAI54212.2023.00119
Citations: 0
Abstract
Deep recurrent networks can be computed as an unrolled computation graph over a defined time window. In theory, the unrolled network and a continuous-time recurrent computation are equivalent. However, we encountered a shift in accuracy for models based on LSTM, GRU, and SNN cells when switching from unrolled computation during training to continuous stateful inference without state resets. In this work, we evaluate these time-recurrent neural network approaches based on the error introduced by time-continuous inference. This error is small when the model generalizes well in the time domain, and we show that some training setups are favourable for this on the chosen example use case. A real-time-critical motor position prediction task serves as the reference, phrased as a time series regression problem. Time-continuous stateful inference for time-recurrent neural networks benefits embedded systems by reducing the need for compute resources.
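The distinction the abstract draws can be illustrated with a toy example. The following sketch (my own illustration, not the authors' setup) uses a minimal single-layer recurrent cell as a stand-in for the LSTM/GRU/SNN cells studied in the paper. Windowed "unrolled" inference resets the hidden state at every window boundary, as during training; continuous stateful inference carries the state across the whole stream. The two agree inside the first window but diverge afterwards, which is exactly the accuracy shift the paper investigates.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, INPUT = 4, 3

# Toy recurrent cell: h_t = tanh(W x_t + U h_{t-1}).
# A stand-in for the LSTM-/GRU-/SNN-cells discussed in the paper.
W = rng.normal(scale=0.5, size=(HIDDEN, INPUT))   # input -> hidden
U = rng.normal(scale=0.5, size=(HIDDEN, HIDDEN))  # hidden -> hidden

def step(x, h):
    return np.tanh(W @ x + U @ h)

def unrolled_inference(xs, window):
    """Training-style unrolling: reset the state at every window boundary."""
    outs = []
    for start in range(0, len(xs), window):
        h = np.zeros(HIDDEN)                      # state reset
        for x in xs[start:start + window]:
            h = step(x, h)
            outs.append(h.copy())
    return np.array(outs)

def continuous_inference(xs):
    """Stateful inference: carry the hidden state across the whole stream."""
    h = np.zeros(HIDDEN)
    outs = []
    for x in xs:
        h = step(x, h)
        outs.append(h.copy())
    return np.array(outs)

xs = rng.normal(size=(16, INPUT))
a = unrolled_inference(xs, window=8)
b = continuous_inference(xs)

# Identical within the first window; they diverge once the unrolled
# variant resets its state while the stateful variant keeps it.
print(np.allclose(a[:8], b[:8]))
print(np.allclose(a[8:], b[8:]))
```

The gap between the two output sequences after the first reset is the "error created by using a time continuous inference" that the paper measures; good time-domain generalization during training keeps this gap small.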