{"title":"ITF-VAE: Variational Auto-Encoder Using Interpretable Continuous Time Series Features","authors":"Hendrik Klopries;Andreas Schwung","doi":"10.1109/TAI.2025.3545396","DOIUrl":null,"url":null,"abstract":"Machine learning algorithms are driven by data. However, the quantity and quality of data in industries are limited due to multiple process constraints. Generating artificial data and performing a transfer learning task is a common solution to overcome these limitations. Recently, deep generative models have become one of the leading solutions for modeling a given source domain. The main hindrance to using those machine learning approaches is the lack of interpretability. Therefore, we present a novel variational autoencoder approach to generate time series data on a probabilistic latent feature representation and enhance interpretability within the generative model and the output trajectory. We sample selective and parameter values for certain continuous function candidates to assemble the synthetic time series. The sparse design of the generative model enables direct interpretability and matches an estimated posterior distribution of the detected components in the source domain. Through residual stacking, conditionality, and a mixture of prior distributions, we derive a stacked version of the evidence lower bound to learn our network. Tests on synthetic and real industrial datasets underline the performance and interpretability of our generative model. Depending on the model and function candidates, the user can define a trade-off between flexibility and interpretability. Overall, this work presents an innovative interpretable representation of the latent space and further developed evidence lower bound criterion driven by the designed architecture.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 8","pages":"2314-2326"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10902438/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Machine learning algorithms are driven by data. However, the quantity and quality of data available in industry are limited by multiple process constraints. Generating artificial data and performing a transfer learning task is a common way to overcome these limitations. Recently, deep generative models have become one of the leading solutions for modeling a given source domain. The main hindrance to using these machine learning approaches is their lack of interpretability. Therefore, we present a novel variational autoencoder approach that generates time series data from a probabilistic latent feature representation and enhances interpretability within both the generative model and the output trajectory. We sample selection and parameter values for certain continuous function candidates to assemble the synthetic time series. The sparse design of the generative model enables direct interpretability and matches an estimated posterior distribution of the components detected in the source domain. Through residual stacking, conditionality, and a mixture of prior distributions, we derive a stacked version of the evidence lower bound to train our network. Tests on synthetic and real industrial datasets underline the performance and interpretability of our generative model. Depending on the model and function candidates, the user can define a trade-off between flexibility and interpretability. Overall, this work presents an innovative interpretable representation of the latent space and a further-developed evidence lower bound criterion driven by the designed architecture.
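To make the assembly idea concrete, the sketch below illustrates how a synthetic trajectory might be composed by sampling a selection (which continuous function candidate) and parameter values for each component, as the abstract describes. This is a minimal illustration, not the authors' implementation: the candidate library (sine, exponential decay, linear), the parameter priors, and the fixed component count are all assumptions made for the example.

```python
# Illustrative sketch only: assembling a time series from sampled
# continuous function candidates. The candidate set and priors are
# hypothetical; the paper's actual architecture is a VAE trained with
# a stacked evidence lower bound, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)  # time grid

# Hypothetical library of continuous function candidates.
candidates = {
    "sine":      lambda t, a, b: a * np.sin(b * t),
    "exp_decay": lambda t, a, b: a * np.exp(-b * t),
    "linear":    lambda t, a, b: a * t + b,
}

def sample_component(selection_probs, param_mu, param_sigma):
    """Sample a selection (which candidate) and its parameter values,
    mirroring the 'selection and parameter values' sampling in the
    abstract. Here the selection is categorical and the parameters
    are Gaussian; the paper's actual distributions may differ."""
    name = rng.choice(list(candidates), p=selection_probs)
    a, b = rng.normal(param_mu, param_sigma, size=2)
    return name, (a, b)

# Assemble the synthetic trajectory as a sparse sum of sampled components.
trajectory = np.zeros_like(t)
parts = []
for _ in range(3):  # small, sparse number of components (assumed)
    name, (a, b) = sample_component(
        selection_probs=[0.5, 0.3, 0.2], param_mu=1.0, param_sigma=0.5)
    parts.append((name, a, b))
    trajectory += candidates[name](t, a, b)

# Each component is directly interpretable: a named continuous
# function with explicit parameter values.
for name, a, b in parts:
    print(f"{name}(t; a={a:.2f}, b={b:.2f})")
```

Because every sampled component is a named function with explicit parameters, the resulting trajectory can be read component by component, which is the interpretability property the abstract emphasizes.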