{"title":"Making Time Series Embeddings More Interpretable in Deep Learning - Extracting Higher-Level Features via Symbolic Approximation Representations","authors":"Leonid Schwenke, Martin Atzmueller","doi":"10.32473/flairs.36.133107","DOIUrl":null,"url":null,"abstract":"With the success of language models in deep learning, multiple new time series embeddings have been proposed. However, the interpretability of those representations is often still lacking compared to word embeddings. This paper tackles this issue, aiming to present some criteria for making time series embeddings applied in deep learning models more interpretable using higher-level features in symbolic form. For that, we investigate two different approaches for extracting symbolic approximation representations regarding the frequency and the trend information, i.e. the Symbolic Fourier Approximation (SFA) and the Symbolic Aggregate approXimation (SAX). In particular, we analyze and discuss the impact of applying the different representation approaches. Furthermore, in our experimentation, we apply a state-of-the-art Transformer model to demonstrate the efficacy of the proposed approach regarding explainability in a comprehensive evaluation using a large set of time series datasets.","PeriodicalId":302103,"journal":{"name":"The International FLAIRS Conference Proceedings","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International FLAIRS Conference Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32473/flairs.36.133107","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the success of language models in deep learning, multiple new time series embeddings have been proposed. However, the interpretability of these representations often still lags behind that of word embeddings. This paper tackles this issue by presenting criteria for making time series embeddings used in deep learning models more interpretable via higher-level features in symbolic form. To that end, we investigate two approaches for extracting symbolic approximation representations that capture frequency and trend information, respectively: the Symbolic Fourier Approximation (SFA) and the Symbolic Aggregate approXimation (SAX). In particular, we analyze and discuss the impact of applying these different representation approaches. Furthermore, in our experiments we apply a state-of-the-art Transformer model to demonstrate the efficacy of the proposed approach with respect to explainability, in a comprehensive evaluation on a large set of time series datasets.
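To make the symbolic-representation idea concrete, below is a minimal, illustrative Python sketch of SAX, not the authors' implementation: the series is z-normalized, reduced via Piecewise Aggregate Approximation (PAA), and each segment mean is mapped to a letter using breakpoints that split the standard normal distribution into equiprobable regions. The function name `sax_transform` and the parameters `n_segments` and `alphabet_size` are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def sax_transform(series, n_segments=8, alphabet_size=4):
    """Minimal SAX sketch: z-normalize, reduce with PAA, then map
    each segment mean to a symbol via Gaussian breakpoints."""
    x = np.asarray(series, dtype=float)
    # z-normalize so the Gaussian breakpoints below are applicable
    x = (x - x.mean()) / (x.std() + 1e-8)
    # Piecewise Aggregate Approximation: mean value of each segment
    segments = np.array_split(x, n_segments)
    paa = np.array([seg.mean() for seg in segments])
    # breakpoints splitting N(0,1) into alphabet_size equiprobable bins
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    # assign each PAA value to an alphabet symbol ('a', 'b', ...)
    indices = np.searchsorted(breakpoints, paa)
    return "".join(chr(ord("a") + int(i)) for i in indices)

# Example: a noisy sine wave becomes a short symbolic "word"
t = np.linspace(0, 2 * np.pi, 128)
print(sax_transform(np.sin(t) + 0.1 * np.random.randn(128)))
```

SFA follows the same discretize-to-symbols pattern but operates on the first Fourier coefficients of the series (capturing frequency content) and learns its bin boundaries from a training set via Multiple Coefficient Binning, so it is not sketched here.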