Interpretable Latent Space for Meteorological Out-of-Distribution Detection via Weak Supervision
Suman Das, Michael Yuhas, Rachel Koh, A. Easwaran
ACM Transactions on Cyber-Physical Systems
DOI: 10.1145/3651224
Published: 2024-03-07
Citations: 0
Abstract
Deep neural networks (DNNs) are effective tools for learning-enabled cyber-physical systems (CPSs) that handle high-dimensional image data. However, DNNs may make incorrect decisions when presented with inputs outside the distribution of their training data, and such inputs can compromise the safety of CPSs. It is therefore crucial to detect inputs as out-of-distribution (OOD) and to interpret the reasons for their classification as OOD. In this study, we propose an interpretable learning method to detect OOD inputs caused by meteorological features such as darkness, lightness, and rain. To achieve this, we employ a variational autoencoder (VAE) to map high-dimensional image data to a lower-dimensional latent space. We then focus on a specific latent dimension and encourage it to classify different intensities of a particular meteorological feature in a monotonically increasing manner. This is accomplished by incorporating two additional terms into the VAE's loss function: a classification loss and a positional loss. During training, we optimize the utilization of label information for classification. Remarkably, our results demonstrate that using only 25% of the training data labels is sufficient to train a single pre-selected latent dimension to classify different intensities of a specific meteorological feature. We evaluate the proposed method on two distinct datasets, CARLA and Duckietown, employing two different rain-generation methods. We show that our approach outperforms existing approaches by at least 15% in F1 score and precision when trained and tested on the CARLA dataset.
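The abstract does not give the exact form of the two added loss terms, but the idea of augmenting a VAE loss with a classification term and a positional term on one pre-selected latent dimension can be sketched as follows. This is a minimal illustration under assumed formulations: the positional loss pulls the chosen dimension toward evenly spaced per-intensity anchors (enforcing a monotonic ordering of intensities), and the classification loss is a softmax over negative squared distances to those anchors. The function name, anchor scheme, and weights are all hypothetical, not the paper's.

```python
import numpy as np

def combined_vae_loss(recon, x, mu, logvar, labels, dim=0,
                      anchors=None, w_cls=1.0, w_pos=1.0):
    """Hypothetical sketch: VAE loss plus a classification and a
    positional term on one latent dimension (not the paper's exact form).

    recon, x : (B, D) reconstruction and input
    mu, logvar : (B, Z) latent Gaussian parameters
    labels : (B,) integer intensity levels of the meteorological feature
    dim : index of the pre-selected latent dimension
    anchors : per-intensity target positions along that dimension
    """
    if anchors is None:
        # evenly spaced anchors so higher intensities map to larger values,
        # giving the monotonically increasing arrangement described above
        anchors = np.arange(labels.max() + 1, dtype=float)

    # standard VAE terms: reconstruction error and KL divergence
    rec = np.mean(np.sum((recon - x) ** 2, axis=1))
    kl = -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))

    z = mu[:, dim]
    # positional loss: pull the chosen dimension toward its class anchor
    pos = np.mean((z - anchors[labels]) ** 2)

    # classification loss: cross-entropy of a softmax over negative
    # squared distances to the anchors
    logits = -(z[:, None] - anchors[None, :]) ** 2
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    cls = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

    return rec + kl + w_cls * cls + w_pos * pos
```

A batch whose chosen latent dimension is ordered consistently with the intensity labels incurs a lower combined loss than one where the ordering is scrambled, which is the training signal that shapes the interpretable dimension. Note that only the labeled fraction of the batch (25% in the paper's setting) would contribute to the classification and positional terms.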