{"title":"A weakly-supervised deep domain adaptation method for multi-modal sensor data","authors":"R. Mihailescu","doi":"10.1109/gcaiot53516.2021.9693050","DOIUrl":null,"url":null,"abstract":"Nearly every real-world deployment of machine learning models suffers from some form of shift in data distributions in relation to the data encountered in production. This aspect is particularly pronounced when dealing with streaming data or in dynamic settings (e.g. changes in data sources, behaviour and the environment). As a result, the performance of the models degrades during deployment. In order to account for these contextual changes, domain adaptation techniques have been designed for scenarios where the aim is to learn a model from a source data distribution, which can perform well on a different, but related target data distribution.In this paper we introduce a variational autoencoder-based multi-modal approach for the task of domain adaptation, that can be trained on a large amount of labelled data from the source domain, coupled with a comparably small amount of labelled data from the target domain. We demonstrate our approach in the context of human activity recognition using various IoT sensing modalities and report superior results when benchmarking against the effective mSDA method for domain adaptation.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"214 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/gcaiot53516.2021.9693050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Nearly every real-world deployment of machine learning models suffers from some form of shift between the data distribution seen during training and the data encountered in production. This is particularly pronounced when dealing with streaming data or in dynamic settings (e.g. changes in data sources, behaviour and the environment). As a result, model performance degrades during deployment. To account for these contextual changes, domain adaptation techniques have been designed for scenarios where the aim is to learn a model from a source data distribution that performs well on a different, but related, target data distribution. In this paper we introduce a variational autoencoder-based multi-modal approach to domain adaptation that can be trained on a large amount of labelled data from the source domain, coupled with a comparatively small amount of labelled data from the target domain. We demonstrate our approach in the context of human activity recognition using various IoT sensing modalities and report superior results when benchmarking against the effective mSDA method for domain adaptation.
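To make the general idea concrete, the sketch below shows one plausible way to combine a multi-modal variational autoencoder with a classification head and train it on labelled source data plus a small labelled target batch. This is an illustrative assumption, not the paper's actual architecture: the class names, layer sizes, loss weights (`beta`, `gamma`) and modality dimensions are all hypothetical.

```python
# Hypothetical sketch (not the paper's exact model): a multi-modal VAE with a
# shared latent space and a classifier head, trained jointly on labelled source
# data and a small labelled target set.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalVAEClassifier(nn.Module):
    def __init__(self, modality_dims, latent_dim=32, n_classes=6):
        super().__init__()
        # One encoder per sensing modality; features are concatenated before
        # being mapped to the parameters of the shared latent distribution.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 64), nn.ReLU()) for d in modality_dims]
        )
        fused = 64 * len(modality_dims)
        self.fc_mu = nn.Linear(fused, latent_dim)
        self.fc_logvar = nn.Linear(fused, latent_dim)
        # One decoder per modality reconstructs its input from the latent code.
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, d))
             for d in modality_dims]
        )
        # Activity classifier operating on the shared latent representation.
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, inputs):
        h = torch.cat([enc(x) for enc, x in zip(self.encoders, inputs)], dim=-1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        recons = [dec(z) for dec in self.decoders]
        return recons, mu, logvar, self.classifier(z)

def loss_fn(inputs, recons, mu, logvar, logits, labels, beta=1.0, gamma=1.0):
    # Reconstruction over all modalities + KL regulariser + supervised term.
    rec = sum(F.mse_loss(r, x) for r, x in zip(recons, inputs))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    cls = F.cross_entropy(logits, labels)
    return rec + beta * kl + gamma * cls

# Usage sketch: mix a large labelled source batch with a small labelled target
# batch so the shared latent space adapts to the target domain (dummy data).
model = MultiModalVAEClassifier(modality_dims=[30, 12], latent_dim=32, n_classes=6)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
src = [torch.randn(64, 30), torch.randn(64, 12)]; y_src = torch.randint(0, 6, (64,))
tgt = [torch.randn(8, 30), torch.randn(8, 12)]; y_tgt = torch.randint(0, 6, (8,))
inputs = [torch.cat([s, t]) for s, t in zip(src, tgt)]
labels = torch.cat([y_src, y_tgt])
recons, mu, logvar, logits = model(inputs)
loss_fn(inputs, recons, mu, logvar, logits, labels).backward()
optimiser.step()
```

The design choice illustrated here is that the small labelled target batch only contributes to the supervised and reconstruction terms alongside the source data; how the paper actually weights or samples the two domains is not specified in the abstract.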