{"title":"Mining Connections Between Domains through Latent Space Mapping","authors":"Yingjing Lu","doi":"10.1109/ICDMW.2018.00157","DOIUrl":null,"url":null,"abstract":"Exploring ways to connect data is crucial to building knowledge graphs to associate data from different domains together. Humans, for example, can learn to associate flour with bread because bread is made of flour so that they can recall information of flour given a piece of bread even though bread and flour have few common features. In data mining, this ability can be translated to the way to connect images, texts, audios from different classes or domains together. Most works so far assume shared feature representations between domains we want to connect together. Another limitation yet to be improved is that for each defined mapping scheme, we often have to train a new model end-to-end among all sample data, which is often expensive. In this work, we present a model that aims to simultaneously address the two limitations. We use unconditionally trained Variational Autoencoders(VAEs) to project high dimensional data into the latent space and present a novel generative model that transfer latent representation of data from one domain to another by any custom schema. The model makes no assumption on any shared representation among different domains. The VAEs that encodes entire datasets, being the largest training overhead in this model, can be reused to support any new mapping schema without any retraining.","PeriodicalId":259600,"journal":{"name":"2018 IEEE International Conference on Data Mining Workshops (ICDMW)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Data Mining Workshops (ICDMW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDMW.2018.00157","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Exploring ways to connect data is crucial to building knowledge graphs that associate data from different domains. Humans, for example, can learn to associate flour with bread because bread is made of flour, so they can recall information about flour when given a piece of bread, even though bread and flour share few common features. In data mining, this ability translates to connecting images, text, and audio from different classes or domains. Most work so far assumes shared feature representations between the domains to be connected. Another limitation is that, for each defined mapping schema, a new model often has to be trained end-to-end on all sample data, which is expensive. In this work, we present a model that addresses both limitations simultaneously. We use unconditionally trained Variational Autoencoders (VAEs) to project high-dimensional data into a latent space and present a novel generative model that transfers latent representations of data from one domain to another under any custom schema. The model makes no assumption of shared representations among domains. The VAEs that encode the entire datasets, which constitute the largest training overhead in this model, can be reused to support any new mapping schema without retraining.
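The abstract describes the overall architecture only at a high level, so the following is a minimal PyTorch sketch of the idea as stated there, not the authors' implementation: one VAE is trained unconditionally per domain and then frozen, and a small, separately trained network maps latent codes from a source domain to a target domain according to a user-defined pairing schema (e.g. "flour" paired with "bread"). All class names, layer sizes, and the MSE objective below are illustrative assumptions.

```python
# Sketch (assumed, not from the paper): frozen per-domain VAEs + a trainable latent mapper.
import torch
import torch.nn as nn


class VAE(nn.Module):
    """Toy stand-in for a pretrained, unconditionally trained VAE."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim)
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def decode(self, z):
        return self.decoder(z)


class LatentMapper(nn.Module):
    """Small network that transfers a latent code from domain A to domain B."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )

    def forward(self, z_a):
        return self.net(z_a)


latent_dim = 32
# Assume vae_a and vae_b were already trained on their own domains; they are
# frozen here and can be reused for any new mapping schema without retraining.
vae_a, vae_b = VAE(784, latent_dim), VAE(784, latent_dim)
for p in list(vae_a.parameters()) + list(vae_b.parameters()):
    p.requires_grad = False

mapper = LatentMapper(latent_dim)
optimizer = torch.optim.Adam(mapper.parameters(), lr=1e-3)

# x_a, x_b stand in for sample pairs defined by the custom schema.
x_a, x_b = torch.randn(16, 784), torch.randn(16, 784)

optimizer.zero_grad()
mu_a, _ = vae_a.encode(x_a)                        # source-domain latent code
mu_b, _ = vae_b.encode(x_b)                        # target-domain latent code
loss = nn.functional.mse_loss(mapper(mu_a), mu_b)  # only the mapper is updated
loss.backward()
optimizer.step()

# Inference: encode in domain A, map the code, decode with domain B's VAE.
x_b_hat = vae_b.decode(mapper(vae_a.encode(x_a)[0]))
```

Under these assumptions, defining a new mapping schema only requires re-pairing samples and training the lightweight mapper; the expensive VAEs for each domain are trained once and reused, which is the reuse property the abstract emphasizes.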