{"title":"从很少的例子中学习来理解多媒体内容:无监督、半监督和监督深度学习方法的最新进展","authors":"Guo-Jun Qi","doi":"10.1145/3206025.3210498","DOIUrl":null,"url":null,"abstract":"In this tutorial, the speaker will present serval parallel efforts on building deep learning models with very few supervision information, with or without unsupervised data available. In particular, we will discuss in details. (1) Generative Adverbial Nets (GANs) and their applications to unsupervised feature extractions, semi-supervised learning with few labeled examples and a large amount of unlabeled data. We will discuss the state-of-the-art results that have been achieved by the semi-supervised GANs. (2) Low-Shot Learning algorithms to train and test models on disjoint sets of tasks. We will discuss the ideas of how to efficiently adapt models to tasks with very few examples. In particular, we will discuss several paradigms of learning-to-learn approaches. (3) We will also discuss how to transfer models across modalities by leveraging abundant labels from one modality to train a model for other modalities with few labels. We will discuss in details the cross-modal label transfer approach.","PeriodicalId":224132,"journal":{"name":"Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval","volume":"69 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimedia Content Understanding by Learning from Very Few Examples: Recent Progress on Unsupervised, Semi-Supervised and Supervised Deep Learning Approaches\",\"authors\":\"Guo-Jun Qi\",\"doi\":\"10.1145/3206025.3210498\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this tutorial, the speaker will present serval parallel efforts on building deep learning models with very few supervision information, with or without unsupervised data available. In particular, we will discuss in details. (1) Generative Adverbial Nets (GANs) and their applications to unsupervised feature extractions, semi-supervised learning with few labeled examples and a large amount of unlabeled data. We will discuss the state-of-the-art results that have been achieved by the semi-supervised GANs. (2) Low-Shot Learning algorithms to train and test models on disjoint sets of tasks. We will discuss the ideas of how to efficiently adapt models to tasks with very few examples. In particular, we will discuss several paradigms of learning-to-learn approaches. (3) We will also discuss how to transfer models across modalities by leveraging abundant labels from one modality to train a model for other modalities with few labels. 
We will discuss in details the cross-modal label transfer approach.\",\"PeriodicalId\":224132,\"journal\":{\"name\":\"Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval\",\"volume\":\"69 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3206025.3210498\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3206025.3210498","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multimedia Content Understanding by Learning from Very Few Examples: Recent Progress on Unsupervised, Semi-Supervised and Supervised Deep Learning Approaches
In this tutorial, the speaker will present several parallel efforts on building deep learning models with very little supervision, with or without unlabeled data available. In particular, we will discuss in detail: (1) Generative Adversarial Nets (GANs) and their applications to unsupervised feature extraction and to semi-supervised learning from few labeled examples and a large amount of unlabeled data; we will review the state-of-the-art results achieved by semi-supervised GANs. (2) Low-shot learning algorithms that train and test models on disjoint sets of tasks; we will discuss how to efficiently adapt models to new tasks given very few examples, covering several paradigms of learning-to-learn approaches. (3) Cross-modal label transfer, i.e., how to transfer models across modalities by leveraging abundant labels in one modality to train a model for another modality with few labels; we will discuss this approach in detail.
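
To make topic (1) concrete, below is a minimal sketch of the common semi-supervised GAN discriminator objective, in which the discriminator predicts K real classes plus one "fake" class, so that unlabeled data contributes a real-vs-fake term while the few labeled examples contribute a standard classification term. This is an illustrative stand-in, not the tutorial's specific formulation; the network shapes and names are assumptions.

```python
# Sketch of a semi-supervised GAN discriminator loss (K real classes + 1 fake class).
# Illustrative only; architecture and dimensions are assumed for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10  # number of real classes; index K is reserved for the "fake" class

class Discriminator(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=K):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes + 1),  # logits over K real classes + fake
        )

    def forward(self, x):
        return self.net(x)

def semi_supervised_d_loss(D, x_labeled, y_labeled, x_unlabeled, x_fake):
    """Discriminator loss combining a supervised and an unsupervised term."""
    # Supervised term: cross-entropy on the few labeled examples.
    loss_sup = F.cross_entropy(D(x_labeled), y_labeled)

    # Unsupervised term: unlabeled data should be assigned to some real class,
    # generated data to the fake class (computed via logsumexp for stability).
    logits_unl, logits_fake = D(x_unlabeled), D(x_fake)
    log_real_unl = torch.logsumexp(logits_unl[:, :K], dim=1) - torch.logsumexp(logits_unl, dim=1)
    log_fake = logits_fake[:, K] - torch.logsumexp(logits_fake, dim=1)
    loss_unsup = -(log_real_unl.mean() + log_fake.mean())

    return loss_sup + loss_unsup
```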
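For topic (2), one widely used episodic (learning-to-learn) paradigm classifies query examples by their distance to class prototypes computed from a handful of support examples, so that each training episode mimics a new few-shot task. The sketch below follows this prototypical-network style as an illustration of low-shot training on disjoint tasks; it is not the tutorial's specific algorithm, and all function and variable names are assumptions.

```python
# Sketch of one few-shot training episode in a prototypical-network style.
import torch
import torch.nn.functional as F

def prototypical_loss(embed, support_x, support_y, query_x, query_y, n_way):
    """Classify query examples by distance to per-class prototype embeddings.

    embed: an embedding network mapping inputs to feature vectors.
    support_x, support_y: the few labeled "shots" for each of the n_way classes.
    query_x, query_y: held-out examples drawn from the same n_way classes.
    """
    z_support = embed(support_x)   # [n_support, d]
    z_query = embed(query_x)       # [n_query, d]

    # Prototype = mean embedding of each class's support examples.
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_way)
    ])                             # [n_way, d]

    # Negative squared Euclidean distance serves as the class logits.
    logits = -torch.cdist(z_query, prototypes) ** 2   # [n_query, n_way]
    return F.cross_entropy(logits, query_y)
```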
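For topic (3), one generic way to realize cross-modal label transfer is to let a teacher model trained on the label-rich modality supervise a student model for the label-poor modality through paired but unlabeled observations of the same instances. The distillation-style sketch below is purely illustrative of this idea and is not the specific cross-modal label transfer approach covered in the tutorial; the function signature and temperature value are assumptions.

```python
# Sketch of cross-modal label transfer via soft-label distillation across paired data.
import torch
import torch.nn.functional as F

def label_transfer_loss(teacher, student, x_source, x_target, temperature=2.0):
    """Distill the source-modality teacher's soft labels into the target-modality student.

    x_source, x_target: paired observations of the same instances in the two modalities.
    """
    with torch.no_grad():
        t_logits = teacher(x_source)          # teacher sees the label-rich modality
    t_prob = F.softmax(t_logits / temperature, dim=1)
    s_logp = F.log_softmax(student(x_target) / temperature, dim=1)
    # KL divergence between teacher and student class distributions.
    return F.kl_div(s_logp, t_prob, reduction="batchmean") * temperature ** 2
```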