Maximum Mean Discrepancy Adversarial Active Learning
Mingzhi Cai, Baoguo Wei, Yuechen Zhang, Xu Li, Lixin Li
2022 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), 25 October 2022. DOI: 10.1109/ICSPCC55723.2022.9984505
The aim of active learning is to reduce sampling costs. However, uncertainty approaches based on the probabilities output by a neural network model are not reliable, as they are prone to overconfident predictions on outlier samples. In this paper, we propose an active learning strategy based on maximum mean discrepancy adversarial learning. Our approach uses the structural information of unlabeled samples during training to estimate their relationship to the structure of the labeled samples, in order to distinguish unlabeled from labeled samples. In addition, we introduce IsoMax into active learning to make it more sensitive to outliers and to alleviate overconfidence in outliers at the start of active learning sampling. The query strategy combines the criteria of uncertainty and source domain discrepancy. The approach is evaluated on three image classification datasets: CIFAR10, SVHN, and MNIST. The results demonstrate that our approach outperforms other techniques.
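The maximum mean discrepancy referred to above is a standard two-sample statistic. The sketch below is not taken from the paper; it only illustrates, under the assumption of a Gaussian kernel and NumPy feature arrays with hypothetical shapes and bandwidth, how a squared MMD between labeled and unlabeled feature pools can be estimated.

import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between two sets of feature vectors (sigma is an assumed bandwidth).
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between samples x and y.
    k_xx = gaussian_kernel(x, x, sigma)
    k_yy = gaussian_kernel(y, y, sigma)
    k_xy = gaussian_kernel(x, y, sigma)
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()

# Illustrative example with random features standing in for labeled and unlabeled pools.
labeled_feats = np.random.randn(100, 64)
unlabeled_feats = np.random.randn(120, 64) + 0.5
print(mmd2(labeled_feats, unlabeled_feats))

In an active learning setting, a discrepancy score of this kind could be combined with a model-uncertainty score when ranking unlabeled candidates, in the spirit of the combined query criterion described in the abstract; the exact combination used by the authors is specified in the paper itself.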