Maximum Mean Discrepancy Adversarial Active Learning

Mingzhi Cai, Baoguo Wei, Yuechen Zhang, Xu Li, Lixin Li
2022 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC)
DOI: 10.1109/ICSPCC55723.2022.9984505
Published: 2022-10-25
Citations: 0

Abstract

The aim of active learning is to reduce sampling costs. However, uncertainty approaches based on the probabilities given by a neural network model are not reliable, and they are prone to overconfident predictions on outlier samples. In this paper, we propose a maximum mean discrepancy (MMD) adversarial learning-based active learning strategy. Our approach uses the structural information of unlabeled samples during training to estimate their relationship to the structure of the labeled samples, in order to distinguish unlabeled from labeled samples. In addition, we introduce IsoMax into active learning to make it more sensitive to outliers and to alleviate overconfidence on outliers at the start of active learning sampling. The query strategy combines uncertainty and source-domain discrepancy criteria. The approach is evaluated on three image classification datasets: CIFAR10, SVHN, and MNIST. The results demonstrate our approach's superiority over the compared techniques.
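The abstract does not give the paper's exact formulation, but the maximum mean discrepancy it is built on is a standard kernel two-sample statistic between labeled and unlabeled feature sets. The sketch below is a generic NumPy illustration of a (biased) squared-MMD estimate with an RBF kernel; the kernel choice, `gamma` value, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel values k(x, y) = exp(-gamma * ||x - y||^2)
    # for rows of X (n, d) and rows of Y (m, d).
    sq_dists = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

def mmd2(X, Y, gamma=1.0):
    # Biased estimator of squared MMD: large when X and Y come from
    # different distributions, near zero when they match.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())
```

In an active learning loop of this kind, `X` would hold features of labeled samples and `Y` features of unlabeled candidates, so that a larger discrepancy flags unlabeled samples whose structure is least covered by the current labeled pool.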
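The IsoMax idea referenced in the abstract replaces ordinary softmax logits with distance-based logits to learnable class prototypes, so that points far from every prototype (likely outliers) receive low-confidence, near-uniform predictions. A minimal NumPy sketch, assuming toy prototypes and an illustrative entropic scale (the paper's actual parameters are not given here):

```python
import numpy as np

def isomax_logits(features, prototypes, entropic_scale=10.0):
    # Distance-based logits: -Es * ||f(x) - p_j|| for each class prototype p_j.
    # features: (n, d), prototypes: (k, d) -> logits: (n, k).
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    return -entropic_scale * dists

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

A sample close to one prototype yields a confident prediction, while a sample far from all prototypes is roughly equidistant from each and so gets near-uniform probabilities, which is what makes the uncertainty criterion more sensitive to outliers.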