{"title":"Self-attentive mechanism-based supervised comparative learning","authors":"Chaoxiang Si","doi":"10.1109/ISAIEE57420.2022.00048","DOIUrl":null,"url":null,"abstract":"To address the intra-class diversity and inter-class similarity issues in traditional contrast learning, this paper proposes a supervised contrast learning based on a self-attentive mechanism that can effectively increase the feature extraction ability. The proposed method consists of two stages: feature encoder pre-training and linear classifier fine-tuning. In the feature encoder pre-training phase, the supervised contrast loss exploits the labeling information of the data to minimize the distance between similar images in the embedding space and maximize features of different categories as far away as possible, enhancing the effect of contrast learning. Beyond that, the self-attentive mechanism-based block is introduced in the encoder module to explicitly build the interdependence between the convolutional feature channels and further improve the feature learning capability of the model. In the linear classifier fine-tuning stage, parameters of pre-trained encoder are fixed and only the classifier is fine tuned for the downstream classification task. Experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate the superior of our proposed method.","PeriodicalId":345703,"journal":{"name":"2022 International Symposium on Advances in Informatics, Electronics and Education (ISAIEE)","volume":"39 6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Symposium on Advances in Informatics, Electronics and Education (ISAIEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISAIEE57420.2022.00048","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
To address the intra-class diversity and inter-class similarity problems in traditional contrastive learning, this paper proposes a supervised contrastive learning method based on a self-attention mechanism that effectively strengthens feature extraction. The proposed method consists of two stages: feature encoder pre-training and linear classifier fine-tuning. In the feature encoder pre-training stage, the supervised contrastive loss exploits the label information of the data to minimize the distance between images of the same class in the embedding space while pushing features of different classes as far apart as possible, enhancing the effect of contrastive learning. In addition, a self-attention-based block is introduced into the encoder module to explicitly model the interdependence between convolutional feature channels and further improve the feature learning capability of the model. In the linear classifier fine-tuning stage, the parameters of the pre-trained encoder are frozen and only the classifier is fine-tuned for the downstream classification task. Experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate the superiority of the proposed method.
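
The following is a minimal sketch, not the authors' released code, of the two ideas the abstract describes: a supervised contrastive loss that uses label information to pull same-class embeddings together and push different-class embeddings apart, and a squeeze-and-excitation-style channel self-attention block that reweights convolutional feature channels. All names (`SEBlock`, `supervised_contrastive_loss`, the temperature and reduction values) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the two components described in the abstract.
# Assumed names and hyperparameters; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Channel self-attention: reweights convolutional feature channels
    using their global context (squeeze-and-excitation style)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W)
        w = x.mean(dim=(2, 3))            # squeeze: global average pooling -> (batch, channels)
        w = self.fc(w)                    # excitation: per-channel weights in (0, 1)
        return x * w[:, :, None, None]    # rescale each channel


def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss: embeddings sharing a label are treated as
    positives, all other samples in the batch as negatives.

    features: (batch, dim) embeddings from the encoder's projection head
    labels:   (batch,) integer class labels
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature                      # pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, -1e9)                         # exclude self-pairs
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask   # same-class pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / n_pos               # mean over positives
    return loss[pos_mask.any(dim=1)].mean()                        # anchors with >= 1 positive
```

In the second stage, the abstract indicates the pre-trained encoder would be frozen and only a linear classifier trained on top of it (for example with a standard cross-entropy loss) for the downstream classification task.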