Feature space separation by conformity loss driven training of CNN
N. Ding, H. Arabian, K. Möller
IFAC Journal of Systems and Control, Volume 28, Article 100260 (published 2024-04-13)
DOI: 10.1016/j.ifacsc.2024.100260
https://www.sciencedirect.com/science/article/pii/S246860182400021X
Convolutional neural networks (CNNs) have enabled tremendous achievements in image classification, as the model can automatically extract image features and assign a proper class. Nevertheless, the classification lacks robustness to input perturbations that are invisible to humans. To improve the robustness of a CNN model, it is necessary to understand its decision-making procedure. By inspecting the learned feature space, we found that the classification regions are not always clearly separated by the CNN model. Overlapping classification regions increase the chance that even small, perturbation-induced input changes alter the classification result. Therefore, a clear separation of the feature spaces of the CNN model should support decision robustness. In this paper, we propose a novel loss function called "conformity loss" to strengthen the separation of feature spaces during learning at different layers of the CNN, in order to improve the intra-class compactness and inter-class differences of the trained representations. The same function was used as an evaluation metric to measure feature space separation during testing. In conclusion, the model trained with the conformity loss showed better feature space separation at comparable output performance.
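The abstract does not give the mathematical form of the conformity loss. As a rough, purely illustrative sketch of the underlying idea — rewarding intra-class compactness while penalizing classes whose centroids sit too close together — a toy separation measure over extracted feature vectors could look like the following. The function names, the `margin` parameter, and the specific formula are assumptions for illustration, not the paper's definition.

```python
# Toy illustration (NOT the paper's conformity loss): score a set of feature
# vectors by intra-class spread plus a hinge penalty on small inter-class
# centroid distances. Lower is "better separated".
from math import dist  # Euclidean distance (Python 3.8+)

def class_centroids(features, labels):
    """Mean feature vector per class label."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        sums[y] = [a + b for a, b in zip(acc, f)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def separation_loss(features, labels, margin=1.0):
    """Mean distance to own-class centroid, plus a hinge penalty when
    centroids of different classes are closer than `margin`."""
    cents = class_centroids(features, labels)
    intra = sum(dist(f, cents[y]) for f, y in zip(features, labels)) / len(features)
    keys = sorted(cents)
    inter = [dist(cents[a], cents[b])
             for i, a in enumerate(keys) for b in keys[i + 1:]]
    hinge = sum(max(0.0, margin - d) for d in inter) / max(len(inter), 1)
    return intra + hinge
```

In this spirit, two tight, well-separated clusters score lower than two overlapping ones, and the same quantity can double as a loss term during training and as an evaluation metric at test time, mirroring the dual use described in the abstract.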