Visual recognition incorporating features of self-supervised models for the use of unlabelled data
G. Díaz, O. Nicolis, B. Peralta
2021 IEEE International Conference on Automation/XXIV Congress of the Chilean Association of Automatic Control (ICA-ACCA), published 2021-03-22
DOI: 10.1109/ICAACCA51523.2021.9465233
Automatic visual object recognition has gained great popularity and, using deep learning techniques, is successfully applied in areas such as robotics, security, and commerce. Training deep-learning models requires an enormous amount of supervised data, which is expensive to obtain. An alternative is to use semi-supervised approaches such as co-training, where the views given by deep networks are differentiated using models that incorporate lateral information from each training object. In this paper, we describe and test a co-training model for deep networks that adds self-supervised network features as auxiliary inputs. The results show that the proposed model converges within a few dozen iterations, exceeding recent models by 2% in precision. Despite its simplicity, the model is competitive with more complex recent works. As future work, we plan to modify deep self-supervised networks to increase diversity in co-training learning.
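The abstract describes co-training in which two views of the data (one standing in for self-supervised features) alternately pseudo-label unlabeled examples for each other. The sketch below is a minimal, hypothetical illustration of that loop on synthetic data; the random projection used as the "self-supervised" view, the nearest-centroid classifiers, and all parameter choices are assumptions for illustration, not the paper's actual networks or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data. View 0 = raw features; view 1 stands in for
# self-supervised features (here a fixed random projection -- an assumption
# for illustration only).
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 4)), rng.normal(2, 1, (n, 4))])
y = np.array([0] * n + [1] * n)
views = [X, X @ rng.normal(size=(4, 4))]

# Start with a small labeled subset; the rest are unlabeled (-1).
labeled = rng.choice(2 * n, 20, replace=False)
pseudo = np.full(2 * n, -1)
pseudo[labeled] = y[labeled]

def centroid_predict(V_known, labels_known, V_all):
    """Nearest-centroid classifier fit on the currently labeled subset.

    Returns predicted labels for all points plus a confidence score
    (the margin between the two centroid distances).
    """
    c0 = V_known[labels_known == 0].mean(axis=0)
    c1 = V_known[labels_known == 1].mean(axis=0)
    d0 = np.linalg.norm(V_all - c0, axis=1)
    d1 = np.linalg.norm(V_all - c1, axis=1)
    return (d1 < d0).astype(int), np.abs(d0 - d1)

# Co-training loop: each view labels its most confident unlabeled points,
# enlarging the labeled pool the other view trains on next.
for it in range(30):  # "a few dozen iterations", as in the abstract
    newly = 0
    for v in views:
        known = pseudo >= 0
        pred, conf = centroid_predict(v[known], pseudo[known], v)
        cand = np.where(~known)[0]
        if cand.size == 0:
            break
        top = cand[np.argsort(-conf[cand])[:5]]  # 5 most confident points
        pseudo[top] = pred[top]
        newly += top.size
    if newly == 0:
        break

acc = (pseudo[pseudo >= 0] == y[pseudo >= 0]).mean()
print(f"pseudo-label accuracy: {acc:.2f}")
```

On this well-separated toy problem the pseudo-labels end up highly accurate; the point of the sketch is only the alternating structure, in which the self-supervised view supplies a second, differentiated source of lateral information.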