{"title":"具有临时精确解的自监督学习:线性投影","authors":"Evrim Ozmermer, Qiang Li","doi":"10.1109/INDIN51400.2023.10217918","DOIUrl":null,"url":null,"abstract":"Self-supervised learning has emerged as a promising method for training neural networks without needing annotated data. In this paper, we present a self-supervised learning method for training, not limited to but especially visual transformers that are able to learn meaningful representations of images and videos without requiring large amounts of labeled data. Our method is based on using exact solutions of the representations that the model generates. It is shown that the model is able to learn useful features that can be later fine-tuned on industrial downstream tasks. We demonstrate the effectiveness of our method on a subset of the Universal Image Embeddings 130k dataset [1], a private industrial Pill Identification dataset, and standard Cifar-10 dataset [20]. We show that our method outperforms solid baselines which are BYOL [2] and Barlow Twins [3] while using fewer parameters and resources. We show the capability of the trained model on a Deep Metric Learning task by comparing the Swin Transformer [4] backbones that are trained with our method, BYOL [2], and Barlow Twins [3]. The results also show that the proposed method achieves higher accuracy than others in pre-training and fine-tuning processes with fewer parameters. 
GitHub: https://github.com/rootvisionai/solo-learn.","PeriodicalId":174443,"journal":{"name":"2023 IEEE 21st International Conference on Industrial Informatics (INDIN)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Self-supervised Learning with Temporary Exact Solutions: Linear Projection\",\"authors\":\"Evrim Ozmermer, Qiang Li\",\"doi\":\"10.1109/INDIN51400.2023.10217918\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Self-supervised learning has emerged as a promising method for training neural networks without needing annotated data. In this paper, we present a self-supervised learning method for training, not limited to but especially visual transformers that are able to learn meaningful representations of images and videos without requiring large amounts of labeled data. Our method is based on using exact solutions of the representations that the model generates. It is shown that the model is able to learn useful features that can be later fine-tuned on industrial downstream tasks. We demonstrate the effectiveness of our method on a subset of the Universal Image Embeddings 130k dataset [1], a private industrial Pill Identification dataset, and standard Cifar-10 dataset [20]. We show that our method outperforms solid baselines which are BYOL [2] and Barlow Twins [3] while using fewer parameters and resources. We show the capability of the trained model on a Deep Metric Learning task by comparing the Swin Transformer [4] backbones that are trained with our method, BYOL [2], and Barlow Twins [3]. The results also show that the proposed method achieves higher accuracy than others in pre-training and fine-tuning processes with fewer parameters. 
GitHub: https://github.com/rootvisionai/solo-learn.\",\"PeriodicalId\":174443,\"journal\":{\"name\":\"2023 IEEE 21st International Conference on Industrial Informatics (INDIN)\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 21st International Conference on Industrial Informatics (INDIN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INDIN51400.2023.10217918\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 21st International Conference on Industrial Informatics (INDIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INDIN51400.2023.10217918","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Self-supervised Learning with Temporary Exact Solutions: Linear Projection
Self-supervised learning has emerged as a promising method for training neural networks without annotated data. In this paper, we present a self-supervised training method, applicable to many architectures but aimed especially at vision transformers, that learns meaningful representations of images and videos without requiring large amounts of labeled data. Our method is based on using exact solutions for a linear projection of the representations that the model generates. We show that the model learns useful features that can later be fine-tuned on industrial downstream tasks. We demonstrate the effectiveness of our method on a subset of the Universal Image Embeddings 130k dataset [1], a private industrial Pill Identification dataset, and the standard CIFAR-10 dataset [20]. Our method outperforms the strong baselines BYOL [2] and Barlow Twins [3] while using fewer parameters and resources. We further evaluate the trained models on a deep metric learning task by comparing Swin Transformer [4] backbones trained with our method, BYOL [2], and Barlow Twins [3]. The results show that the proposed method achieves higher accuracy than the baselines in both pre-training and fine-tuning while using fewer parameters.
GitHub: https://github.com/rootvisionai/solo-learn