Object Annotation Using Cost-Effective Active Learning

Nuh Hatipoglu, Esra Çinar, H. K. Ekenel

2021 6th International Conference on Computer Science and Engineering (UBMK), 2021-09-15. DOI: 10.1109/UBMK52708.2021.9559028
Deep learning models require large amounts of training data to reach high accuracy. However, labeling large volumes of training data is a labor-intensive and time-consuming process. Active learning is an approach that seeks to maximize a model's performance with the least possible amount of labeled data. It is therefore of great practical importance to develop a framework that combines deep learning and active learning, transferring features learned from a small amount of labeled training data to classifiers. In this study, we combine active learning with deep learning models, which allows the deep learning models to be fine-tuned with a small amount of training data. We use images of shelf products belonging to the same product group, spanning 13 classes, and examine them using different deep learning classifier models. Experimental results show that annotating and training on only a part of the data achieves higher performance than annotating and training on the entire dataset. In this way, we save on annotation costs while obtaining an improved object classification system.
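The abstract does not specify which query strategy drives the annotation loop, so the following is only an illustrative sketch of one common choice, least-confidence sampling, using a logistic-regression classifier on synthetic data in place of the shelf-product images: starting from a small labeled seed set, each round queries the samples the current model is least confident about, simulates annotating them, and retrains.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for the product images
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(int)

# Small initial labeled pool with both classes represented
labeled = [int(i) for i in np.where(y == 0)[0][:5]]
labeled += [int(i) for i in np.where(y == 1)[0][:5]]
unlabeled = [i for i in range(500) if i not in labeled]

clf = LogisticRegression()
for _ in range(5):  # five annotation rounds
    clf.fit(X[labeled], y[labeled])
    # Least-confidence sampling: rank unlabeled samples by the model's
    # top-class probability and pick the 20 it is least sure about.
    probs = clf.predict_proba(X[unlabeled])
    confidence = probs.max(axis=1)
    query = np.argsort(confidence)[:20]
    newly_labeled = [unlabeled[i] for i in query]
    labeled += newly_labeled  # simulate sending these to the annotator
    unlabeled = [i for i in unlabeled if i not in newly_labeled]

accuracy = clf.score(X, y)
```

Only 110 of the 500 samples end up annotated, mirroring the paper's finding that labeling a well-chosen subset can suffice; the specific classifier, batch size, and round count here are arbitrary placeholders, not values from the paper.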