{"title":"基于风格迁移的图像数据增强方法","authors":"Yanyan Wei, Chuwei Li, Hangyu Li, Zhilong Zhang","doi":"10.1145/3549179.3549180","DOIUrl":null,"url":null,"abstract":"Because there aren't enough accessible images of military vehicles, overfitting is a common occurrence when using a detection model in the military sector. Besides, low-contrast military vehicles are more difficult to be spotted in the field. Therefore, we create a dataset of military vehicles that consists of a training set and two different test sets, and we suggest an efficient method for image data augmentation that is mostly based on style transfer. Specifically, the process of data augmentation contains targets mask generation, style transfer, and details addition, and doesn't need extra annotation work. In the experimental part, YOLO v5s is applied to verify the efficacy of our method. Our method enables us to improve the precisions by 0.101 and 0.134 in the high-contrast situation, and achieve the precisions of 0.729 and 0.515 in the low-contrast situation when using single-style stylized images dataset and multi-style stylized images dataset respectively, in experiments. The results suggest that our method can reduce overfitting and show a rather satisfactory performance on our self-made dataset.","PeriodicalId":105724,"journal":{"name":"Proceedings of the 2022 International Conference on Pattern Recognition and Intelligent Systems","volume":"79 2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Image Data Augmentation Method based on Style Transfer\",\"authors\":\"Yanyan Wei, Chuwei Li, Hangyu Li, Zhilong Zhang\",\"doi\":\"10.1145/3549179.3549180\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Because there aren't enough accessible images of military vehicles, overfitting is a common occurrence when using a detection model in the military sector. Besides, low-contrast military vehicles are more difficult to be spotted in the field. Therefore, we create a dataset of military vehicles that consists of a training set and two different test sets, and we suggest an efficient method for image data augmentation that is mostly based on style transfer. Specifically, the process of data augmentation contains targets mask generation, style transfer, and details addition, and doesn't need extra annotation work. In the experimental part, YOLO v5s is applied to verify the efficacy of our method. Our method enables us to improve the precisions by 0.101 and 0.134 in the high-contrast situation, and achieve the precisions of 0.729 and 0.515 in the low-contrast situation when using single-style stylized images dataset and multi-style stylized images dataset respectively, in experiments. 
The results suggest that our method can reduce overfitting and show a rather satisfactory performance on our self-made dataset.\",\"PeriodicalId\":105724,\"journal\":{\"name\":\"Proceedings of the 2022 International Conference on Pattern Recognition and Intelligent Systems\",\"volume\":\"79 2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 International Conference on Pattern Recognition and Intelligent Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3549179.3549180\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 International Conference on Pattern Recognition and Intelligent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3549179.3549180","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Image Data Augmentation Method based on Style Transfer
Because accessible images of military vehicles are scarce, detection models in the military domain are prone to overfitting. In addition, low-contrast military vehicles are harder to spot in the field. We therefore build a military-vehicle dataset consisting of a training set and two different test sets, and propose an efficient image data augmentation method based mainly on style transfer. The augmentation pipeline comprises target mask generation, style transfer, and detail addition, and requires no extra annotation work. In the experiments, YOLOv5s is used to verify the effectiveness of our method. With the single-style and multi-style stylized image datasets, precision improves by 0.101 and 0.134 respectively in the high-contrast setting, and reaches 0.729 and 0.515 respectively in the low-contrast setting. The results suggest that our method reduces overfitting and performs satisfactorily on our self-made dataset.
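To make the three-step pipeline in the abstract (target mask generation, style transfer, detail addition) concrete, the following is a minimal sketch of what such an augmentation step might look like. It is not the authors' implementation: the function names (boxes_to_mask, augment_with_style), the box-derived binary mask, and the pluggable stylize callable are illustrative assumptions; any off-the-shelf style-transfer model (e.g. AdaIN or fast neural style) could fill the stylize role.

```python
# Sketch of a style-transfer-based augmentation step: stylize the whole
# image, then paste the original target pixels back so object details
# survive. Names and the box-based mask are assumptions for illustration.
import numpy as np
from PIL import Image

def boxes_to_mask(size, boxes):
    """Build a binary target mask from integer (x1, y1, x2, y2) boxes.

    Reuses the existing detection labels, so no extra annotation work.
    """
    w, h = size
    mask = np.zeros((h, w), dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = 1
    return mask

def augment_with_style(image, boxes, stylize):
    """Return a stylized copy of `image` with target regions restored.

    `stylize` is any style-transfer model wrapped as a
    PIL.Image -> PIL.Image callable.
    """
    mask = boxes_to_mask(image.size, boxes)
    styled = np.asarray(stylize(image).resize(image.size), dtype=np.uint8)
    original = np.asarray(image, dtype=np.uint8)
    m = mask[..., None]                      # broadcast over RGB channels
    out = styled * (1 - m) + original * m    # detail addition inside boxes
    return Image.fromarray(out.astype(np.uint8))

# Usage (hypothetical file and box): the stylized copy keeps the same
# boxes, so existing labels carry over to the augmented image unchanged.
# aug = augment_with_style(Image.open("vehicle.jpg").convert("RGB"),
#                          [(120, 80, 340, 260)], stylize)
```

Because the original annotations remain valid for the composited output, each source image can be multiplied into several stylized variants (single-style or multi-style) without any additional labelling effort, which is the property the abstract attributes to the method.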