{"title":"通过学习相机之间的颜色映射参数来增强颜色数据","authors":"Chanachai Puttaruksa, Pinyo Taeprasartsit","doi":"10.1109/JCSSE.2018.8457355","DOIUrl":null,"url":null,"abstract":"In order to achieve a more accurate deep learning model, we need large amount of data. For imaging application, color data augmentation is usually required. Color jittering is a common current practice for such augmentation where color values in image are slightly adjusted. Unfortunately, color values between two cameras may be significantly different. This makes the current practice ineffective. This work proposes to map color values among cameras by using deep learning to learn color-mapping parameters. In this way, we can augment color data by converting an image from one camera to another image whose colors seemingly are taken from another camera. This allows a machine to learn a model that can deal with input images from multiple cameras without actually using training data from multiple cameras. These parameters can also be employed to calibrate colors in order that all cameras produce the same color tone. The proposed neural network architecture which employs fully connected layers and batch normalization outperforms an existing method and can be systematically performed for any camera pairs to extend its applications in other scenarios.","PeriodicalId":338973,"journal":{"name":"2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)","volume":"135 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Color Data Augmentation through Learning Color-Mapping Parameters between Cameras\",\"authors\":\"Chanachai Puttaruksa, Pinyo Taeprasartsit\",\"doi\":\"10.1109/JCSSE.2018.8457355\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In order to achieve a more accurate deep learning model, we need large amount of data. For imaging application, color data augmentation is usually required. Color jittering is a common current practice for such augmentation where color values in image are slightly adjusted. Unfortunately, color values between two cameras may be significantly different. This makes the current practice ineffective. This work proposes to map color values among cameras by using deep learning to learn color-mapping parameters. In this way, we can augment color data by converting an image from one camera to another image whose colors seemingly are taken from another camera. This allows a machine to learn a model that can deal with input images from multiple cameras without actually using training data from multiple cameras. These parameters can also be employed to calibrate colors in order that all cameras produce the same color tone. 
The proposed neural network architecture which employs fully connected layers and batch normalization outperforms an existing method and can be systematically performed for any camera pairs to extend its applications in other scenarios.\",\"PeriodicalId\":338973,\"journal\":{\"name\":\"2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)\",\"volume\":\"135 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/JCSSE.2018.8457355\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/JCSSE.2018.8457355","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In order to achieve a more accurate deep learning model, we need a large amount of data. For imaging applications, color data augmentation is usually required. Color jittering, in which the color values of an image are slightly adjusted, is a common current practice for such augmentation. Unfortunately, color values from two cameras may differ significantly, which makes the current practice ineffective. This work proposes to map color values among cameras by using deep learning to learn color-mapping parameters. In this way, we can augment color data by converting an image from one camera into another image whose colors appear to have been captured by a different camera. This allows a machine to learn a model that can handle input images from multiple cameras without actually using training data from multiple cameras. The learned parameters can also be employed to calibrate colors so that all cameras produce the same color tone. The proposed neural network architecture, which employs fully connected layers and batch normalization, outperforms an existing method and can be applied systematically to any camera pair, extending its use to other scenarios.
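The abstract gives only a high-level description of the architecture. The sketch below, written in PyTorch, illustrates one plausible reading of it: a small fully connected network with batch normalization that regresses camera-B RGB values from camera-A RGB values on corresponding pixels, and is then used to remap whole images for augmentation. The layer sizes, loss, optimizer, and placeholder data are assumptions for illustration, not the authors' exact design.

# Minimal sketch (assumed details, not the paper's exact architecture):
# learn a per-pixel color mapping from camera A to camera B with fully
# connected layers and batch normalization, then remap a camera-A image.
import torch
import torch.nn as nn

class ColorMapper(nn.Module):
    """Maps an RGB value from camera A to the corresponding RGB value of camera B."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),  # predicted RGB as camera B would record it
        )

    def forward(self, rgb_a: torch.Tensor) -> torch.Tensor:
        return self.net(rgb_a)

def train_step(model, optimizer, rgb_a, rgb_b):
    """One gradient step on a batch of corresponding pixel colors, shape (N, 3) in [0, 1]."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(rgb_a), rgb_b)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = ColorMapper()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Placeholder for paired pixels of the same scene captured by both cameras.
    rgb_a = torch.rand(256, 3)
    rgb_b = torch.rand(256, 3)
    for _ in range(10):
        train_step(model, optimizer, rgb_a, rgb_b)
    # Augmentation: remap every pixel of a camera-A image into camera B's color space.
    model.eval()
    image_a = torch.rand(480, 640, 3)
    with torch.no_grad():
        image_b_like = model(image_a.reshape(-1, 3)).reshape(480, 640, 3).clamp(0, 1)

In this reading, the network is trained once per camera pair on corresponding pixel values and then reused to synthesize additional training images, so the downstream model sees the color characteristics of several cameras without extra data collection.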