Video color grading via deep neural networks
J. Gibbs
IADIS-International Journal on Computer Science and Information Systems, published 2018-12-17
DOI: 10.33965/IJCSIS_2018130201
Citations: 1
Abstract
The task of color grading (or color correction) for film and video is significant and complex, involving aesthetic and technical decisions that require a trained operator and a good deal of time. To determine whether deep neural networks are capable of learning this complex aesthetic task, we compare two network frameworks, a classification network and a conditional generative adversarial network (cGAN), examining the quality and consistency of their output as potential automated solutions to color correction. Results are very good for both networks, though each exhibits problem areas. The classification network struggles to generalize because all of its training data must be collected and, more importantly, labeled. The cGAN, on the other hand, can use unlabeled data, which is much easier to collect. While the classification network does not directly alter images, only identifying image problems, the cGAN creates a new image, introducing potential image degradation in the process; thus multiple adjustments to the network are needed to produce high-quality output. We find that the data labeling issue for the classification network is a less tractable problem than the image correction and continuity issues discovered with the cGAN method, which have direct solutions. Thus we conclude that the cGAN is the more promising network with which to automate color correction and grading.
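The abstract does not detail the corrections the networks learn, but a helpful frame of reference is the kind of global adjustment a colorist routinely applies: removing a color cast. As a minimal, hedged illustration (not the authors' method, and no neural network involved), the classic gray-world white balance below shows what even the simplest automated color correction looks like: estimate a per-channel gain and rescale the image so all channels share a common mean.

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: scale each RGB channel so its mean
    matches the overall mean, removing a global color cast.

    img: float array of shape (H, W, 3), values in [0, 1].
    Illustrative only -- the paper's networks learn far richer,
    content-dependent corrections than this global gain.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean of R, G, B
    gray = channel_means.mean()                       # target neutral level
    gains = gray / np.maximum(channel_means, 1e-8)    # per-channel gain
    return np.clip(img * gains, 0.0, 1.0)

# Synthetic frame with a strong blue cast (hypothetical test data).
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3)) * np.array([0.4, 0.5, 0.9])
balanced = gray_world_balance(img)
```

A learned system replaces the fixed gray-world assumption with corrections conditioned on image content, which is precisely where the classification network (diagnose the problem) and the cGAN (regenerate the corrected frame) differ.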