{"title":"基于端到端网络的多照度估计","authors":"Shen Yan, Feiyue Peng, Hanlin Tan, Shiming Lai, Maojun Zhang","doi":"10.1109/ICIVC.2018.8492879","DOIUrl":null,"url":null,"abstract":"Most popular color constancy algorithms assume that the light source obeys a uniform distribution across the scene. However, in the real world, the illuminations can vary a lot according to their spatial distribution. To overcome this problem, in this paper, we adopt a method based on a full end-to-end deep neural model to directly learn a mapping from the original image to the corresponding well-colored image. With this formulation, the network is able to determine pixel-wise illumination and produce a final visually compelling image. The training and evaluation of the network were performed on a standard dataset of two-dominant-illuminants. In this dataset, this approach achieves state-of-the-art performance. Besides, the main architecture of the network simply consists of a stack of fully convolutional blocks which can take the input of arbitrary size and produce correspondingly-sized output with effective learning. The experimental result shows that our customized loss function can help to reach a better performance than simply using MSE.","PeriodicalId":173981,"journal":{"name":"2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Multiple Illumination Estimation with End-to-End Network\",\"authors\":\"Shen Yan, Feiyue Peng, Hanlin Tan, Shiming Lai, Maojun Zhang\",\"doi\":\"10.1109/ICIVC.2018.8492879\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most popular color constancy algorithms assume that the light source obeys a uniform distribution across the scene. 
However, in the real world, the illuminations can vary a lot according to their spatial distribution. To overcome this problem, in this paper, we adopt a method based on a full end-to-end deep neural model to directly learn a mapping from the original image to the corresponding well-colored image. With this formulation, the network is able to determine pixel-wise illumination and produce a final visually compelling image. The training and evaluation of the network were performed on a standard dataset of two-dominant-illuminants. In this dataset, this approach achieves state-of-the-art performance. Besides, the main architecture of the network simply consists of a stack of fully convolutional blocks which can take the input of arbitrary size and produce correspondingly-sized output with effective learning. The experimental result shows that our customized loss function can help to reach a better performance than simply using MSE.\",\"PeriodicalId\":173981,\"journal\":{\"name\":\"2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)\",\"volume\":\"73 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIVC.2018.8492879\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 3rd International Conference on Image, Vision and Computing 
(ICIVC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIVC.2018.8492879","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multiple Illumination Estimation with End-to-End Network
Most popular color constancy algorithms assume that the light source is uniformly distributed across the scene. In the real world, however, illumination can vary considerably in its spatial distribution. To overcome this problem, we adopt a fully end-to-end deep neural model that directly learns a mapping from the original image to the corresponding color-corrected image. With this formulation, the network can estimate illumination pixel-wise and produce a visually compelling final image. The network was trained and evaluated on a standard two-dominant-illuminant dataset, on which our approach achieves state-of-the-art performance. Moreover, the main architecture consists simply of a stack of fully convolutional blocks, which accept input of arbitrary size and produce correspondingly sized output while learning effectively. Experimental results show that our customized loss function yields better performance than plain MSE.
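To make the setting concrete, the sketch below illustrates two ideas behind the abstract, under stated assumptions. First, the pixel-wise correction the network learns implicitly can be viewed as a von Kries-style per-pixel scaling by an estimated illuminant map (here the map is simply given, not predicted). Second, the abstract contrasts a customized loss with MSE but does not specify it; angular error is the standard color-constancy metric and is used here purely as an illustrative stand-in, not as the paper's actual loss. The function names are hypothetical.

```python
import math

def correct_pixelwise(image, illuminant_map):
    """Divide each pixel's RGB by its estimated illuminant color.

    image, illuminant_map: H x W lists of (r, g, b) tuples. The
    illuminant colors are assumed non-zero in every channel. After
    correction, a white surface maps to equal channel values.
    """
    corrected = []
    for img_row, ill_row in zip(image, illuminant_map):
        row = []
        for (r, g, b), (lr, lg, lb) in zip(img_row, ill_row):
            # Von Kries diagonal scaling, applied independently per pixel,
            # so two regions lit by different illuminants are each corrected.
            row.append((r / lr, g / lg, b / lb))
        corrected.append(row)
    return corrected

def angular_error_deg(est, gt):
    """Angle in degrees between an estimated and a ground-truth RGB
    illuminant — the standard color-constancy metric (illustrative here;
    the paper's customized loss is not detailed in the abstract)."""
    dot = sum(a * b for a, b in zip(est, gt))
    norm = math.sqrt(sum(a * a for a in est)) * math.sqrt(sum(b * b for b in gt))
    # Clamp to guard against floating-point drift outside acos's domain.
    return math.degrees(math.acos(min(1.0, max(-1.0, dot / norm))))

# A 1x2 image of white surfaces lit by two different illuminants:
# a reddish light on the left pixel, a bluish light on the right.
image = [[(0.8, 0.4, 0.4), (0.3, 0.3, 0.6)]]
illum = [[(0.8, 0.4, 0.4), (0.3, 0.3, 0.6)]]
print(correct_pixelwise(image, illum))  # both pixels -> (1.0, 1.0, 1.0)
print(angular_error_deg((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # 90.0
```

Unlike a single global white-balance gain, the per-pixel scaling recovers both surfaces, which is the motivation for estimating illumination pixel-wise when a scene contains multiple dominant illuminants.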