{"title":"高动态范围图像重建的双流全局引导学习","authors":"Junjie Lian, Yongfang Wang, Chuang Wang","doi":"10.1109/VCIP47243.2019.8965798","DOIUrl":null,"url":null,"abstract":"High dynamic range (HDR) images capture the luminance information of the real world and have more detailed information than low dynamic range (LDR) images. In this paper, we propose a dual-streams global guided end-to-end learning method to reconstruct HDR image from a single LDR input that combines both global information and local image features. In our framework, global features and local features are separately learned in dual-streams branches. In the reconstructed phase, we use a fusion layer to fuse them so that the global features can guide the local features to better reconstruct the HDR image. Furthermore, we design mixed loss function including multi-scale pixel-wise loss, color similarity loss and gradient loss to jointly train our network. Comparative experiments are carried out with other state-of-the-art methods and our method achieves superior performance.","PeriodicalId":388109,"journal":{"name":"2019 IEEE Visual Communications and Image Processing (VCIP)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Dual-Streams Global Guided Learning for High Dynamic Range Image Reconstruction\",\"authors\":\"Junjie Lian, Yongfang Wang, Chuang Wang\",\"doi\":\"10.1109/VCIP47243.2019.8965798\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"High dynamic range (HDR) images capture the luminance information of the real world and have more detailed information than low dynamic range (LDR) images. In this paper, we propose a dual-streams global guided end-to-end learning method to reconstruct HDR image from a single LDR input that combines both global information and local image features. In our framework, global features and local features are separately learned in dual-streams branches. In the reconstructed phase, we use a fusion layer to fuse them so that the global features can guide the local features to better reconstruct the HDR image. Furthermore, we design mixed loss function including multi-scale pixel-wise loss, color similarity loss and gradient loss to jointly train our network. 
Comparative experiments are carried out with other state-of-the-art methods and our method achieves superior performance.\",\"PeriodicalId\":388109,\"journal\":{\"name\":\"2019 IEEE Visual Communications and Image Processing (VCIP)\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE Visual Communications and Image Processing (VCIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VCIP47243.2019.8965798\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Visual Communications and Image Processing (VCIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VCIP47243.2019.8965798","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
High dynamic range (HDR) images capture the luminance of the real world and contain more detail than low dynamic range (LDR) images. In this paper, we propose a dual-stream, globally guided, end-to-end learning method that reconstructs an HDR image from a single LDR input by combining global information with local image features. In our framework, global features and local features are learned separately in the two branches of the network. In the reconstruction phase, a fusion layer merges them so that the global features can guide the local features toward a better HDR reconstruction. Furthermore, we design a mixed loss function, comprising a multi-scale pixel-wise loss, a color similarity loss, and a gradient loss, to jointly train the network. Comparative experiments against other state-of-the-art methods show that our approach achieves superior performance.
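The abstract names two concrete components: a dual-stream network in which a fusion layer lets globally pooled features guide spatially local features, and a mixed training loss combining multi-scale pixel-wise, color similarity, and gradient terms. The sketch below is a minimal PyTorch illustration of that structure only, not the authors' implementation: the layer widths and depths, the concatenation-based fusion, the specific loss formulations, the weights w_pix, w_color, and w_grad, and the names DualStreamHDRNet and mixed_loss are all assumptions introduced for illustration.

# Minimal sketch (assumptions throughout), not the paper's architecture or loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualStreamHDRNet(nn.Module):
    def __init__(self, feat_ch: int = 64):
        super().__init__()
        # Local branch: fully convolutional, keeps spatial resolution.
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Global branch: strided convolutions plus global pooling -> one feature vector per image.
        self.global_branch = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fusion layer: concatenate the broadcast global vector with the local feature map.
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * feat_ch, feat_ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 3, 3, padding=1),
        )

    def forward(self, ldr: torch.Tensor) -> torch.Tensor:
        local_feat = self.local_branch(ldr)             # B x C x H x W
        global_feat = self.global_branch(ldr)           # B x C x 1 x 1
        global_map = global_feat.expand_as(local_feat)  # broadcast over H and W
        fused = torch.cat([local_feat, global_map], dim=1)
        return self.fusion(fused)                       # predicted HDR image


def mixed_loss(pred, target, w_pix=1.0, w_color=0.1, w_grad=0.1):
    """Mixed loss: multi-scale pixel-wise + color similarity + gradient terms.
    The individual formulations and weights are assumptions, not the paper's."""
    # Multi-scale pixel-wise loss: L1 at the full resolution and two downsampled scales.
    pix = 0.0
    for scale in (1, 2, 4):
        p = F.avg_pool2d(pred, scale) if scale > 1 else pred
        t = F.avg_pool2d(target, scale) if scale > 1 else target
        pix = pix + F.l1_loss(p, t)
    # Color similarity loss: one minus the cosine similarity of per-pixel RGB vectors.
    color = (1.0 - F.cosine_similarity(pred, target, dim=1)).mean()
    # Gradient loss: L1 on horizontal and vertical finite differences.
    grad = (F.l1_loss(pred[..., :, 1:] - pred[..., :, :-1],
                      target[..., :, 1:] - target[..., :, :-1]) +
            F.l1_loss(pred[..., 1:, :] - pred[..., :-1, :],
                      target[..., 1:, :] - target[..., :-1, :]))
    return w_pix * pix + w_color * color + w_grad * grad

Under these assumptions, a training step would compute pred = DualStreamHDRNet()(ldr_batch) and backpropagate mixed_loss(pred, hdr_batch); the broadcast-and-concatenate fusion is one simple way for a global descriptor to guide per-pixel reconstruction, the role the abstract assigns to the fusion layer.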