A Novel Attention-guided Network for Deep High Dynamic Range Imaging
Qinghan Jiang, Ying Huang, Su Liu, Zequan Wang, Tangsheng Li
2021 3rd International Conference on Advances in Computer Technology, Information Science and Communication (CTISC), April 2021
DOI: 10.1109/CTISC52352.2021.00069
In multi-exposure image fusion (MEF) for natural scenes, high dynamic range (HDR) imaging is often affected by moving objects or misalignments in the scene, which produce ghosting artifacts in the final results even when optical-flow methods and deep network architectures are used. To better avoid ghosting artifacts, we propose a novel attention-guided neural network (ADeepHDR) to produce high-quality ghost-free HDR images. Unlike previous methods, we use an attention module to guide the image-merging process. The attention module can detect large motions and the salient parts of the different input features and enhance details in the results. Building on the attention module, we also explore different subnetwork variants to make full use of hierarchical features and obtain better results. In addition, fractional-order differential convolution is used in the subnetwork variants to extract more detailed features. The proposed ADeepHDR does not rely on optical flow, so it better avoids the ghosting artifacts caused by erroneous optical flow estimation and large motions. Extensive quantitative and qualitative evaluations show that the proposed method is superior to most state-of-the-art approaches.
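The abstract does not give architectural details, so the following is a minimal PyTorch sketch of the general attention-guided merging idea it describes: attention maps predicted from non-reference and reference features down-weight regions with large motion before the exposures are fused. Module names, channel counts, and the three-exposure, six-channel input format are assumptions for illustration, not the authors' implementation; the subnetwork variants and the fractional-order differential convolution are omitted.

```python
# Minimal sketch of attention-guided multi-exposure merging (illustrative only;
# names such as AttentionMerge and n_feats are assumptions, not from the paper).
import torch
import torch.nn as nn

class AttentionMerge(nn.Module):
    """Weights non-reference features by attention maps predicted from the
    concatenation of non-reference and reference features, so regions with
    large motion contribute less to the merged result."""
    def __init__(self, n_feats=64):
        super().__init__()
        self.extract = nn.Sequential(      # shared shallow feature extractor
            nn.Conv2d(6, n_feats, 3, padding=1), nn.ReLU(inplace=True))
        self.attention = nn.Sequential(    # predicts per-pixel, per-channel weights in [0, 1]
            nn.Conv2d(2 * n_feats, n_feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1), nn.Sigmoid())
        self.merge = nn.Sequential(        # fuses the attended features into an HDR estimate
            nn.Conv2d(3 * n_feats, n_feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x_low, x_ref, x_high):
        # Each input is assumed to be an LDR exposure concatenated with its
        # HDR-domain (gamma-mapped) version, giving 6 channels per exposure.
        f_low, f_ref, f_high = map(self.extract, (x_low, x_ref, x_high))
        a_low = self.attention(torch.cat([f_low, f_ref], dim=1))
        a_high = self.attention(torch.cat([f_high, f_ref], dim=1))
        fused = torch.cat([f_low * a_low, f_ref, f_high * a_high], dim=1)
        return self.merge(fused)

# Usage: three exposures, each 6-channel, 256x256.
x = [torch.rand(1, 6, 256, 256) for _ in range(3)]
hdr = AttentionMerge()(*x)   # (1, 3, 256, 256) merged HDR estimate
```

Because the attention maps are estimated directly from image features rather than from optical flow, misaligned or moving regions can be suppressed without the errors that flow estimation introduces under large motions, which is the motivation the abstract gives for the flow-free design.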