A grayscale mapping method for infrared images based on generative adversarial networks

Lin Cheng, Wenqing Hong, Xiaodong Wang, Chuanming Liu, Junbo Su, Lan Su, Chen Zhang

Applied Optics and Photonics China, Vol. 3, pp. 129660J - 129660J-15. Published 2023-12-18. DOI: 10.1117/12.3005506
Abstract
The grayscale mapping of infrared images is an important research direction in infrared image visualization. The mapping method directly determines key visualization indicators, such as how well detail and the overall perception of the original infrared image are preserved, and can be considered the foundation for subsequent detail enhancement. Although current mainstream grayscale mapping methods for infrared images achieve good results, there is still room for improvement in preserving image detail and enhancing image contrast. In this paper, we propose a grayscale mapping method for infrared images based on generative adversarial networks. First, our discriminator adopts a global-local structure, which allows the network to consider both global and local losses when computing the loss, effectively improving image quality in local regions of the mapped image. Second, we introduce a perceptual loss into the loss function, which encourages the generated image and the target image to have consistent features. We conducted subjective and objective evaluations of the mapping results of our method and eight mainstream methods; the results show that our method is superior in preserving image detail and enhancing image contrast. A further comparison with a parameter-free tone mapping operator based on a generative adversarial network (TMO-Net) indicates that our method avoids problems such as blurred target edges and artifacts in the mapped images, yielding higher visual quality.
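The generator objective described in the abstract — an adversarial term from a global-local discriminator plus a perceptual term — can be sketched as below. This is a minimal illustration only, not the authors' implementation: the feature extractor is a stand-in for a pretrained network (such as VGG features commonly used for perceptual loss), and the patch scores, non-saturating log form, and weight `lam` are assumptions.

```python
import numpy as np

def perceptual_loss(feat_gen, feat_target):
    # Mean squared distance between feature maps; in practice the
    # features would come from a fixed pretrained network.
    return float(np.mean((feat_gen - feat_target) ** 2))

def generator_loss(d_global_score, d_local_scores, feat_gen, feat_target, lam=10.0):
    # Adversarial term on the whole image (global discriminator head),
    # non-saturating log form; scores are probabilities in (0, 1].
    adv_global = -np.log(d_global_score + 1e-8)
    # Adversarial term averaged over local patches (local head),
    # which penalizes poor quality in local regions of the mapped image.
    adv_local = -np.mean(np.log(np.asarray(d_local_scores) + 1e-8))
    # Perceptual term keeps generated and target features consistent.
    return float(adv_global + adv_local + lam * perceptual_loss(feat_gen, feat_target))
```

With identical generated and target features the perceptual term vanishes, and the remaining loss comes only from the discriminator scores, so confident scores near 1 drive the total toward zero.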