The design of a method for recovering degraded images of atmospheric turbulence based on deep learning
Authors: Xiangxi Li, Haotong Ma, Junqiu Chu
DOI: 10.1117/12.2668458
Journal: Optical Technology, Semiconductor Materials, and Devices
Published: 2023-02-14
Abstract: When imaging long-range targets with ground-based optical systems, atmospheric turbulence causes blurring, jitter, and other degradations in the observed images. Previous research has focused mostly on point targets; the restoration of extended targets remains to be perfected. With the rapid development of deep learning, data-driven neural networks can recover images directly by learning a nonlinear mapping between degraded and original images, so deep learning algorithms avoid the need for wavefront phase detection devices. The neural network reconstructs the original target directly from the turbulence-degraded image, addressing the blurring caused by dynamic turbulence. In this paper, we propose DeblurNet, which employs a global self-attention module. This module improves channel and spatial information extraction, reduces information loss between network layers, and strengthens the global interaction representation, improving the performance of deep neural networks. DeblurNet is used to minimize the effect of turbulence on images and is validated on the NWPU-RESISC45 dataset using two image quality metrics, PSNR and SSIM. The results show that direct deep-learning reconstruction of the original target image achieves good restoration.
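The abstract describes the global self-attention module only at a high level. As an illustration of the general mechanism — a generic scaled dot-product self-attention over flattened spatial positions, which is an assumption and not the paper's actual DeblurNet implementation — such a module can be sketched as:

```python
import numpy as np

def global_self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over all spatial positions.

    x          : (n, c) array - n flattened spatial positions, c channels
    wq, wk, wv : (c, d) projection matrices (learned in a real network)

    Every output position aggregates features from every input position,
    which is the "global interaction" the abstract refers to.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])            # (n, n) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)           # softmax over positions
    return attn @ v                                   # (n, d) attended features

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                      # a 4x4 feature map, 8 channels
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = global_self_attention(x, wq, wk, wv)
print(out.shape)                                      # (16, 8)
```

In a real deblurring network this operation would sit between convolutional layers, with the projection matrices trained end-to-end; the sketch only shows why the receptive field becomes global in a single step.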
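The two evaluation metrics named above are standard. As a minimal sketch (using a single global window for SSIM, whereas reference implementations such as scikit-image slide a Gaussian window over the image), they can be computed as:

```python
import math

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size images (flat lists)."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Simplified SSIM computed over one global window instead of sliding windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

x = [10, 20, 30, 40]                  # toy 2x2 "image", flattened
print(psnr(x, x))                     # inf -- identical images
print(round(ssim_global(x, x), 4))    # 1.0
```

Higher is better for both: a perfect reconstruction gives infinite PSNR and SSIM of 1.0, and turbulence-degraded images score lower on each.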