High-Resolution Refocusing for Defocused ISAR Images by Complex-Valued Pix2pixHD Network

Authors: Haoxuan Yuan, Hongbo Li, Yun Zhang, Yong Wang, Zitao Liu, Chenxi Wei, Chengxin Yao
Journal: IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1-5, 2022
DOI: 10.1109/LGRS.2022.3210036 (https://doi.org/10.1109/LGRS.2022.3210036)
Citations: 3
Abstract
Inverse synthetic aperture radar (ISAR) is an effective method for detecting targets. However, for maneuvering targets, the Doppler frequency induced by an arbitrary scatterer on the target is time-varying, which causes defocus in ISAR images and hinders subsequent recognition. It is hard for traditional methods to refocus all positions on the target well. In recent years, generative adversarial networks (GANs) have achieved great success in image translation. However, current refocusing models ignore the high-order information contained in the relationship between the real and imaginary parts of the data. To this end, an end-to-end refocusing network, named complex-valued pix2pixHD (CVPHD), is proposed to learn the mapping from defocus to focus, taking complex-valued (CV) ISAR images as input. A CV instance normalization layer is applied to mine the deep relationship between the complex parts by calculating their covariance, and to accelerate training. Subsequently, an innovative adaptively weighted loss function is put forward to improve the overall refocusing effect. Finally, the proposed CVPHD is tested on both simulated and real datasets, and both yield well-refocused results. The comparative experiments show that the refocusing error can be reduced by extending the pix2pixHD network to the CV domain, and that CVPHD surpasses other autofocus methods in refocusing effect. The code and dataset are available online (https://github.com/yhx-hit/CVPHD).
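The abstract describes a CV instance normalization layer that whitens the real and imaginary parts jointly using their covariance, rather than normalizing each part independently. The paper's exact layer is not reproduced here, but the idea can be sketched as follows (a minimal NumPy version for a single complex feature map; the closed-form inverse square root of the 2x2 covariance matrix is an assumption based on the standard complex normalization formulation):

```python
import numpy as np

def cv_instance_norm(z, eps=1e-5):
    """Sketch of complex-valued instance normalization.

    Centers the real and imaginary parts of a complex feature map `z`
    and whitens them jointly with the inverse square root of their
    2x2 covariance matrix, removing the correlation between the parts.
    """
    x = z.real - z.real.mean()
    y = z.imag - z.imag.mean()
    # 2x2 covariance of the centered real/imaginary parts
    vxx = (x * x).mean() + eps
    vyy = (y * y).mean() + eps
    vxy = (x * y).mean()
    # closed-form inverse square root of a 2x2 SPD matrix:
    # V^{-1/2} = (V + s*I) / (s * t), s = sqrt(det V), t = sqrt(tr V + 2s)
    s = np.sqrt(vxx * vyy - vxy * vxy)
    t = np.sqrt(vxx + vyy + 2.0 * s)
    inv = 1.0 / (s * t)
    wxx = (vyy + s) * inv
    wyy = (vxx + s) * inv
    wxy = -vxy * inv
    # apply whitening: [x'; y'] = V^{-1/2} [x; y]
    xw = wxx * x + wxy * y
    yw = wxy * x + wyy * y
    return xw + 1j * yw
```

After this transform the real and imaginary channels have zero mean, unit variance, and (approximately) zero cross-covariance, which is the property the abstract attributes to the CV normalization layer. In the actual network this would be applied per channel and per instance, typically with learnable affine parameters afterward.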
Journal Introduction:
IEEE Geoscience and Remote Sensing Letters (GRSL) is a monthly publication for short papers (maximum length 5 pages) addressing new ideas and formative concepts in remote sensing as well as important new and timely results and concepts. Papers should relate to the theory, concepts and techniques of science and engineering as applied to sensing the earth, oceans, atmosphere, and space, and the processing, interpretation, and dissemination of this information. The technical content of papers must be both new and significant. Experimental data must be complete and include sufficient description of experimental apparatus, methods, and relevant experimental conditions. GRSL encourages the incorporation of "extended objects" or "multimedia" such as animations to enhance the shorter papers.