{"title":"Coupled Adversarial Learning for Single Image Super-Resolution","authors":"Chih-Chung Hsu, Kuan Huang","doi":"10.1109/SAM48682.2020.9104288","DOIUrl":null,"url":null,"abstract":"Generative adversarial nets (GAN) have been widely used in several image restoration tasks such as image denoise, enhancement, and super-resolution. The objective functions of an image super-resolution problem based on GANs usually are reconstruction error, semantic feature distance, and GAN loss. In general, semantic feature distance was used to measure the feature similarity between the super-resolved and ground-truth images, to ensure they have similar feature representations. However, the feature is usually extracted by the pre-trained model, in which the feature representation is not designed for distinguishing the extracted features from low-resolution and high-resolution images. In this study, a coupled adversarial net (CAN) based on Siamese Network Structure is proposed, to improve the effectiveness of the feature extraction. In the proposed CAN, we offer GAN loss and semantic feature distances simultaneously, reducing the training complexity as well as improving the performance. 
Extensive experiments conducted that the proposed CAN is effective and efficient, compared to state-of-the-art image super-resolution schemes.","PeriodicalId":6753,"journal":{"name":"2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop (SAM)","volume":"35 1","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop (SAM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SAM48682.2020.9104288","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Generative adversarial networks (GANs) have been widely used in image restoration tasks such as denoising, enhancement, and super-resolution. The objective function of a GAN-based image super-resolution method typically combines a reconstruction error, a semantic feature distance, and a GAN loss. The semantic feature distance measures the feature similarity between the super-resolved and ground-truth images, encouraging them to share similar feature representations. However, these features are usually extracted by a pre-trained model whose representation was not designed to distinguish the features of low-resolution images from those of high-resolution images. In this study, a coupled adversarial net (CAN) based on a Siamese network structure is proposed to improve the effectiveness of feature extraction. The proposed CAN provides the GAN loss and the semantic feature distance simultaneously, reducing training complexity while improving performance. Extensive experiments demonstrate that the proposed CAN is effective and efficient compared to state-of-the-art image super-resolution schemes.
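To make the composite objective concrete, the sketch below combines the three terms the abstract names: a pixel-wise reconstruction error, a semantic feature distance, and a (non-saturating) GAN loss. This is a minimal illustration under assumed conventions, not the paper's actual implementation: the loss choices (L1 reconstruction, L2 feature distance), the weights `w_rec`, `w_feat`, `w_adv`, and the array shapes are all illustrative assumptions.

```python
import numpy as np

def combined_sr_loss(sr, hr, feat_sr, feat_hr, d_sr,
                     w_rec=1.0, w_feat=0.1, w_adv=1e-3):
    """Illustrative weighted sum of the three objectives the abstract
    lists for GAN-based super-resolution. All weights and loss forms
    are assumptions for the sketch, not the paper's settings.

    sr, hr       : super-resolved and ground-truth images (same shape)
    feat_sr/hr   : feature vectors from a (shared) feature extractor
    d_sr         : discriminator outputs in (0, 1] for the SR images
    """
    # Pixel-wise reconstruction error (L1 here; L2 is also common).
    rec = np.mean(np.abs(sr - hr))
    # Semantic feature distance between SR and ground-truth features.
    feat = np.mean((feat_sr - feat_hr) ** 2)
    # Non-saturating generator GAN loss: -log D(G(x)).
    adv = -np.mean(np.log(d_sr + 1e-8))
    return w_rec * rec + w_feat * feat + w_adv * adv
```

In a Siamese setup such as the proposed CAN, `feat_sr` and `feat_hr` would come from two branches sharing the same weights, so the feature distance is computed in a representation trained jointly with the adversarial objective rather than by a fixed pre-trained model.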