{"title":"基于密集连接的单幅图像超分辨率高级生成对抗网络","authors":"Sheng Chen, Sumei Li, Chengcheng Zhu","doi":"10.1109/NSENS49395.2019.9293953","DOIUrl":null,"url":null,"abstract":"The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating more realistic texture in semantics and style during single image super-resolution. However, Since the loss function adopts L2 norm based on pixel points, the hallucinated details are often accompanied with unpleasant artifacts even false pixels. Our model adjusts generative loss to L1 norm, and perceptual loss is still based on L2 norm. L1 cost function can reduce the coefficients of some features to zero, thus indirectly realizing the selection of features according to the perceptual loss, and obtaining more real texture features. The combination of these two loss functions ensures that the reconstructed results of the model are very close to the target image in terms of spatial features, high-level abstract features and semantic features, overall sensory and image quality. The generating network of our model is based on dense residual structure, and the dense connection of residual-in-residual is used to implement fast and accurate learning of high frequency features of images. The adversarial network is based on the structure of discriminators in DCGAN and WGAN. Experimental results show that subjective quality we reconstructed is much higher than SRGAN.","PeriodicalId":246485,"journal":{"name":"2019 IEEE THE 2nd INTERNATIONAL CONFERENCE ON MICRO/NANO SENSORS for AI, HEALTHCARE, AND ROBOTICS (NSENS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Advanced Generative Adversarial Network Based on Dense Connection For Single Image Super Resolution\",\"authors\":\"Sheng Chen, Sumei Li, Chengcheng Zhu\",\"doi\":\"10.1109/NSENS49395.2019.9293953\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating more realistic texture in semantics and style during single image super-resolution. However, Since the loss function adopts L2 norm based on pixel points, the hallucinated details are often accompanied with unpleasant artifacts even false pixels. Our model adjusts generative loss to L1 norm, and perceptual loss is still based on L2 norm. L1 cost function can reduce the coefficients of some features to zero, thus indirectly realizing the selection of features according to the perceptual loss, and obtaining more real texture features. The combination of these two loss functions ensures that the reconstructed results of the model are very close to the target image in terms of spatial features, high-level abstract features and semantic features, overall sensory and image quality. The generating network of our model is based on dense residual structure, and the dense connection of residual-in-residual is used to implement fast and accurate learning of high frequency features of images. The adversarial network is based on the structure of discriminators in DCGAN and WGAN. 
Experimental results show that subjective quality we reconstructed is much higher than SRGAN.\",\"PeriodicalId\":246485,\"journal\":{\"name\":\"2019 IEEE THE 2nd INTERNATIONAL CONFERENCE ON MICRO/NANO SENSORS for AI, HEALTHCARE, AND ROBOTICS (NSENS)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE THE 2nd INTERNATIONAL CONFERENCE ON MICRO/NANO SENSORS for AI, HEALTHCARE, AND ROBOTICS (NSENS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NSENS49395.2019.9293953\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE THE 2nd INTERNATIONAL CONFERENCE ON MICRO/NANO SENSORS for AI, HEALTHCARE, AND ROBOTICS (NSENS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NSENS49395.2019.9293953","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Advanced Generative Adversarial Network Based on Dense Connection For Single Image Super Resolution
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work capable of generating more realistic texture, in both semantics and style, during single image super-resolution. However, because its loss function adopts a pixel-wise L2 norm, the hallucinated details are often accompanied by unpleasant artifacts and even false pixels. Our model changes the generative loss to the L1 norm, while the perceptual loss remains based on the L2 norm. The L1 cost function can drive the coefficients of some features to zero, thereby indirectly selecting features according to the perceptual loss and yielding more realistic texture. The combination of these two loss functions ensures that the model's reconstructions stay close to the target image in spatial features, high-level abstract and semantic features, overall perception, and image quality. The generator of our model is built on a dense residual structure, and dense residual-in-residual connections enable fast and accurate learning of the high-frequency features of images. The adversarial network follows the discriminator structures of DCGAN and WGAN. Experimental results show that the subjective quality of our reconstructions is substantially higher than that of SRGAN.
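The loss design described in the abstract can be made concrete. Below is a minimal PyTorch sketch of an L1 pixel-wise generative loss combined with an L2 (MSE) perceptual loss computed on deep features. The choice of VGG19 as the feature extractor, the layer cutoff, and the loss weights are assumptions following SRGAN's convention; the abstract does not name the feature network.

import torch
import torch.nn as nn
from torchvision.models import vgg19

class CombinedSRLoss(nn.Module):
    """L1 pixel loss + L2 perceptual loss on deep features (a sketch)."""
    def __init__(self, pixel_weight=1.0, perceptual_weight=1.0):
        super().__init__()
        # Frozen VGG19 features up to an intermediate conv layer.
        # The cutoff index (and VGG19 itself) is an assumption borrowed
        # from SRGAN, not a detail confirmed by this paper.
        vgg = vgg19(pretrained=True).features[:35].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.feature_extractor = vgg
        self.pixel_loss = nn.L1Loss()        # L1 norm on pixels
        self.perceptual_loss = nn.MSELoss()  # L2 norm on features
        self.pixel_weight = pixel_weight
        self.perceptual_weight = perceptual_weight

    def forward(self, sr, hr):
        # sr: super-resolved output, hr: ground-truth high-resolution image.
        # (VGG input normalization is omitted here for brevity.)
        l_pix = self.pixel_loss(sr, hr)
        l_per = self.perceptual_loss(self.feature_extractor(sr),
                                     self.feature_extractor(hr))
        return self.pixel_weight * l_pix + self.perceptual_weight * l_per

In training, this reconstruction loss would typically be summed with an adversarial term supplied by the discriminator.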
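Likewise, the "dense connection of residual-in-residual" in the generator can be sketched as a residual-in-residual dense block. The channel count, growth rate, block depth, and 0.2 residual scaling below are assumptions borrowed from the ESRGAN-style literature, not specifics from this paper.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Densely connected convolutions with an inner residual connection."""
    def __init__(self, channels=64, growth=32):
        super().__init__()
        # Each conv sees the concatenation of all earlier feature maps.
        self.convs = nn.ModuleList([
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(4)
        ])
        self.fuse = nn.Conv2d(channels + 4 * growth, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        # Inner residual connection with scaling (0.2 is an assumption).
        return x + 0.2 * self.fuse(torch.cat(feats, dim=1))

class RRDB(nn.Module):
    """Residual-in-residual: dense blocks wrapped in an outer residual."""
    def __init__(self, channels=64, num_blocks=3):
        super().__init__()
        self.blocks = nn.Sequential(
            *[DenseBlock(channels) for _ in range(num_blocks)])

    def forward(self, x):
        return x + 0.2 * self.blocks(x)

Stacking several such blocks lets the generator propagate low-frequency content through the residual paths while the dense connections concentrate capacity on the high-frequency details the abstract emphasizes.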