{"title":"尺度递归生成网络在图像绘制中的研究","authors":"Ziyi Zhang, Dong Lyu, Wei Xu","doi":"10.1109/ASICON52560.2021.9620457","DOIUrl":null,"url":null,"abstract":"Existing learning-based inpainting methods have recently reached notable success in filling irregular holes. However, the quantity of network parameters in these methods also grows rapidly, thus making them difficult for training and deployment on resource-limited platforms. In this paper, we propose a Scale Recurrent Generative Network (SRGN), in which a new scale recurrent structure is raised and deployed on top of the general learning-based inpainting methods. The scale recurrent procedure stores the context information in different scales to achieve better memorability while keeping the network parameters in the same order of magnitude. To add the iterations on scale dimension, we combine max pooling and average pooling in the downsampling procedure and introduce scale factor in the loss function. The qualitative and quantitative comparisons on the Places2 dataset show that the texture and detail of our generated image are significantly improved in comparison with peer works.","PeriodicalId":233584,"journal":{"name":"2021 IEEE 14th International Conference on ASIC (ASICON)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Research of Scale Recurrent Generative Network on Image Inpainting\",\"authors\":\"Ziyi Zhang, Dong Lyu, Wei Xu\",\"doi\":\"10.1109/ASICON52560.2021.9620457\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Existing learning-based inpainting methods have recently reached notable success in filling irregular holes. However, the quantity of network parameters in these methods also grows rapidly, thus making them difficult for training and deployment on resource-limited platforms. In this paper, we propose a Scale Recurrent Generative Network (SRGN), in which a new scale recurrent structure is raised and deployed on top of the general learning-based inpainting methods. The scale recurrent procedure stores the context information in different scales to achieve better memorability while keeping the network parameters in the same order of magnitude. To add the iterations on scale dimension, we combine max pooling and average pooling in the downsampling procedure and introduce scale factor in the loss function. 
The qualitative and quantitative comparisons on the Places2 dataset show that the texture and detail of our generated image are significantly improved in comparison with peer works.\",\"PeriodicalId\":233584,\"journal\":{\"name\":\"2021 IEEE 14th International Conference on ASIC (ASICON)\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 14th International Conference on ASIC (ASICON)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASICON52560.2021.9620457\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 14th International Conference on ASIC (ASICON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASICON52560.2021.9620457","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Research of Scale Recurrent Generative Network on Image Inpainting
Existing learning-based inpainting methods have recently achieved notable success in filling irregular holes. However, the number of network parameters in these methods also grows rapidly, making them difficult to train and deploy on resource-limited platforms. In this paper, we propose a Scale Recurrent Generative Network (SRGN), in which a new scale recurrent structure is introduced and deployed on top of general learning-based inpainting methods. The scale recurrent procedure stores context information at different scales to achieve better memorability while keeping the network parameters in the same order of magnitude. To add iterations along the scale dimension, we combine max pooling and average pooling in the downsampling procedure and introduce a scale factor into the loss function. Qualitative and quantitative comparisons on the Places2 dataset show that the texture and detail of our generated images are significantly improved compared with peer works.
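The abstract does not give implementation details, so the following is only a minimal sketch of the two ideas it mentions: mixing max pooling with average pooling during downsampling, and weighting a multi-scale reconstruction loss by a scale factor. It is written in PyTorch; the module and function names, the equal 0.5/0.5 mixing weights, the L1 loss, and the geometric per-scale weighting are all assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of the two techniques named in the abstract:
# (1) downsampling that combines max pooling and average pooling, and
# (2) a multi-scale reconstruction loss weighted by a scale factor.
# All names, weights, and loss choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedPoolDownsample(nn.Module):
    """Downsample by 2x using an equal mix of max and average pooling (assumed 0.5/0.5 weights)."""

    def __init__(self):
        super().__init__()
        self.max_pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        # Max pooling preserves strong edges/textures; average pooling preserves smooth context.
        return 0.5 * self.max_pool(x) + 0.5 * self.avg_pool(x)


def multiscale_l1_loss(preds, target, scale_factor=0.5):
    """L1 loss summed over scales; scale k is weighted by scale_factor ** k (assumed form).

    preds[0] is the full-resolution output, preds[k] is the output at 1 / 2**k resolution.
    """
    loss = 0.0
    for k, pred in enumerate(preds):
        # Resize the ground truth to match the prediction at this scale.
        target_k = F.interpolate(target, size=pred.shape[-2:],
                                 mode="bilinear", align_corners=False)
        loss = loss + (scale_factor ** k) * F.l1_loss(pred, target_k)
    return loss


if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)
    down = MixedPoolDownsample()
    half = down(x)        # (1, 3, 128, 128)
    quarter = down(half)  # (1, 3, 64, 64)
    print(half.shape, quarter.shape)
    print(multiscale_l1_loss([x, half, quarter], x).item())
```

The mixed pooling keeps both sharp responses and smoothed context at each coarser scale, and the decaying per-scale weight lets the finest scale dominate the loss; whether SRGN uses exactly these weights or this loss form is not specified in the abstract.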