{"title":"MIC-GAN:使用条件生成对抗网络的多视图辅助图像补全","authors":"Gagan Kanojia, S. Raman","doi":"10.1109/NCC48643.2020.9056062","DOIUrl":null,"url":null,"abstract":"Consider a set of images of a scene captured from multiple views with some missing regions in each image. In this work, we propose a convolutional neural network (CNN) architecture which fills the missing regions in one image using the information present in the remaining images. The network takes the set of images and their corresponding binary maps as inputs and generates an image with the completed missing regions. The binary map indicates the missing regions present in the corresponding image. The network is trained using an adversarial approach and is observed to generate sharp output images qualitatively. We evaluate the performance of the proposed approach on the dataset extracted from the standard dataset, MVS-Synth.","PeriodicalId":183772,"journal":{"name":"2020 National Conference on Communications (NCC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"MIC-GAN: Multi-view Assisted Image Completion Using Conditional Generative Adversarial Networks\",\"authors\":\"Gagan Kanojia, S. Raman\",\"doi\":\"10.1109/NCC48643.2020.9056062\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Consider a set of images of a scene captured from multiple views with some missing regions in each image. In this work, we propose a convolutional neural network (CNN) architecture which fills the missing regions in one image using the information present in the remaining images. The network takes the set of images and their corresponding binary maps as inputs and generates an image with the completed missing regions. The binary map indicates the missing regions present in the corresponding image. The network is trained using an adversarial approach and is observed to generate sharp output images qualitatively. We evaluate the performance of the proposed approach on the dataset extracted from the standard dataset, MVS-Synth.\",\"PeriodicalId\":183772,\"journal\":{\"name\":\"2020 National Conference on Communications (NCC)\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 National Conference on Communications (NCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NCC48643.2020.9056062\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 National Conference on Communications (NCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCC48643.2020.9056062","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MIC-GAN: Multi-view Assisted Image Completion Using Conditional Generative Adversarial Networks
Consider a set of images of a scene captured from multiple views, each with some missing regions. In this work, we propose a convolutional neural network (CNN) architecture that fills the missing regions in one image using the information present in the remaining images. The network takes the set of images and their corresponding binary maps as inputs and generates an image with the missing regions completed. The binary map indicates the missing regions in the corresponding image. The network is trained using an adversarial approach and is qualitatively observed to generate sharp output images. We evaluate the performance of the proposed approach on a dataset extracted from the standard MVS-Synth dataset.
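The abstract describes only the interface of the model: multiple views plus their binary missing-region maps go in, a completed target view comes out, and training is adversarial. The following is a minimal PyTorch sketch of one plausible realization of that setup. The number of views, the channel-wise concatenation of images and masks, the PatchGAN-style discriminator, and the L1 + adversarial loss are illustrative assumptions, not the actual MIC-GAN architecture or losses from the paper.

```python
# Hypothetical sketch of a multi-view conditional completion GAN.
# Architecture details and loss weights are assumptions for illustration only.
import torch
import torch.nn as nn

NUM_VIEWS = 3  # assumed number of input views


class Generator(nn.Module):
    """Takes all views and their binary maps (1 = missing) stacked along the
    channel axis and predicts the completed target view."""
    def __init__(self, num_views=NUM_VIEWS):
        super().__init__()
        in_ch = num_views * (3 + 1)  # RGB + binary map per view
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, views, masks):
        # views: (B, V, 3, H, W), masks: (B, V, 1, H, W)
        b, v, _, h, w = views.shape
        x = torch.cat([views, masks], dim=2).reshape(b, -1, h, w)
        return self.net(x)


class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the target view's binary map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, img, mask):
        return self.net(torch.cat([img, mask], dim=1))


# One adversarial training step (GAN loss + L1 reconstruction) on dummy data.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

views = torch.rand(2, NUM_VIEWS, 3, 64, 64)               # dummy multi-view batch
masks = (torch.rand(2, NUM_VIEWS, 1, 64, 64) > 0.9).float()
target = torch.rand(2, 3, 64, 64)                          # ground-truth target view

fake = G(views, masks)

# Discriminator update: real target vs. generated completion.
d_real = D(target, masks[:, 0])
d_fake = D(fake.detach(), masks[:, 0])
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: fool the discriminator and reconstruct the target view.
d_fake = D(fake, masks[:, 0])
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 10.0 * nn.functional.l1_loss(fake, target)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In this sketch the remaining views supply the missing content implicitly through channel-wise concatenation; the paper may instead use an explicit multi-view fusion or warping scheme, which the abstract does not specify.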