Semi-paired Image-to-Image Translation using Neighbor-based Generative Adversarial Networks

Le Xu, Weiling Cai, Honghan Zhou
DOI: 10.1109/IJCNN52387.2021.9534353
Published in: 2021 International Joint Conference on Neural Networks (IJCNN)
Publication date: 2021-07-18
Citations: 1

Abstract

Image-to-image translation aims to learn the mapping between an input image and an output image from a training set of aligned image pairs. In practice, obtaining paired images is difficult and expensive; data often exists in partially paired form, where a small number of images are paired and the majority are not. In this paper, we present a semi-paired image-to-image translation approach using neighbor-based generative adversarial networks. Our goal is to remove the restriction that training images must be paired while still guaranteeing the quality of the translation. For unpaired images, we introduce an inverse mapping and a cycle-consistency loss to enforce image reconstruction; for paired images, we make full use of the strong one-to-one correlation to guide the translation. To take further advantage of the paired images, our approach employs neighbor images to expand the paired information and establishes a neighbor-based cycle consistency. Our method is flexible and adaptable across a variety of scenarios, such as target deformation and day-night transformation. Experimental results demonstrate the superiority of our method over previous approaches.
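The abstract describes three supervision signals: a cycle-consistency term for unpaired images (via the inverse mapping), a direct reconstruction term for paired images, and a neighbor-based term that extends paired supervision to nearby unpaired images. A minimal sketch of how these objectives might be combined, using toy flat-list "images" and stand-in generators (all function names, the loss weighting, and the nearest-neighbor selection rule are hypothetical illustrations; the paper's actual networks and neighbor scheme are not reproduced here):

```python
# Sketch of a semi-paired training objective. G: X -> Y and F: Y -> X are
# stand-ins for the two generator networks; "images" are flat lists of floats.

def l1(a, b):
    # Mean absolute error between two equally sized images.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, x, y):
    # Unpaired images: F(G(x)) should reconstruct x, and G(F(y)) should
    # reconstruct y (the inverse-mapping constraint from the abstract).
    return l1(F(G(x)), x) + l1(G(F(y)), y)

def paired_loss(G, x, y_target):
    # Paired images: exploit the one-to-one correspondence directly.
    return l1(G(x), y_target)

def neighbor_loss(G, x, paired):
    # Hypothetical neighbor-based term: pull G(x) toward the target of the
    # nearest paired source image, expanding the paired supervision.
    x_n, y_n = min(paired, key=lambda p: l1(p[0], x))
    return l1(G(x), y_n)

def semi_paired_loss(G, F, paired, unpaired, lam=10.0):
    # Weighted sum over the paired and unpaired subsets of a mini-batch
    # (the weight lam is an assumed hyperparameter, not from the paper).
    loss = sum(paired_loss(G, x, y) for x, y in paired)
    loss += lam * sum(cycle_consistency_loss(G, F, x, y) for x, y in unpaired)
    return loss
```

In a real implementation these terms would be added to the usual adversarial losses of the two discriminators; the sketch only shows how paired and unpaired samples can contribute different terms within one objective.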