Rdfinet: reference-guided directional diverse face inpainting network

IF 5.0 · CAS Zone 2 (Computer Science) · JCR Q1 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Qingyang Chen, Zhengping Qiang, Yue Zhao, Hong Lin, Libo He, Fei Dai
{"title":"Rdfinet: reference-guided directional diverse face inpainting network","authors":"Qingyang Chen, Zhengping Qiang, Yue Zhao, Hong Lin, Libo He, Fei Dai","doi":"10.1007/s40747-024-01543-8","DOIUrl":null,"url":null,"abstract":"<p>The majority of existing face inpainting methods primarily focus on generating a single result that visually resembles the original image. The generation of diverse and plausible results has emerged as a new branch in image restoration, often referred to as “Pluralistic Image Completion”. However, most diversity methods simply use random latent vectors to generate multiple results, leading to uncontrollable outcomes. To overcome these limitations, we introduce a novel architecture known as the Reference-Guided Directional Diverse Face Inpainting Network. In this paper, instead of using a background image as reference, which is typically used in image restoration, we have used a face image, which can have many different characteristics from the original image, including but not limited to gender and age, to serve as a reference face style. Our network firstly infers the semantic information of the masked face, i.e., the face parsing map, based on the partial image and its mask, which subsequently guides and constrains directional diverse generator network. The network will learn the distribution of face images from different domains in a low-dimensional manifold space. To validate our method, we conducted extensive experiments on the CelebAMask-HQ dataset. Our method not only produces high-quality oriented diverse results but also complements the images with the style of the reference face image. Additionally, our diverse results maintain correct facial feature distribution and sizes, rather than being random. Our network has achieved SOTA results in face diverse inpainting when writing. Code will is available at https://github.com/nothingwithyou/RDFINet.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01543-8","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

The majority of existing face inpainting methods focus on generating a single result that visually resembles the original image. The generation of diverse and plausible results has emerged as a new branch of image restoration, often referred to as "Pluralistic Image Completion". However, most diversity methods simply draw random latent vectors to generate multiple results, leading to uncontrollable outcomes. To overcome these limitations, we introduce a novel architecture known as the Reference-Guided Directional Diverse Face Inpainting Network. In this paper, instead of using a background image as a reference, as is typical in image restoration, we use a face image, which can differ from the original image in many characteristics, including but not limited to gender and age, to serve as a reference face style. Our network first infers the semantic information of the masked face, i.e., the face parsing map, from the partial image and its mask; the parsing map subsequently guides and constrains the directional diverse generator network. The network learns the distribution of face images from different domains in a low-dimensional manifold space. To validate our method, we conducted extensive experiments on the CelebAMask-HQ dataset. Our method not only produces high-quality, directionally diverse results but also completes the images with the style of the reference face image. Additionally, our diverse results maintain correct facial feature distribution and sizes, rather than being random. Our network achieved state-of-the-art results in diverse face inpainting at the time of writing. Code is available at https://github.com/nothingwithyou/RDFINet.
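To make the two-stage pipeline described in the abstract concrete, the sketch below mocks it up in PyTorch: a parsing network infers a face parsing map from the partial image and its mask, and a generator completes the face conditioned on that map plus a reference face. All module names and layer choices here are illustrative placeholders, not the authors' architecture; the 19-class parsing convention follows CelebAMask-HQ's label set. See the GitHub link above for the actual implementation.

```python
# Illustrative sketch of the two-stage pipeline described in the abstract.
# Module internals are placeholders (simple conv stacks), NOT the authors'
# architecture; see https://github.com/nothingwithyou/RDFINet for the real code.
import torch
import torch.nn as nn

NUM_PARSING_CLASSES = 19  # CelebAMask-HQ defines 19 face-parsing classes

class ParsingNet(nn.Module):
    """Stage 1: infer a face parsing map from the masked image and its mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, NUM_PARSING_CLASSES, 3, padding=1),
        )

    def forward(self, masked_img, mask):
        # per-pixel class logits over the parsing classes
        return self.net(torch.cat([masked_img, mask], dim=1))

class DirectionalGenerator(nn.Module):
    """Stage 2: complete the face, guided by the parsing map and a reference face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1 + NUM_PARSING_CLASSES + 3, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masked_img, mask, parsing_logits, reference_face):
        parsing = parsing_logits.softmax(dim=1)  # soft parsing map as guidance
        x = torch.cat([masked_img, mask, parsing, reference_face], dim=1)
        return self.net(x)

# Usage: different reference faces steer the completion in different directions.
masked_img = torch.rand(1, 3, 256, 256)      # image with the hole zeroed out
mask = torch.ones(1, 1, 256, 256)            # 1 = missing region
reference_face = torch.rand(1, 3, 256, 256)  # style source (gender, age, ...)

parsing_logits = ParsingNet()(masked_img, mask)
completed = DirectionalGenerator()(masked_img, mask, parsing_logits, reference_face)
print(completed.shape)  # torch.Size([1, 3, 256, 256])
```

The key design point the abstract emphasizes is visible in stage 2's inputs: because the generator is conditioned on both the inferred parsing map and an explicit reference face, swapping the reference image changes the completion in a controlled direction, rather than sampling an arbitrary latent vector.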


Source Journal

Complex & Intelligent Systems
CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
Journal introduction: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.