Thanh Hien Truong, Tae-Ho Lee, Viduranga Munasinghe, Tae Sung Kim, Jin-Sung Kim, Hyuk-Jae Lee
Inpainting GAN-Based Image Blending with Adaptive Binary Line Mask
Journal of Multimedia Information System, published 2023-09-30
DOI: 10.33851/jmis.2023.10.3.227
Citations: 0
Abstract
Image blending is a scheme for image composition that aims to make the composite image look as natural and realistic as possible. Image blending should ensure that the edges of the object appear seamless and that colors are not distorted. Recently, numerous studies have investigated image blending methods that adopt deep learning-based image processing algorithms and have contributed to generating natural blended images. Although these previous studies show remarkable performance in many cases, they suffer from a quality drop when blending an incompletely cropped object. This is because partial loss and unnecessary extra information in the cropped object image interfere with image blending. This paper proposes a new scheme that significantly reduces unnatural edges and color distortion. First, to detect and handle the incompletely cropped region, an adaptive binary line mask generation method utilizing a color difference checking (CDC) algorithm is proposed. The generated mask is exploited to improve image blending performance by isolating incompletely cropped image edges from the blending process. Second, in order to inpaint the missing or masked area of the object image and perform image blending jointly, an inpainting generative adversarial model is adopted. Experimental results show that the blended images are not only more natural than those of previous works but also preserve color information well.
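The abstract does not specify the exact CDC formulation, but the core idea of a color-difference-based binary mask can be sketched as follows. This is a minimal illustration, assuming a per-pixel Euclidean distance in RGB space and a hypothetical threshold parameter; the paper's actual adaptive algorithm may differ.

```python
import numpy as np

def color_difference_mask(object_rgb: np.ndarray,
                          background_rgb: np.ndarray,
                          threshold: float = 30.0) -> np.ndarray:
    """Hypothetical sketch of a color-difference check (CDC-style).

    Marks pixels whose color differs strongly from the background,
    yielding a binary mask that could isolate incompletely cropped
    edges from the blending step. `threshold` is an assumed parameter,
    not taken from the paper.
    """
    # Per-pixel Euclidean color distance over the RGB channels.
    diff = np.linalg.norm(
        object_rgb.astype(np.float64) - background_rgb.astype(np.float64),
        axis=-1,
    )
    # Binarize: 1 where the object clearly differs from the background.
    return (diff > threshold).astype(np.uint8)
```

In a full pipeline, the resulting mask would then be handed to an inpainting model so the masked edge region is regenerated rather than blended directly.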