BGMaker: example-based anime background image creation from a photograph
Shugo Yamaguchi, Chie Furusawa, Takuya Kato, Tsukasa Fukusato, S. Morishima
ACM SIGGRAPH 2015 Posters, July 2015. DOI: 10.1145/2787626.2787646
Abstract
Anime designers often paint real-world scenery, based on photographs, to serve as background images that complement the characters. Because painting background scenery is time-consuming and costly, there is strong demand for techniques that convert photographs into anime-styled graphics. Previous approaches for this purpose, such as Image Quilting [Efros and Freeman 2001], transfer a source texture onto a target photograph. These methods synthesize source patches that correspond to the target elements in a photograph, with correspondence found through nearest-neighbor search such as PatchMatch [Barnes et al. 2009]. However, the nearest-neighbor patch is not always the most suitable patch for anime transfer, because photographs and anime background images differ in color and texture. For example, real-world colors need to be converted into specific colors for anime; furthermore, the type of brushwork required to realize an anime effect differs among photograph elements (e.g., sky, mountain, grass). Thus, to obtain the most suitable patch, we propose a method that establishes global region correspondence before local patch matching. Our proposed method, BGMaker, (1) divides the real photograph and the anime exemplar into regions; (2) automatically acquires correspondences between regions on the basis of color and texture features; and (3) searches for and synthesizes the most suitable patch within the corresponding region. Our primary contribution is a method for automatically acquiring correspondences between target regions and source regions that differ in color and texture, which allows us to generate an anime background image while preserving the details of the source image.
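The abstract does not include an implementation, but the region-correspondence step (2) can be illustrated with a minimal sketch. It assumes precomputed region label maps and uses mean color plus a simple intensity-variance texture statistic as the per-region feature; all function names, variable names, and feature choices below are hypothetical, since the paper only states that color and texture features are used.

    import numpy as np

    def region_features(image, labels, region_id):
        # Mean color (3-D) and a simple texture statistic (intensity variance, 1-D)
        # for one segmented region; both are assumed stand-ins for the paper's features.
        mask = labels == region_id
        pixels = image[mask].astype(np.float64)        # (N, 3) RGB values in the region
        mean_color = pixels.mean(axis=0)
        gray = pixels.mean(axis=1)
        texture = np.array([gray.var()])
        return np.concatenate([mean_color, texture])

    def match_regions(photo, photo_labels, anime, anime_labels):
        # Step (2): for each region of the target photograph, pick the anime (source)
        # region whose color/texture feature vector is nearest in Euclidean distance.
        anime_ids = np.unique(anime_labels)
        anime_feats = np.stack([region_features(anime, anime_labels, i) for i in anime_ids])
        correspondence = {}
        for pid in np.unique(photo_labels):
            f = region_features(photo, photo_labels, pid)
            dists = np.linalg.norm(anime_feats - f, axis=1)
            correspondence[pid] = anime_ids[int(np.argmin(dists))]
        return correspondence

Under this reading, step (3) would restrict a PatchMatch-style nearest-neighbor patch search to the matched source region rather than the whole anime exemplar, which is what keeps the synthesized brushwork consistent with each photograph element.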