Blending texture features from multiple reference images for style transfer
Hikaru Ikuta, Keisuke Ogaki, Yuri Odagiri
SIGGRAPH ASIA 2016 Technical Briefs, published 2016-11-28
DOI: 10.1145/3005358.3005388 (https://doi.org/10.1145/3005358.3005388)
Citations: 9
Abstract
We present an algorithm that learns a desired style of artwork from a collection of images and transfers this style to an arbitrary image. Our method is based on the observation that the style of an artwork is characterized not by the features of a single work, but by the features that commonly appear across a collection of works. Learning such a representation of style requires a sufficiently large dataset of images created in the same style. We present a novel illustration dataset of 500,000 images, mainly digital paintings, annotated with rich metadata such as tags and comments. We construct a feature space from statistical properties of CNN feature responses and represent the style as a closed region within that feature space. Our experimental results show that this closed region can synthesize an appropriate texture belonging to the desired style, and that the synthesized texture can be transferred to a given input image.
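As a rough illustration of the kind of feature statistics the abstract refers to, the sketch below computes Gram matrices of CNN feature responses (the second-order statistics popularized by Gatys-style neural style transfer) and averages them over several reference images. This is a minimal sketch under stated assumptions: the function names `gram_matrix` and `blended_style` are hypothetical, the inputs stand in for feature maps extracted by a pretrained CNN, and simple averaging is only a crude stand-in for the paper's closed-region representation of a style.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Second-order statistics of one CNN layer's responses.

    features: array of shape (C, H, W) -- channels x spatial dims,
    e.g. the output of one convolutional layer for one image.
    Returns a (C, C) Gram matrix, normalized by spatial size.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (h * w)

def blended_style(reference_features: list[np.ndarray]) -> np.ndarray:
    """Blend style statistics from multiple reference images.

    Averaging the per-image Gram matrices approximates a style shared
    by the whole collection rather than by any single work -- a
    simplification of the closed-region idea described above.
    """
    grams = [gram_matrix(f) for f in reference_features]
    return np.mean(grams, axis=0)
```

A style-transfer optimization would then penalize the distance between the Gram matrix of the output image's features and this blended target, instead of matching one reference exactly.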