{"title":"Image upsampling via texture hallucination","authors":"Yoav HaCohen, Raanan Fattal, Dani Lischinski","doi":"10.1109/ICCPHOT.2010.5585097","DOIUrl":null,"url":null,"abstract":"Image upsampling is a common yet challenging task, since it is severely underconstrained. While considerable progress was made in preserving the sharpness of salient edges, current methods fail to reproduce the fine detail typically present in the textured regions bounded by these edges, resulting in unrealistic appearance. In this paper we address this fundamental shortcoming by integrating higher-level image analysis and custom low-level image synthesis. Our approach extends and refines the patch-based image model of Freeman et al. [10] and interprets the image as a tiling of distinct textures, each of which is matched to an example in a database of relevant textures. The matching is not done at the patch level, but rather collectively, over entire segments. Following this model fitting stage, which requires some user guidance, a higher-resolution image is synthesized using a hybrid approach that incorporates principles from example-based texture synthesis. We show that for images that comply with our model, our method is able to reintroduce consistent fine-scale detail, resulting in enhanced appearance textured regions.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"80","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE International Conference on Computational Photography (ICCP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCPHOT.2010.5585097","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 80
Abstract
Image upsampling is a common yet challenging task, since it is severely underconstrained. While considerable progress has been made in preserving the sharpness of salient edges, current methods fail to reproduce the fine detail typically present in the textured regions bounded by these edges, resulting in an unrealistic appearance. In this paper we address this fundamental shortcoming by integrating higher-level image analysis and custom low-level image synthesis. Our approach extends and refines the patch-based image model of Freeman et al. [10] and interprets the image as a tiling of distinct textures, each of which is matched to an example in a database of relevant textures. The matching is not done at the patch level, but rather collectively, over entire segments. Following this model fitting stage, which requires some user guidance, a higher-resolution image is synthesized using a hybrid approach that incorporates principles from example-based texture synthesis. We show that for images that comply with our model, our method is able to reintroduce consistent fine-scale detail, resulting in an enhanced appearance of textured regions.
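To make the pipeline described in the abstract concrete, here is a minimal sketch of segment-level texture hallucination: upsample a low-resolution image, match each segment as a whole to the closest exemplar texture, and transfer that exemplar's high-frequency band into the segment. This is not the authors' implementation; the segmentation labels, the texture descriptor, the exemplar database layout, and the detail-transfer step are all simplifying assumptions made here for illustration.

```python
# Illustrative sketch only -- NOT the method of HaCohen et al.
import numpy as np


def bilinear_upsample(img, factor):
    """Naive bilinear upsampling of a 2-D grayscale image in [0, 1]."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy


def texture_descriptor(img):
    """Crude texture statistics (intensity and gradient-magnitude moments),
    standing in for a real texture descriptor."""
    gy, gx = np.gradient(img)
    g = np.hypot(gx, gy)
    return np.array([img.mean(), img.std(), g.mean(), g.std()])


def hallucinate_detail(lowres, segment_labels, exemplars, factor=4):
    """Upsample `lowres`, then add high-frequency detail to each segment from
    the exemplar texture that best matches the segment as a whole (matching is
    per segment, not per patch, echoing the idea in the abstract)."""
    up = bilinear_upsample(lowres, factor)
    out = up.copy()
    labels_up = np.repeat(np.repeat(segment_labels, factor, axis=0), factor, axis=1)

    gy, gx = np.gradient(lowres)
    gmag = np.hypot(gx, gy)

    for seg_id in np.unique(segment_labels):
        m = segment_labels == seg_id
        desc = np.array([lowres[m].mean(), lowres[m].std(),
                         gmag[m].mean(), gmag[m].std()])
        # Collectively match the whole segment to the closest exemplar texture.
        best = min(exemplars,
                   key=lambda ex: np.linalg.norm(texture_descriptor(ex) - desc))
        # High-frequency band of the exemplar: exemplar minus a blurred copy
        # obtained by downsampling and re-upsampling it.
        smooth = bilinear_upsample(best[::factor, ::factor], factor)
        detail = best - smooth[:best.shape[0], :best.shape[1]]
        ys, xs = np.nonzero(labels_up == seg_id)
        out[ys, xs] += detail[ys % detail.shape[0], xs % detail.shape[1]]

    return np.clip(out, 0.0, 1.0)
```

In the paper itself, the segment-to-exemplar matching is guided by some user interaction and the synthesis is a hybrid, example-based procedure constrained by the input; the sketch above only mirrors the coarse structure of that pipeline (segment, match collectively, then hallucinate fine-scale detail).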