{"title":"基于特征仿射变换的图像超分辨率","authors":"Chih-Chung Hsu, Chia-Wen Lin","doi":"10.1109/MMSP.2011.6093845","DOIUrl":null,"url":null,"abstract":"State-of-the-art image super-resolution methods usually rely on search in a comprehensive dataset for appropriate high-resolution patch candidates to achieve good visual quality of reconstructed image. Exploiting different scales and orientations in images can effectively enrich a dataset. A large dataset, however, usually leads to high computational complexity and memory requirement, which makes the implementation impractical. This paper proposes a universal framework for enriching the dataset for search-based super-resolution schemes with reasonable computation and memory cost. Toward this end, the proposed method first extracts important features with multiple scales and orientations of patches based on the SIFT (Scale-invariant feature transform) descriptors and then use the extracted features to search in the dataset for the best-match HR patch(es). Once the matched features of patches are found, the found HR patch will be aligned with LR patch using homography estimation. Experimental results demonstrate that the proposed method achieves significant subjective and objective improvement when integrated with several state-of-the-art image super-resolution methods without significantly increasing the cost.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"116 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Image super-resolution via feature-based affine transform\",\"authors\":\"Chih-Chung Hsu, Chia-Wen Lin\",\"doi\":\"10.1109/MMSP.2011.6093845\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"State-of-the-art image super-resolution methods usually rely on search in a comprehensive dataset for appropriate high-resolution patch candidates to achieve good visual quality of reconstructed image. Exploiting different scales and orientations in images can effectively enrich a dataset. A large dataset, however, usually leads to high computational complexity and memory requirement, which makes the implementation impractical. This paper proposes a universal framework for enriching the dataset for search-based super-resolution schemes with reasonable computation and memory cost. Toward this end, the proposed method first extracts important features with multiple scales and orientations of patches based on the SIFT (Scale-invariant feature transform) descriptors and then use the extracted features to search in the dataset for the best-match HR patch(es). Once the matched features of patches are found, the found HR patch will be aligned with LR patch using homography estimation. 
Experimental results demonstrate that the proposed method achieves significant subjective and objective improvement when integrated with several state-of-the-art image super-resolution methods without significantly increasing the cost.\",\"PeriodicalId\":214459,\"journal\":{\"name\":\"2011 IEEE 13th International Workshop on Multimedia Signal Processing\",\"volume\":\"116 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 IEEE 13th International Workshop on Multimedia Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MMSP.2011.6093845\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MMSP.2011.6093845","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Image super-resolution via feature-based affine transform
State-of-the-art image super-resolution methods usually rely on searching a comprehensive dataset for appropriate high-resolution (HR) patch candidates to achieve good visual quality in the reconstructed image. Exploiting different scales and orientations in images can effectively enrich such a dataset. A large dataset, however, usually leads to high computational complexity and memory requirements, which makes the implementation impractical. This paper proposes a universal framework for enriching the dataset of search-based super-resolution schemes at reasonable computation and memory cost. Toward this end, the proposed method first extracts salient features from patches at multiple scales and orientations using SIFT (scale-invariant feature transform) descriptors, and then uses the extracted features to search the dataset for the best-matching HR patch(es). Once matching patch features are found, the retrieved HR patch is aligned with the low-resolution (LR) patch using homography estimation. Experimental results demonstrate that the proposed method achieves significant subjective and objective improvements when integrated with several state-of-the-art image super-resolution methods, without significantly increasing the cost.
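The alignment step described in the abstract (SIFT matching between a retrieved HR patch and the query LR patch, followed by homography estimation) can be illustrated with a minimal sketch using OpenCV. The function name align_hr_patch, the 0.75 ratio-test threshold, and the assumption that the LR patch has already been upsampled to the target grid are illustrative choices for this sketch, not details taken from the paper.

import cv2
import numpy as np

def align_hr_patch(lr_patch, hr_patch):
    """Warp a retrieved HR candidate patch onto the coordinate frame of the
    query LR patch (assumed already upsampled to the target grid) by matching
    SIFT features and estimating a homography with RANSAC.
    Patches are expected as 8-bit arrays; returns None if alignment fails.
    Illustrative sketch only, not the paper's implementation."""
    sift = cv2.SIFT_create()
    kp_lr, des_lr = sift.detectAndCompute(lr_patch, None)
    kp_hr, des_hr = sift.detectAndCompute(hr_patch, None)
    if des_lr is None or des_hr is None:
        return None

    # Lowe's ratio test on 2-nearest-neighbour descriptor matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_hr, des_lr, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:  # a homography needs at least four correspondences
        return None

    src = np.float32([kp_hr[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_lr[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Map the HR candidate into geometric alignment with the query patch.
    h, w = lr_patch.shape[:2]
    return cv2.warpPerspective(hr_patch, H, (w, h))

In a search-based super-resolution pipeline of the kind the abstract describes, a routine like this would be applied to each best-matching HR patch returned by the dataset search before the aligned patch is blended into the reconstructed image.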