{"title":"基于顶帽变换和小波变换的图像融合性能评价","authors":"Chen Songchao, Dr.B. Sujatha, G. Karuna","doi":"10.21742/IJSBT.2014.2.2.01","DOIUrl":null,"url":null,"abstract":"Image fusion is a process in which a high-resolution Panchromatic Image (PAN) is combined with a low-resolution Multispectral Image (MS) to form a new single image that contains both the spatial information of the PAN image and the spectral information of the MS image. In the present work, an algorithm for image fusion based on the Wavelet Transform (WT) is implemented, analyzed, and compared with the top-hat transform algorithm. The decimated and undecimated wavelets used in image fusion can be categorized into three classes: Orthogonal, Biorthogonal, and Nonorthogonal. Fusion results are evaluated and compared using various measures of performance and the results show that the undecimated biorthogonal wavelet-based fusion method performs the fusion of PAN image and MS image better than top-hat transform fusion method, decimated orthogonally, decimated biorthogonal, and undecimated orthogonal wavelet-based fusion methods, especially in preserving both spectral and spatial information. The experiment is conducted on the IRS-1D images using LISS III scanner for the locations Vishakhapatnam and Hyderabad, India, and on Quick Bird image data and Losangels image data. The results show that the proposed WT fusion method works well in multi-resolution fusion and also preserves the original color or spectral characteristics of the input image data. In addition, the fused image has a better eye perception than the input ones.","PeriodicalId":448069,"journal":{"name":"International Journal of Smart Business and Technology","volume":"86 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Performance Measures for Image Fusion based on Top-hat Transform and Wavelet Transform\",\"authors\":\"Chen Songchao, Dr.B. Sujatha, G. Karuna\",\"doi\":\"10.21742/IJSBT.2014.2.2.01\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Image fusion is a process in which a high-resolution Panchromatic Image (PAN) is combined with a low-resolution Multispectral Image (MS) to form a new single image that contains both the spatial information of the PAN image and the spectral information of the MS image. In the present work, an algorithm for image fusion based on the Wavelet Transform (WT) is implemented, analyzed, and compared with the top-hat transform algorithm. The decimated and undecimated wavelets used in image fusion can be categorized into three classes: Orthogonal, Biorthogonal, and Nonorthogonal. Fusion results are evaluated and compared using various measures of performance and the results show that the undecimated biorthogonal wavelet-based fusion method performs the fusion of PAN image and MS image better than top-hat transform fusion method, decimated orthogonally, decimated biorthogonal, and undecimated orthogonal wavelet-based fusion methods, especially in preserving both spectral and spatial information. The experiment is conducted on the IRS-1D images using LISS III scanner for the locations Vishakhapatnam and Hyderabad, India, and on Quick Bird image data and Losangels image data. The results show that the proposed WT fusion method works well in multi-resolution fusion and also preserves the original color or spectral characteristics of the input image data. 
In addition, the fused image has a better eye perception than the input ones.\",\"PeriodicalId\":448069,\"journal\":{\"name\":\"International Journal of Smart Business and Technology\",\"volume\":\"86 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-05-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Smart Business and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.21742/IJSBT.2014.2.2.01\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Smart Business and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21742/IJSBT.2014.2.2.01","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance Measures for Image Fusion based on Top-hat Transform and Wavelet Transform
Image fusion is a process in which a high-resolution panchromatic (PAN) image is combined with a low-resolution multispectral (MS) image to form a new single image that contains both the spatial information of the PAN image and the spectral information of the MS image. In the present work, an image fusion algorithm based on the Wavelet Transform (WT) is implemented, analyzed, and compared with a top-hat transform algorithm. The decimated and undecimated wavelets used in image fusion can be categorized into three classes: orthogonal, biorthogonal, and nonorthogonal. Fusion results are evaluated and compared using various performance measures, and the results show that the undecimated biorthogonal wavelet-based fusion method fuses the PAN and MS images better than the top-hat transform fusion method and the decimated orthogonal, decimated biorthogonal, and undecimated orthogonal wavelet-based fusion methods, especially in preserving both spectral and spatial information. Experiments are conducted on IRS-1D LISS-III images of the Visakhapatnam and Hyderabad regions of India, as well as on QuickBird image data and Los Angeles image data. The results show that the proposed WT fusion method works well in multi-resolution fusion and preserves the original color and spectral characteristics of the input image data. In addition, the fused image offers better visual perception than the input images.
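The wavelet-substitution idea summarized above, and the kind of quality measure used to score it, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example assuming Python with NumPy and PyWavelets: it keeps an MS band's approximation coefficients, injects the PAN image's detail sub-bands, and scores the result with a Pearson correlation coefficient, one common spectral performance measure. The function names, the "bior2.2" wavelet, and the injection rule are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of wavelet-based PAN/MS fusion plus one quality measure.
# Assumes NumPy and PyWavelets; inputs are assumed co-registered and resampled
# to the same grid. Illustrative only, not the paper's exact method.
import numpy as np
import pywt


def wavelet_fuse_band(pan, ms_band, wavelet="bior2.2", level=2):
    """Fuse one MS band with the PAN image using a decimated 2-D DWT.

    Approximation (low-frequency, spectral) coefficients are taken from the
    MS band; detail (high-frequency, spatial) coefficients from the PAN image.
    """
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=level)
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=level)
    # Keep the MS approximation, substitute the PAN detail sub-bands.
    fused_coeffs = [ms_coeffs[0]] + list(pan_coeffs[1:])
    fused = pywt.waverec2(fused_coeffs, wavelet)
    # waverec2 can pad by a row/column for odd sizes; crop to the input shape.
    return fused[: pan.shape[0], : pan.shape[1]]


def correlation_coefficient(reference, fused):
    """Pearson correlation between a reference band and the fused band
    (values closer to 1 indicate better spectral preservation)."""
    r = reference.astype(np.float64).ravel()
    f = fused.astype(np.float64).ravel()
    return np.corrcoef(r, f)[0, 1]


if __name__ == "__main__":
    # Synthetic stand-in data; real use would load co-registered PAN/MS imagery.
    rng = np.random.default_rng(0)
    pan = rng.random((256, 256))
    ms_band = rng.random((256, 256))

    fused_band = wavelet_fuse_band(pan, ms_band)
    print("Correlation with MS band:", correlation_coefficient(ms_band, fused_band))
```

An undecimated variant of the same idea could swap `pywt.wavedec2`/`pywt.waverec2` for the stationary wavelet transform (`pywt.swt2`/`pywt.iswt2`), which avoids downsampling and is shift-invariant, at the cost of more memory.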