{"title":"基于拉普拉斯的图像融合","authors":"J. Scott, M. Pusateri","doi":"10.1109/AIPR.2010.5759697","DOIUrl":null,"url":null,"abstract":"A fundamental goal in multispectral image fusion is to combine relevant information from multiple spectral ranges while displaying a constant amount of data as a single channel. Because we expect synergy between the views afforded by different parts of the spectrum, producing output imagery with increased information beyond any of the individual imagery sounds simple. While fusion algorithms achieve synergy under specific scenarios, it is often the case that they produce imagery with less information than any single band of imagery. Losses can arise from any number of problems including poor imagery in one band degrading the fusion result, loss of details from intrinsic smoothing, artifacts or discontinuities from discrete mixing, and distracting colors from unnatural color mapping. We have been developing and testing fusion algorithms with the goal of achieving synergy under a wider range of scenarios. This technique has been very successful in the worlds of image blending, mosaics, and image compositing for visible band imagery. The algorithm presented in this paper is based on direct pixel-wise fusion that merges the directional discrete laplacian content of individual imagery bands rather than the intensities directly. The laplacian captures the local difference in the four-connected neighborhood. The laplacian of each image is then mixed based on the premise that image edges contain the most pertinent information from each input image. This information is then reformed into an image by solving the two-dimensional Poisson equation. The preliminary results are promising and consistent. When fusing multiple continuous visible channels, the resulting image is similar to grayscale imaging over all of the visible channels. When fusing discontinuous and/or non-visible channels, the resulting image is subtly mixed and intuitive to understand.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Laplacian based image fusion\",\"authors\":\"J. Scott, M. Pusateri\",\"doi\":\"10.1109/AIPR.2010.5759697\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A fundamental goal in multispectral image fusion is to combine relevant information from multiple spectral ranges while displaying a constant amount of data as a single channel. Because we expect synergy between the views afforded by different parts of the spectrum, producing output imagery with increased information beyond any of the individual imagery sounds simple. While fusion algorithms achieve synergy under specific scenarios, it is often the case that they produce imagery with less information than any single band of imagery. Losses can arise from any number of problems including poor imagery in one band degrading the fusion result, loss of details from intrinsic smoothing, artifacts or discontinuities from discrete mixing, and distracting colors from unnatural color mapping. We have been developing and testing fusion algorithms with the goal of achieving synergy under a wider range of scenarios. This technique has been very successful in the worlds of image blending, mosaics, and image compositing for visible band imagery. 
The algorithm presented in this paper is based on direct pixel-wise fusion that merges the directional discrete laplacian content of individual imagery bands rather than the intensities directly. The laplacian captures the local difference in the four-connected neighborhood. The laplacian of each image is then mixed based on the premise that image edges contain the most pertinent information from each input image. This information is then reformed into an image by solving the two-dimensional Poisson equation. The preliminary results are promising and consistent. When fusing multiple continuous visible channels, the resulting image is similar to grayscale imaging over all of the visible channels. When fusing discontinuous and/or non-visible channels, the resulting image is subtly mixed and intuitive to understand.\",\"PeriodicalId\":128378,\"journal\":{\"name\":\"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIPR.2010.5759697\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2010.5759697","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: A fundamental goal in multispectral image fusion is to combine relevant information from multiple spectral ranges while displaying a constant amount of data as a single channel. Because we expect synergy between the views afforded by different parts of the spectrum, producing output imagery with more information than any of the individual input images sounds simple. While fusion algorithms achieve synergy under specific scenarios, they often produce imagery with less information than any single band. Losses can arise from any number of problems, including poor imagery in one band degrading the fusion result, loss of detail from intrinsic smoothing, artifacts or discontinuities from discrete mixing, and distracting colors from unnatural color mapping. We have been developing and testing fusion algorithms with the goal of achieving synergy under a wider range of scenarios. The approach taken here works in the Laplacian domain, a technique that has been very successful in image blending, mosaicking, and image compositing for visible-band imagery. The algorithm presented in this paper performs direct pixel-wise fusion that merges the directional discrete Laplacian content of the individual imagery bands rather than their intensities. The Laplacian captures the local differences over the four-connected neighborhood. The Laplacians of the input images are then mixed based on the premise that image edges contain the most pertinent information from each input image. The mixed Laplacian is then reconstructed into an image by solving the two-dimensional Poisson equation. The preliminary results are promising and consistent. When fusing multiple continuous visible channels, the resulting image is similar to grayscale imaging over all of the visible channels. When fusing discontinuous and/or non-visible channels, the resulting image is subtly mixed and intuitive to understand.
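The pipeline described in the abstract can be illustrated with a short sketch in Python (NumPy/SciPy): compute the 4-connected discrete Laplacian of each band, mix the Laplacians pixel by pixel, and recover intensities by solving the two-dimensional Poisson equation. The specific mixing rule (keep the Laplacian value with the largest magnitude at each pixel), the Dirichlet boundary taken from the band average, and the function names `fuse_laplacian` and `solve_poisson` are illustrative assumptions; the paper states only that edges carry the most pertinent information and does not specify the mixing weights, boundary handling, or solver.

```python
import numpy as np
from scipy.ndimage import laplace
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve


def fuse_laplacian(images):
    """Fuse equally sized grayscale float images by mixing their 4-connected
    discrete Laplacians and solving a Poisson equation for the intensities.

    The 'keep the strongest edge' mixing rule and the boundary condition are
    illustrative assumptions, not necessarily the paper's exact choices.
    """
    bands = np.stack([np.asarray(im, dtype=float) for im in images])
    # 5-point (4-connected) discrete Laplacian of every band.
    laps = np.stack([laplace(band) for band in bands])
    # Mix: at every pixel keep the Laplacian value with the largest magnitude.
    winner = np.abs(laps).argmax(axis=0)
    mixed = np.take_along_axis(laps, winner[None], axis=0)[0]
    # Dirichlet boundary values: clamp the image border to the band average.
    border = bands.mean(axis=0)
    return solve_poisson(mixed, border)


def solve_poisson(rhs, border):
    """Solve lap(F) = rhs on the interior pixels with F = border on the edge."""
    h, w = rhs.shape
    hi, wi = h - 2, w - 2                              # interior grid size

    def lap1d(n):                                      # 1-D [1, -2, 1] stencil
        return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))

    # 2-D 5-point Laplacian acting on the row-major flattened interior.
    A = kron(identity(hi), lap1d(wi)) + kron(lap1d(hi), identity(wi))

    b = rhs[1:-1, 1:-1].copy()
    # Move the known border values to the right-hand side.
    b[0, :] -= border[0, 1:-1]
    b[-1, :] -= border[-1, 1:-1]
    b[:, 0] -= border[1:-1, 0]
    b[:, -1] -= border[1:-1, -1]

    fused = border.copy()
    fused[1:-1, 1:-1] = spsolve(A.tocsc(), b.ravel()).reshape(hi, wi)
    return fused
```

As a hypothetical usage, `fused = fuse_laplacian([visible, lwir])` would fuse a visible and a long-wave-infrared frame once both are resampled to the same size and normalized to a common float range. A sparse direct solve of the 5-point system is used here for clarity; FFT- or multigrid-based Poisson solvers would be the usual choice for large frames or real-time operation.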