{"title":"VDMUFusion:一种基于扩散模型的多用途无监督图像融合框架","authors":"Yu Shi;Yu Liu;Juan Cheng;Z. Jane Wang;Xun Chen","doi":"10.1109/TIP.2024.3512365","DOIUrl":null,"url":null,"abstract":"Image fusion facilitates the integration of information from various source images of the same scene into a composite image, thereby benefiting perception, analysis, and understanding. Recently, diffusion models have demonstrated impressive generative capabilities in the field of computer vision, suggesting significant potential for application in image fusion. The forward process in the diffusion models requires the gradual addition of noise to the original data. However, typical unsupervised image fusion tasks (e.g., infrared-visible, medical, and multi-exposure image fusion) lack ground truth images (corresponding to the original data in diffusion models), thereby preventing the direct application of the diffusion models. To address this problem, we propose a versatile diffusion model-based unsupervised framework for image fusion, termed as VDMUFusion. In the proposed method, we integrate the fusion problem into the diffusion sampling process by formulating image fusion as a weighted average process and establishing appropriate assumptions about the noise in the diffusion model. To simplify the training process, we propose a multi-task learning framework that replaces the original noise prediction network, allowing for simultaneous prediction of noise and fusion weights. Meanwhile, our method employs joint training across various fusion tasks, which significantly improves noise prediction accuracy and yields higher quality fused images compared to training on a single task. Extensive experimental results demonstrate that the proposed method delivers very competitive performance across various image fusion tasks. The code is available at <uri>https://github.com/yuliu316316/VDMUFusion</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"441-454"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"VDMUFusion: A Versatile Diffusion Model-Based Unsupervised Framework for Image Fusion\",\"authors\":\"Yu Shi;Yu Liu;Juan Cheng;Z. Jane Wang;Xun Chen\",\"doi\":\"10.1109/TIP.2024.3512365\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Image fusion facilitates the integration of information from various source images of the same scene into a composite image, thereby benefiting perception, analysis, and understanding. Recently, diffusion models have demonstrated impressive generative capabilities in the field of computer vision, suggesting significant potential for application in image fusion. The forward process in the diffusion models requires the gradual addition of noise to the original data. However, typical unsupervised image fusion tasks (e.g., infrared-visible, medical, and multi-exposure image fusion) lack ground truth images (corresponding to the original data in diffusion models), thereby preventing the direct application of the diffusion models. To address this problem, we propose a versatile diffusion model-based unsupervised framework for image fusion, termed as VDMUFusion. 
In the proposed method, we integrate the fusion problem into the diffusion sampling process by formulating image fusion as a weighted average process and establishing appropriate assumptions about the noise in the diffusion model. To simplify the training process, we propose a multi-task learning framework that replaces the original noise prediction network, allowing for simultaneous prediction of noise and fusion weights. Meanwhile, our method employs joint training across various fusion tasks, which significantly improves noise prediction accuracy and yields higher quality fused images compared to training on a single task. Extensive experimental results demonstrate that the proposed method delivers very competitive performance across various image fusion tasks. The code is available at <uri>https://github.com/yuliu316316/VDMUFusion</uri>.\",\"PeriodicalId\":94032,\"journal\":{\"name\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"volume\":\"34 \",\"pages\":\"441-454\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-12-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10794610/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10794610/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
VDMUFusion: A Versatile Diffusion Model-Based Unsupervised Framework for Image Fusion
Image fusion facilitates the integration of information from multiple source images of the same scene into a single composite image, thereby benefiting perception, analysis, and understanding. Recently, diffusion models have demonstrated impressive generative capabilities in the field of computer vision, suggesting significant potential for application in image fusion. The forward process of diffusion models requires the gradual addition of noise to the original data. However, typical unsupervised image fusion tasks (e.g., infrared-visible, medical, and multi-exposure image fusion) lack ground-truth images (corresponding to the original data in diffusion models), thereby preventing the direct application of diffusion models. To address this problem, we propose a versatile diffusion model-based unsupervised framework for image fusion, termed VDMUFusion. In the proposed method, we integrate the fusion problem into the diffusion sampling process by formulating image fusion as a weighted average process and establishing appropriate assumptions about the noise in the diffusion model. To simplify training, we propose a multi-task learning framework that replaces the original noise prediction network, allowing simultaneous prediction of noise and fusion weights. Meanwhile, our method employs joint training across various fusion tasks, which significantly improves noise prediction accuracy and yields higher-quality fused images compared with training on a single task. Extensive experimental results demonstrate that the proposed method delivers highly competitive performance across various image fusion tasks. The code is available at https://github.com/yuliu316316/VDMUFusion.
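The abstract outlines three key ingredients: a standard diffusion forward process that gradually adds noise, a formulation of fusion as a weighted average of the source images, and a multi-task network that jointly predicts the diffusion noise and the fusion weights. As a rough, self-contained illustration of these ideas (not the authors' released implementation; see the GitHub link above), the PyTorch sketch below shows a DDPM-style forward noising step and a toy two-headed network that outputs a noise estimate alongside a per-pixel weight map used for weighted-average fusion. All names, shapes, and architectural details are illustrative assumptions; in particular, timestep conditioning and the integration of fusion into the reverse sampling steps are omitted for brevity.

```python
# Illustrative sketch only -- not the authors' released code.
import torch
import torch.nn as nn

def forward_diffuse(x0, t, alpha_bar):
    """Standard DDPM forward step: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    eps = torch.randn_like(x0)
    abar_t = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = abar_t.sqrt() * x0 + (1.0 - abar_t).sqrt() * eps
    return x_t, eps

class NoiseAndWeightNet(nn.Module):
    """Toy multi-task network: shared trunk, two heads (noise, fusion weights)."""
    def __init__(self, ch=64):
        super().__init__()
        # Noisy sample plus the two single-channel source images are concatenated.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.noise_head = nn.Conv2d(ch, 1, 3, padding=1)   # predicts the added noise
        self.weight_head = nn.Conv2d(ch, 1, 3, padding=1)  # predicts fusion weights

    def forward(self, x_t, x_a, x_b):
        feat = self.trunk(torch.cat([x_t, x_a, x_b], dim=1))
        eps_hat = self.noise_head(feat)
        w = torch.sigmoid(self.weight_head(feat))           # per-pixel weights in (0, 1)
        fused = w * x_a + (1.0 - w) * x_b                   # weighted-average fusion
        return eps_hat, w, fused

# Toy usage with random single-channel source pairs.
B, H, W, T = 2, 64, 64, 1000
x_a, x_b = torch.rand(B, 1, H, W), torch.rand(B, 1, H, W)
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
t = torch.randint(0, T, (B,))
# A crude average stands in for a "clean" image here only to make the snippet run;
# the actual method needs no ground-truth fused image.
x_t, eps = forward_diffuse(0.5 * (x_a + x_b), t, alpha_bar)
eps_hat, w, fused = NoiseAndWeightNet()(x_t, x_a, x_b)
```

In the framework described in the abstract, the weighted-average fusion is folded into the diffusion sampling process itself and the two prediction targets are trained jointly across multiple fusion tasks; the released code should be consulted for the exact formulation.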