An Efficiency Correlation between Various Image Fusion Techniques
S. BharaniNayagi, T. S. S. Angel
Int. J. Comput. Intell. Appl., published 2023-03-21
DOI: 10.1142/s1469026823410109
Citations: 1
Abstract
Multi-focus images can be fused using a deep learning (DL) approach. Initially, multi-focus image fusion (MFIF) is framed as a classification task: a convolutional neural network (CNN) classifier determines whether each pixel is focused or defocused. A lack of available training data is one of the demerits of this MFIF methodology. Instead, an unsupervised DL model is affordable and appropriate for image fusion. By establishing a framework of feature extraction, fusion, and reconstruction, we build a deep CNN as a [Formula: see text] end-to-end unsupervised model, defined as a Siamese multi-scale feature extraction model. Its major disadvantage is that it can extract features from only three source images of the same scene; because images may be of low intensity or blurred, considering only three source images can lead to poor performance. The main objective of this work is to consider [Formula: see text] parameters to define [Formula: see text] source images. The proposed feature extraction method is compared with many existing systems. Experimental results of the various approaches show that the enhanced Siamese multi-scale feature extraction, used along with the Structural Similarity Measure (SSIM), produces an excellent fused image, as determined by quantitative and qualitative studies based on objective examination and visual traits. As the parameters increase, the objective assessment improves in performance rate, with a corresponding increase in time complexity.
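The abstract uses SSIM to score fused images against their sources. As a rough illustration only (not the paper's pipeline), the sketch below computes a simplified global SSIM with NumPy: single mean/variance/covariance statistics over the whole image rather than the usual sliding Gaussian window, with the standard stabilizing constants C1 = (0.01·L)² and C2 = (0.03·L)² for dynamic range L. The function name `ssim_global` and the toy image are illustrative assumptions.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    # Simplified global SSIM: one set of statistics over the whole image
    # instead of a sliding Gaussian window (a coarse approximation).
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

# Toy check: an image compared with itself scores a perfect 1.0.
img = np.tile(np.arange(8, dtype=np.float64), (8, 1)) * 32
print(round(ssim_global(img, img), 4))  # -> 1.0
```

In practice, windowed SSIM (e.g. as implemented in image-processing libraries) is preferred, since fusion artifacts are local and a global statistic can hide them.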