STPGANsFusion: Structure and Texture Preserving Generative Adversarial Networks for Multi-modal Medical Image Fusion
Dhruvi Shah, Hareshwar Wani, M. Das, Deep Gupta, P. Radeva, Ashwini M. Bakde
2022 National Conference on Communications (NCC), 24 May 2022
DOI: 10.1109/NCC55593.2022.9806733
Abstract
Medical images from different modalities carry diverse, complementary information. Fusing the features of these source images into a single image yields higher information content, which benefits subsequent medical applications. Recently, deep learning (DL) based networks have demonstrated promising fusion results by integrating the feature extraction and preservation tasks with little manual intervention. However, using a single network to extract features from multi-modal source images that characterize distinct information leads to the loss of crucial diagnostic information. To address this problem, we present STPGANsFusion, a structure and texture preserving generative adversarial network based medical image fusion method. First, the textural and structural components of the source images are separated using structure gradient and texture decorrelating regularizer (SGTDR) based image decomposition, which preserves more complementary information and improves the robustness of the model. Next, the structure and texture components are fused by two generative adversarial networks (GANs), each consisting of one generator and two discriminators, to obtain the fused structure and texture components. The loss function of each GAN is framed according to the characteristics of the component being fused, so as to minimize the loss of complementary information. The fused image is then reconstructed and undergoes adaptive mask-based structure enhancement to further boost its contrast and visualization. Extensive experiments are carried out on a wide variety of neurological images. Visual and quantitative results show a notable improvement in the fusion performance of the proposed method over state-of-the-art fusion methods.
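To make the described pipeline concrete, the following is a minimal PyTorch sketch of its overall shape: decompose each source image into structure and texture, fuse each component kind with its own generator trained against two discriminators (one per source modality), and recombine. The abstract does not specify the SGTDR decomposition, the network architectures, or the exact loss, so the box-blur stand-in inside `sgtdr_decompose`, the layer sizes, and the least-squares adversarial loss below are all hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of the STPGANsFusion pipeline described in the abstract.
# All architectural details and losses here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sgtdr_decompose(img: torch.Tensor):
    """Placeholder for SGTDR decomposition: split an image into a smooth
    structure component and a residual texture component. A simple box blur
    stands in for the actual regularizer-based decomposition (assumption)."""
    structure = F.avg_pool2d(img, kernel_size=5, stride=1, padding=2)
    texture = img - structure
    return structure, texture


class Generator(nn.Module):
    """Fuses two same-kind components (both structure or both texture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))


class Discriminator(nn.Module):
    """Judges whether a fused component is consistent with one source's
    component; each generator faces two of these, one per modality."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)


def generator_adv_loss(d1, d2, fused):
    """Least-squares GAN form (assumed): the fused component must fool
    both discriminators, pushing it to retain features of both sources."""
    return ((d1(fused) - 1) ** 2).mean() + ((d2(fused) - 1) ** 2).mean()


def fuse(img_a, img_b, g_struct, g_tex):
    """One forward pass: decompose both sources, fuse each component with
    its dedicated generator, and recombine. The adaptive mask-based
    structure enhancement step is omitted from this sketch."""
    s_a, t_a = sgtdr_decompose(img_a)
    s_b, t_b = sgtdr_decompose(img_b)
    return g_struct(s_a, s_b) + g_tex(t_a, t_b)


if __name__ == "__main__":
    a = torch.rand(1, 1, 64, 64)  # toy stand-in for, e.g., an MRI slice
    b = torch.rand(1, 1, 64, 64)  # toy stand-in for, e.g., a CT/PET slice
    fused = fuse(a, b, Generator(), Generator())
    print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

The two-discriminator arrangement is the key point of the sketch: with a single discriminator per GAN, the generator could satisfy the adversarial objective by imitating only one modality, whereas requiring it to fool one discriminator per source pushes the fused component to preserve complementary information from both.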