Generative AI Enables Synthesizing Cross-Modality Brain Image via Multi-Level-Latent Representation Learning

Senrong You; Bin Yuan; Zhihan Lyu; Charles K. Chui; C. L. Philip Chen; Baiying Lei; Shuqiang Wang

IEEE Transactions on Computational Imaging, vol. 10, pp. 1152-1164. Published 2024-07-29. DOI: 10.1109/TCI.2024.3434724
Citations: 0
Abstract
Multiple brain imaging modalities can provide complementary pathologic information for clinical diagnosis. However, acquiring enough modalities in clinical practice is a huge challenge. In this work, a cross-modality reconstruction model, called the fine-grain aware generative adversarial network (FA-GAN), is proposed to reconstruct target-modality brain images from 2D source-modality images in a dual-stage manner. FA-GAN mines multi-level shared latent representations from the source-modality images and then reconstructs the target-modality image progressively, from coarse to fine. Specifically, in the coarse stage, the Multi-Grain Extractor first extracts and disentangles the shared latent features from the source-modality images and synthesizes coarse target-modality images with a pyramidal network. The Feature-Joint Encoder then encodes the latent features and frequency features jointly. In the fine stage, the Fine-Texture Generator is fed the joint codes to fine-tune the reconstruction of the fine-grained target modality. A wavelet transformation module extracts the frequency codes and guides the Fine-Texture Generator to synthesize finer textures. Comprehensive experiments on MR-to-PET synthesis on the ADNI dataset demonstrate that the proposed model achieves finer structure recovery and outperforms competing methods both quantitatively and qualitatively.
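The paper's implementation is not reproduced here, but the role its wavelet transformation module plays — splitting an image into a low-frequency sub-band (coarse structure) and high-frequency sub-bands (fine texture) that can serve as frequency codes — can be illustrated with a single-level 2D Haar transform. The function name and the plain-NumPy formulation below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform (illustrative sketch, not
    the FA-GAN module itself). Splits an even-sized image into four
    half-resolution sub-bands: LL (coarse structure) and LH/HL/HH
    (high-frequency detail along different orientations)."""
    s = np.sqrt(2.0)
    # 1D Haar along axis 0: pairwise sums give the low band, differences the high band
    lo = (img[0::2, :] + img[1::2, :]) / s
    hi = (img[0::2, :] - img[1::2, :]) / s
    # 1D Haar along axis 1 of each half, yielding the four 2D sub-bands
    LL = (lo[:, 0::2] + lo[:, 1::2]) / s
    LH = (lo[:, 0::2] - lo[:, 1::2]) / s
    HL = (hi[:, 0::2] + hi[:, 1::2]) / s
    HH = (hi[:, 0::2] - hi[:, 1::2]) / s
    return LL, LH, HL, HH

# Example: decompose a toy 64x64 "slice"; in the paper's design the
# high-frequency sub-bands are the kind of information the frequency
# codes carry to guide fine-texture synthesis.
img = np.random.default_rng(0).random((64, 64))
LL, LH, HL, HH = haar_dwt2(img)
```

Because the Haar basis is orthonormal, the four sub-bands jointly preserve the image's energy, so decomposing into frequency codes discards no information from the source image.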
Journal Overview:
The IEEE Transactions on Computational Imaging will publish articles where computation plays an integral role in the image formation process. Papers will cover all areas of computational imaging ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest will include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.