{"title":"TR-TransGAN: Temporal Recurrent Transformer Generative Adversarial Network for Longitudinal MRI Dataset Expansion","authors":"Chen-Chen Fan;Hongjun Yang;Liang Peng;Xiao-Hu Zhou;Shiqi Liu;Sheng Chen;Zeng-Guang Hou","doi":"10.1109/TCDS.2023.3345922","DOIUrl":null,"url":null,"abstract":"Longitudinal magnetic resonance imaging (MRI) datasets have important implications for the study of degenerative diseases because such datasets have data from multiple points in time to track disease progression. However, longitudinal datasets are often incomplete due to unexpected quits of patients. In previous work, we proposed an augmentation method temporal recurrent generative adversarial network (TR-GAN) that can complement missing session data of MRI datasets. TR-GAN uses a simple U-Net as a generator, which limits its performance. Transformers have had great success in the research of computer vision and this article attempts to introduce it into longitudinal dataset completion tasks. The multihead attention mechanism in transformer has huge memory requirements, and it is difficult to train 3-D MRI data on graphics processing units (GPUs) with small memory. To build a memory-friendly transformer-based generator, we introduce a Hilbert transform module (HTM) to convert 3-D data to 2-D data that preserves locality fairly well. To make up for the insufficiency of convolutional neural network (CNN)-based models that are difficult to establish long-range dependencies, we propose an Swin transformer-based up/down sampling module (STU/STD) module that combines the Swin transformer module and CNN module to capture global and local information simultaneously. 
Extensive experiments show that our model can reduce mean squared error (MMSE) by at least 7.16% compared to the previous state-of-the-art method.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cognitive and Developmental Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10384475/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Longitudinal magnetic resonance imaging (MRI) datasets have important implications for the study of degenerative diseases because they contain data from multiple time points, allowing disease progression to be tracked. However, longitudinal datasets are often incomplete due to unexpected patient dropout. In previous work, we proposed an augmentation method, the temporal recurrent generative adversarial network (TR-GAN), that can complete missing session data in MRI datasets. TR-GAN uses a simple U-Net as a generator, which limits its performance. Transformers have achieved great success in computer vision, and this article introduces them into longitudinal dataset completion tasks. The multihead attention mechanism in transformers has large memory requirements, making it difficult to train on 3-D MRI data with graphics processing units (GPUs) that have limited memory. To build a memory-friendly transformer-based generator, we introduce a Hilbert transform module (HTM) that converts 3-D data to 2-D data while preserving locality fairly well. To compensate for the difficulty convolutional neural network (CNN)-based models have in establishing long-range dependencies, we propose a Swin-transformer-based up/down-sampling module (STU/STD) that combines a Swin transformer module and a CNN module to capture global and local information simultaneously. Extensive experiments show that our model can reduce mean squared error (MSE) by at least 7.16% compared to the previous state-of-the-art method.
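The abstract does not spell out how the HTM flattens 3-D volumes, but its "preserves locality fairly well" phrasing matches the behavior of a Hilbert space-filling curve, which keeps nearby grid cells nearby along the flattened index. As a lower-dimensional illustration (the `xy2d` name and the 2-D-to-1-D setting are my own, not from the paper), the standard Hilbert indexing of an n×n grid can be sketched as:

```python
def xy2d(n, x, y):
    """Map grid cell (x, y) on an n x n grid (n a power of two) to its
    index along the Hilbert space-filling curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the sub-curves join up correctly
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Locality check: walking an 8x8 grid in Hilbert order, every pair of
# consecutive indices lands on grid cells that are direct neighbours.
n = 8
order = sorted(((x, y) for x in range(n) for y in range(n)),
               key=lambda p: xy2d(n, *p))
assert all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
           for a, b in zip(order, order[1:]))
```

The same construction generalizes to three dimensions, which is presumably what lets a 3-D MRI volume be reshaped into a 2-D layout without scattering neighbouring voxels.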
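The memory argument behind the STU/STD design rests on the Swin idea of restricting self-attention to local windows: score matrices shrink from O(N²) for N tokens to O(N·w) for window size w. A minimal single-head sketch in NumPy (the function name and shapes are assumptions for illustration, not the paper's implementation, which also shifts windows and mixes in CNN features):

```python
import numpy as np

def window_self_attention(x, window):
    """Self-attention restricted to non-overlapping windows of `window` tokens.
    Each score matrix is (window, window) rather than (N, N)."""
    N, C = x.shape
    assert N % window == 0, "sequence length must be divisible by window size"
    out = np.empty_like(x)
    for start in range(0, N, window):
        w = x[start:start + window]                   # (window, C) tokens
        scores = w @ w.T / np.sqrt(C)                 # (window, window) scores
        scores -= scores.max(axis=-1, keepdims=True)  # stabilise softmax
        a = np.exp(scores)
        a /= a.sum(axis=-1, keepdims=True)            # rows sum to 1
        out[start:start + window] = a @ w             # convex mix of window tokens
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 4))
y = window_self_attention(x, window=4)
assert y.shape == x.shape
```

Because each output token only mixes tokens from its own window, global context in Swin-style models is recovered across layers (via shifted windows); here the CNN branch of STU/STD would supply the complementary local inductive bias.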
Journal description:
The IEEE Transactions on Cognitive and Developmental Systems (TCDS) focuses on advances in the study of development and cognition in natural (humans, animals) and artificial (robots, agents) systems. It welcomes contributions from multiple related disciplines including cognitive systems, cognitive robotics, developmental and epigenetic robotics, autonomous and evolutionary robotics, social structures, multi-agent and artificial life systems, computational neuroscience, and developmental psychology. Articles on theoretical, computational, application-oriented, and experimental studies as well as reviews in these areas are considered.