CGNet: A Correlation-Guided Registration Network for Unsupervised Deformable Image Registration
Yuan Chang; Zheng Li; Wenzheng Xu
IEEE Transactions on Medical Imaging, vol. 44, no. 3, pp. 1468-1479
Published online: 2024-11-25
DOI: 10.1109/TMI.2024.3505853
URL: https://ieeexplore.ieee.org/document/10767310/
Citations: 0
Abstract
Deformable medical image registration plays a significant role in medical image analysis. With the advancement of deep neural networks, learning-based deformable registration methods have made great strides, owing to their ability to perform fast end-to-end registration and their competitive performance compared to traditional methods. However, these methods primarily improve registration performance by replacing specific layers of an encoder-decoder architecture designed for segmentation with advanced network structures such as Transformers, overlooking the crucial difference between the two tasks: feature matching. In this paper, we propose a novel correlation-guided registration network (CGNet), designed specifically for deformable medical image registration, which achieves accurate registration through three main components: a dual-stream encoder, a correlation learning module, and a coarse-to-fine decoder. The dual-stream encoder independently extracts hierarchical features from the moving and fixed images. The correlation learning module computes correlation maps, enabling explicit feature matching between the input image pair. The coarse-to-fine decoder outputs a deformation sub-field at each decoding layer, progressively refining the estimate of the final deformation field. Extensive experiments on four 3D brain MRI datasets show that the proposed method achieves state-of-the-art performance on three evaluation metrics compared to twelve learning-based registration methods, demonstrating the potential of our model for deformable medical image registration.
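The correlation maps mentioned above can be illustrated with a minimal sketch. The abstract does not specify the exact formulation, so this example assumes a common cost-volume-style design: for each voxel of the fixed feature map, the dot-product similarity with moving-image features inside a small local search window. The function name `local_correlation`, the search `radius`, and the `1/sqrt(C)` scaling are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_correlation(feat_m, feat_f, radius=1):
    """Illustrative local correlation between feature volumes.

    feat_m, feat_f: (C, D, H, W) moving/fixed feature volumes.
    Returns a (K, D, H, W) correlation map with K = (2*radius+1)**3
    channels; channel k holds the dot product between the fixed
    feature at each voxel and the moving feature displaced by the
    k-th offset inside the search window.
    """
    C, D, H, W = feat_f.shape
    # Zero-pad the moving features so every window offset stays in bounds.
    pad = np.pad(feat_m, ((0, 0), (radius, radius),
                          (radius, radius), (radius, radius)))
    offsets = [(dz, dy, dx)
               for dz in range(2 * radius + 1)
               for dy in range(2 * radius + 1)
               for dx in range(2 * radius + 1)]
    corr = np.empty((len(offsets), D, H, W), dtype=feat_f.dtype)
    for k, (dz, dy, dx) in enumerate(offsets):
        # Shift the moving features by the current offset and take the
        # channel-wise dot product, scaled by 1/sqrt(C) for stability.
        shifted = pad[:, dz:dz + D, dy:dy + H, dx:dx + W]
        corr[k] = (feat_f * shifted).sum(axis=0) / np.sqrt(C)
    return corr
```

With `radius=1` this yields 27 correlation channels per voxel; a decoder can then consume this volume to predict a deformation sub-field, since the channels explicitly encode how well each candidate local displacement matches the two images.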