Deformable registration of magnetic resonance images using unsupervised deep learning in neuro-/radiation oncology.

IF 3.3 · CAS Tier 2 (Medicine) · JCR Q2 (ONCOLOGY)
Alexander F I Osman, Kholoud S Al-Mugren, Nissren M Tamam, Bilal Shahine
{"title":"Deformable registration of magnetic resonance images using unsupervised deep learning in neuro-/radiation oncology.","authors":"Alexander F I Osman, Kholoud S Al-Mugren, Nissren M Tamam, Bilal Shahine","doi":"10.1186/s13014-024-02452-3","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Accurate deformable registration of magnetic resonance imaging (MRI) scans containing pathologies is challenging due to changes in tissue appearance. In this paper, we developed a novel automated three-dimensional (3D) convolutional U-Net based deformable image registration (ConvUNet-DIR) method using unsupervised learning to establish correspondence between baseline pre-operative and follow-up MRI scans of patients with brain glioma.</p><p><strong>Methods: </strong>This study involved multi-parametric brain MRI scans (T1, T1-contrast enhanced, T2, FLAIR) acquired at pre-operative and follow-up time for 160 patients diagnosed with glioma, representing the BraTS-Reg 2022 challenge dataset. ConvUNet-DIR, a deep learning-based deformable registration workflow using 3D U-Net style architecture as a core, was developed to establish correspondence between the MRI scans. The workflow consists of three components: (1) the U-Net learns features from pairs of MRI scans and estimates a mapping between them, (2) the grid generator computes the sampling grid based on the derived transformation parameters, and (3) the spatial transformation layer generates a warped image by applying the sampling operation using interpolation. A similarity measure was used as a loss function for the network with a regularization parameter limiting the deformation. The model was trained via unsupervised learning using pairs of MRI scans on a training data set (n = 102) and validated on a validation data set (n = 26) to assess its generalizability. Its performance was evaluated on a test set (n = 32) by computing the Dice score and structural similarity index (SSIM) quantitative metrics. The model's performance also was compared with the baseline state-of-the-art VoxelMorph (VM1 and VM2) learning-based algorithms.</p><p><strong>Results: </strong>The ConvUNet-DIR model showed promising competency in performing accurate 3D deformable registration. It achieved a mean Dice score of 0.975 ± 0.003 and SSIM of 0.908 ± 0.011 on the test set (n = 32). Experimental results also demonstrated that ConvUNet-DIR outperformed the VoxelMorph algorithms concerning Dice (VM1: 0.969 ± 0.006 and VM2: 0.957 ± 0.008) and SSIM (VM1: 0.893 ± 0.012 and VM2: 0.857 ± 0.017) metrics. The time required to perform a registration for a pair of MRI scans is about 1 s on the CPU.</p><p><strong>Conclusions: </strong>The developed deep learning-based model can perform an end-to-end deformable registration of a pair of 3D MRI scans for glioma patients without human intervention. The model could provide accurate, efficient, and robust deformable registration without needing pre-alignment and labeling. 
It outperformed the state-of-the-art VoxelMorph learning-based deformable registration algorithms and other supervised/unsupervised deep learning-based methods reported in the literature.</p>","PeriodicalId":49639,"journal":{"name":"Radiation Oncology","volume":null,"pages":null},"PeriodicalIF":3.3000,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11110381/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiation Oncology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13014-024-02452-3","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ONCOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose: Accurate deformable registration of magnetic resonance imaging (MRI) scans containing pathologies is challenging due to changes in tissue appearance. In this paper, we developed a novel automated three-dimensional (3D) convolutional U-Net-based deformable image registration (ConvUNet-DIR) method using unsupervised learning to establish correspondence between baseline pre-operative and follow-up MRI scans of patients with brain glioma.

Methods: This study involved multi-parametric brain MRI scans (T1, T1-contrast enhanced, T2, FLAIR) acquired at pre-operative and follow-up time points for 160 patients diagnosed with glioma, representing the BraTS-Reg 2022 challenge dataset. ConvUNet-DIR, a deep learning-based deformable registration workflow with a 3D U-Net-style architecture as its core, was developed to establish correspondence between the MRI scans. The workflow consists of three components: (1) the U-Net learns features from pairs of MRI scans and estimates a mapping between them, (2) the grid generator computes the sampling grid based on the derived transformation parameters, and (3) the spatial transformation layer generates a warped image by applying the sampling operation using interpolation. A similarity measure was used as the loss function for the network, with a regularization term limiting the deformation. The model was trained via unsupervised learning using pairs of MRI scans from a training set (n = 102) and validated on a validation set (n = 26) to assess its generalizability. Its performance was evaluated on a test set (n = 32) by computing the Dice score and structural similarity index (SSIM) quantitative metrics. The model's performance was also compared with the state-of-the-art VoxelMorph (VM1 and VM2) learning-based algorithms as baselines.
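The sketch below illustrates, in PyTorch, the kind of unsupervised objective described above: a network-predicted deformation is applied through a spatial transformation layer and penalized by a similarity term plus a smoothness regularizer. It is not the authors' implementation; the dense displacement-field convention, the choice of mean-squared error as the similarity measure, and the weight lambda_reg are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' code): an unsupervised
# registration loss with a spatial transformation layer, in PyTorch.
# Assumptions: the network outputs a dense displacement field `flow` of
# shape (N, 3, D, H, W) with channels ordered (x, y, z) in voxel units,
# MSE serves as the similarity measure, and `lambda_reg` weights a
# first-order smoothness penalty on the field.
import torch
import torch.nn.functional as F


def warp(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `moving` (N, C, D, H, W) with displacement field `flow` via
    trilinear interpolation (the spatial transformation layer)."""
    n, _, d, h, w = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates (x, y, z order).
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1.0, 1.0, d),
        torch.linspace(-1.0, 1.0, h),
        torch.linspace(-1.0, 1.0, w),
        indexing="ij",
    )
    grid = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0).to(moving)  # (1, D, H, W, 3)
    # Convert voxel displacements to normalized coordinates and offset the grid.
    scale = torch.tensor([2.0 / (w - 1), 2.0 / (h - 1), 2.0 / (d - 1)]).to(moving)
    offset = flow.permute(0, 2, 3, 4, 1) * scale  # (N, D, H, W, 3)
    return F.grid_sample(moving, grid + offset, mode="bilinear", align_corners=True)


def registration_loss(fixed, moving, flow, lambda_reg=0.01):
    """Similarity term plus a smoothness (regularization) term on `flow`."""
    warped = warp(moving, flow)
    similarity = F.mse_loss(warped, fixed)
    # Finite-difference gradients of the displacement field along each axis.
    dz = (flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]).abs().mean()
    dy = (flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]).abs().mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).abs().mean()
    return similarity + lambda_reg * (dx + dy + dz)
```

In such a setup, the U-Net predicts `flow` from the fixed/moving pair and the loss is minimized end-to-end without ground-truth deformations or labels, which is what makes the training unsupervised.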

Results: The ConvUNet-DIR model showed promising capability in performing accurate 3D deformable registration. It achieved a mean Dice score of 0.975 ± 0.003 and an SSIM of 0.908 ± 0.011 on the test set (n = 32). Experimental results also demonstrated that ConvUNet-DIR outperformed the VoxelMorph algorithms in terms of both Dice (VM1: 0.969 ± 0.006; VM2: 0.957 ± 0.008) and SSIM (VM1: 0.893 ± 0.012; VM2: 0.857 ± 0.017). The time required to register a pair of MRI scans was about 1 s on a CPU.
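For reference, the two reported metrics can be computed as in the hedged sketch below. The mask and volume arrays are placeholders, and scikit-image's structural_similarity is used for SSIM; the actual structures and SSIM implementation used in the study are not specified here.

```python
# Hedged example: computing Dice and SSIM on a registered image pair.
# `fixed`, `warped`, and the binary masks are placeholder arrays.
import numpy as np
from skimage.metrics import structural_similarity


def dice_score(mask_a: np.ndarray, mask_b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean 3D masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)


# Placeholder volumes (e.g., fixed follow-up scan vs. warped pre-operative scan).
rng = np.random.default_rng(0)
fixed = rng.random((64, 64, 64)).astype(np.float32)
warped = rng.random((64, 64, 64)).astype(np.float32)

ssim_value = structural_similarity(fixed, warped, data_range=1.0)
dice_value = dice_score(fixed > 0.5, warped > 0.5)
print(f"SSIM: {ssim_value:.3f}, Dice: {dice_value:.3f}")
```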

Conclusions: The developed deep learning-based model can perform end-to-end deformable registration of a pair of 3D MRI scans for glioma patients without human intervention. The model could provide accurate, efficient, and robust deformable registration without requiring pre-alignment or labeling. It outperformed the state-of-the-art VoxelMorph learning-based deformable registration algorithms and other supervised/unsupervised deep learning-based methods reported in the literature.

Source journal

Radiation Oncology (ONCOLOGY; RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 6.50
Self-citation rate: 2.80%
Articles published per year: 181
Review time: 3-6 weeks
Journal description: Radiation Oncology encompasses all aspects of research that impacts the treatment of cancer using radiation. It publishes findings in molecular and cellular radiation biology, radiation physics, radiation technology, and clinical oncology.