Lightweight cross-resolution coarse-to-fine network for efficient deformable medical image registration.

Medical Physics · Pub Date: 2025-04-25 · DOI: 10.1002/mp.17827
Jun Liu, Nuo Shen, Wenyi Wang, Xiangyu Li, Wei Wang, Yongfeng Yuan, Ye Tian, Gongning Luo, Kuanquan Wang
{"title":"Lightweight cross-resolution coarse-to-fine network for efficient deformable medical image registration.","authors":"Jun Liu, Nuo Shen, Wenyi Wang, Xiangyu Li, Wei Wang, Yongfeng Yuan, Ye Tian, Gongning Luo, Kuanquan Wang","doi":"10.1002/mp.17827","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Accurate and efficient deformable medical image registration is crucial in medical image analysis. While recent deep learning-based registration methods have achieved state-of-the-art accuracy, they often suffer from extensive network parameters and slow inference times, leading to inefficiency. Efforts to reduce model size and input resolution can improve computational efficiency but frequently result in suboptimal accuracy.</p><p><strong>Purpose: </strong>To address the trade-off between high accuracy and efficiency, we propose a Lightweight Cross-Resolution Coarse-to-Fine registration framework, termed LightCRCF.</p><p><strong>Methods: </strong>Our method is built on an ultra-lightweight U-Net architecture with only 0.1 million parameters, offering remarkable efficiency. To mitigate accuracy degradation resulting from fewer parameters while preserving the lightweight nature of the networks, LightCRCF introduces three key innovations as follows: (1) selecting an efficient cross-resolution coarse-to-fine (C2F) registration strategy and integrating it into the lightweight network to progressively decompose the deformation fields into multiresolution subfields to capture fine-grained deformations; (2) a Texture-aware Reparameterization (TaRep) module that integrates Sobel and Laplacian operators to extract rich textural information; (3) a Group-flow Reparameterization (GfRep) module that captures diverse deformation modes by decomposing the deformation field into multiple groups. Furthermore, we introduce a structural reparameterization technique that enhances training accuracy through multibranch structures of the TaRep and GfRep modules, while maintaining efficient inference by equivalently transforming these multibranch structures into single-branch standard convolutions.</p><p><strong>Results: </strong>We evaluate LightCRCF against various methods on the three public MRI datasets (LPBA, OASIS, and ACDC) and one CT dataset (abdomen CT). Following the previous data division methods, the LPBA dataset comprises 30 training image pairs and nine testing image pairs. For the OASIS dataset, the training, validation, and testing data consist of 1275, 110, and 660 image pairs, respectively. Similarly, for the ACDC dataset, the training, validation, and testing data include 180, 20, and 100 image pairs, respectively. For intersubject registration of the abdomen CT dataset, there are 380 training pairs, six validation pairs, and 42 testing pairs. Compared to state-of-the-art C2F methods, LightCRCF achieves comparable accuracy scores (DSC, HD95, and MSE), while demonstrating significantly superior performance across all efficiency metrics (Params, VRAM, FLOPs, and inference time). Relative to efficiency-first approaches, LightCRCF significantly outperforms these methods in accuracy metrics.</p><p><strong>Conclusions: </strong>Our LightCRCF method offers a favorable trade-off between accuracy and efficiency, maintaining high accuracy while achieving superior efficiency, thereby highlighting its potential for clinical applications. 
The code will be available at https://github.com/PerceptionComputingLab/LightCRCF.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/mp.17827","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Accurate and efficient deformable medical image registration is crucial in medical image analysis. While recent deep learning-based registration methods have achieved state-of-the-art accuracy, they often suffer from large parameter counts and slow inference, making them inefficient in practice. Reducing model size or input resolution can improve computational efficiency but frequently comes at the cost of accuracy.

Purpose: To address the trade-off between high accuracy and efficiency, we propose a Lightweight Cross-Resolution Coarse-to-Fine registration framework, termed LightCRCF.

Methods: Our method is built on an ultra-lightweight U-Net architecture with only 0.1 million parameters, offering remarkable efficiency. To mitigate the accuracy degradation that comes with fewer parameters while preserving the lightweight nature of the network, LightCRCF introduces three key innovations: (1) an efficient cross-resolution coarse-to-fine (C2F) registration strategy, integrated into the lightweight network, that progressively decomposes the deformation field into multiresolution subfields to capture fine-grained deformations; (2) a Texture-aware Reparameterization (TaRep) module that integrates Sobel and Laplacian operators to extract rich textural information; (3) a Group-flow Reparameterization (GfRep) module that captures diverse deformation modes by decomposing the deformation field into multiple groups. Furthermore, we introduce a structural reparameterization technique that improves training accuracy through the multibranch structures of the TaRep and GfRep modules, while keeping inference efficient by equivalently transforming these multibranch structures into single-branch standard convolutions, as illustrated in the sketch below.
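To make the structural reparameterization concrete, here is a minimal PyTorch sketch (not the authors' released code; 2D and single-module for brevity, and the class name TaRepBlock with its merge() helper are illustrative choices of ours). It trains a learnable 3x3 convolution in parallel with fixed Sobel and Laplacian branches, then folds the three kernels into one standard convolution for inference:

    # Minimal sketch of the structural-reparameterization idea (not the
    # authors' code): train a multibranch block, then fold it into one
    # plain single-branch 3x3 convolution for inference.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TaRepBlock(nn.Module):
        """Hypothetical texture-aware block: learnable 3x3 conv plus fixed
        per-channel Sobel and Laplacian branches (2D shown for brevity)."""

        def __init__(self, channels: int):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            sobel = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
            laplacian = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
            k_sobel = torch.zeros(channels, channels, 3, 3)
            k_lap = torch.zeros(channels, channels, 3, 3)
            for c in range(channels):  # each fixed filter acts per channel
                k_sobel[c, c] = sobel
                k_lap[c, c] = laplacian
            self.register_buffer("k_sobel", k_sobel)
            self.register_buffer("k_lap", k_lap)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Training-time multibranch form: three parallel convolutions.
            return (self.conv(x)
                    + F.conv2d(x, self.k_sobel, padding=1)
                    + F.conv2d(x, self.k_lap, padding=1))

        def merge(self) -> nn.Conv2d:
            # Convolution is linear, so summing the kernels reproduces the
            # sum of the branch outputs: one single-branch conv at inference.
            fused = nn.Conv2d(self.conv.in_channels, self.conv.out_channels,
                              3, padding=1, bias=False)
            with torch.no_grad():
                fused.weight.copy_(self.conv.weight + self.k_sobel + self.k_lap)
            return fused

    block = TaRepBlock(4).eval()
    x = torch.randn(1, 4, 16, 16)
    assert torch.allclose(block(x), block.merge()(x), atol=1e-5)

Because convolution is linear, summing the three kernels reproduces the sum of the three branch outputs exactly, so the merged single-branch layer matches the training-time multibranch block; the same folding idea would apply to the grouped branches of GfRep.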

Results: We evaluate LightCRCF against a range of methods on three public MRI datasets (LPBA, OASIS, and ACDC) and one CT dataset (abdomen CT). Following previously published data splits, the LPBA dataset comprises 30 training image pairs and nine testing image pairs. For the OASIS dataset, the training, validation, and testing sets contain 1275, 110, and 660 image pairs, respectively; for the ACDC dataset, 180, 20, and 100 image pairs, respectively. For intersubject registration on the abdomen CT dataset, there are 380 training pairs, six validation pairs, and 42 testing pairs. Compared with state-of-the-art C2F methods, LightCRCF achieves comparable accuracy (DSC, HD95, and MSE) while performing significantly better on all efficiency metrics (Params, VRAM, FLOPs, and inference time). Relative to efficiency-first approaches, LightCRCF significantly outperforms them on accuracy metrics.
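For reference, the Dice similarity coefficient (DSC), the primary accuracy metric above, measures overlap between the warped moving segmentation and the fixed segmentation. A minimal NumPy version (the function name and the handling of empty masks are our choices, not taken from the paper):

    import numpy as np

    def dice(pred: np.ndarray, target: np.ndarray) -> float:
        """Dice similarity coefficient between two binary label masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        denom = pred.sum() + target.sum()
        if denom == 0:  # both masks empty: define as perfect overlap
            return 1.0
        return 2.0 * np.logical_and(pred, target).sum() / denom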

Conclusions: Our LightCRCF method offers a favorable trade-off between accuracy and efficiency, maintaining high accuracy while achieving superior efficiency, thereby highlighting its potential for clinical applications. The code will be available at https://github.com/PerceptionComputingLab/LightCRCF.
