Jun Liu, Nuo Shen, Wenyi Wang, Xiangyu Li, Wei Wang, Yongfeng Yuan, Ye Tian, Gongning Luo, Kuanquan Wang
Medical Physics, vol. 52, no. 7 (2025)
DOI: 10.1002/mp.17827
Published: 2025-04-25 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1002/mp.17827
Lightweight cross-resolution coarse-to-fine network for efficient deformable medical image registration
Background
Accurate and efficient deformable medical image registration is crucial in medical image analysis. While recent deep learning-based registration methods have achieved state-of-the-art accuracy, they often carry large parameter counts and long inference times, making them inefficient. Reducing model size or input resolution can improve computational efficiency but frequently results in suboptimal accuracy.
Purpose
To address the trade-off between high accuracy and efficiency, we propose a Lightweight Cross-Resolution Coarse-to-Fine registration framework, termed LightCRCF.
Methods
Our method is built on an ultra-lightweight U-Net architecture with only 0.1 million parameters, offering remarkable efficiency. To mitigate the accuracy degradation that comes with fewer parameters while preserving the lightweight nature of the network, LightCRCF introduces three key innovations: (1) an efficient cross-resolution coarse-to-fine (C2F) registration strategy, integrated into the lightweight network, that progressively decomposes the deformation field into multiresolution subfields to capture fine-grained deformations; (2) a Texture-aware Reparameterization (TaRep) module that integrates Sobel and Laplacian operators to extract rich textural information; and (3) a Group-flow Reparameterization (GfRep) module that captures diverse deformation modes by decomposing the deformation field into multiple groups. Furthermore, we introduce a structural reparameterization technique that improves training accuracy through the multibranch structures of the TaRep and GfRep modules, while maintaining efficient inference by equivalently transforming these multibranch structures into single-branch standard convolutions.
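The reparameterization described above rests on the linearity of convolution: parallel branches that convolve the same input with different kernels (e.g., fixed Sobel and Laplacian operators alongside a learned kernel) and sum their outputs are mathematically equivalent to a single convolution with the summed kernel. The sketch below illustrates only this general principle; the kernel choices and the toy pure-Python convolution are assumptions for clarity, not the LightCRCF implementation.

```python
# Minimal sketch of structural reparameterization: a multibranch convolution
# (learned kernel + fixed Sobel + fixed Laplacian) collapses into a
# single-branch convolution because convolution is linear in the kernel.
# Kernels and conv2d here are illustrative, not the paper's code.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # horizontal-gradient operator
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]    # second-order edge operator
LEARNED = [[0.1, 0.2, 0.1], [0.0, 0.5, 0.0], [0.1, 0.2, 0.1]]  # stand-in learned kernel

def conv2d(img, kernel):
    """Valid-mode 2D convolution (no padding) of img with a 3x3 kernel."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0.0
            for di in range(3):
                for dj in range(3):
                    acc += img[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def add_maps(a, b):
    """Elementwise sum of two feature maps of equal shape."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def merge_kernels(kernels):
    """Sum the per-branch 3x3 kernels into one equivalent kernel."""
    return [[sum(k[i][j] for k in kernels) for j in range(3)] for i in range(3)]

def multibranch(img):
    """Training-time view: run each branch, then sum the feature maps."""
    out = conv2d(img, LEARNED)
    out = add_maps(out, conv2d(img, SOBEL_X))
    return add_maps(out, conv2d(img, LAPLACIAN))

def single_branch(img):
    """Inference-time view: one convolution with the merged kernel."""
    return conv2d(img, merge_kernels([LEARNED, SOBEL_X, LAPLACIAN]))
```

For any input, `multibranch` and `single_branch` produce identical outputs, which is what lets the multibranch training structure be folded into a single standard convolution at inference time with no change in accuracy.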
Results
We evaluate LightCRCF against a range of methods on three public MRI datasets (LPBA, OASIS, and ACDC) and one CT dataset (abdomen CT). Following previously established data splits, the LPBA dataset comprises 30 training image pairs and nine testing image pairs. For the OASIS dataset, the training, validation, and testing sets consist of 1275, 110, and 660 image pairs, respectively; for the ACDC dataset, they comprise 180, 20, and 100 image pairs, respectively. For intersubject registration on the abdomen CT dataset, there are 380 training pairs, six validation pairs, and 42 testing pairs. Compared to state-of-the-art C2F methods, LightCRCF achieves comparable accuracy scores (DSC, HD95, and MSE) while performing significantly better on all efficiency metrics (Params, VRAM, FLOPs, and inference time). Relative to efficiency-first approaches, LightCRCF significantly outperforms them on accuracy metrics.
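The DSC accuracy metric reported above has a standard definition: twice the overlap of two segmentation label maps divided by their combined size. As a point of reference (this is the textbook metric, not code from the paper), a minimal implementation over binary masks might look like:

```python
# Standard Dice similarity coefficient (DSC) over two binary masks, as
# commonly used to score registration accuracy via warped segmentation
# labels. Textbook definition only; not LightCRCF code.

def dice(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for flat binary masks (lists of 0/1)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # both empty -> perfect overlap
```

A DSC of 1.0 indicates perfect overlap between the warped and fixed labels; 0.0 indicates no overlap.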
Conclusions
Our LightCRCF method offers a favorable trade-off between accuracy and efficiency, maintaining high accuracy while achieving superior efficiency, thereby highlighting its potential for clinical applications. The code will be available at https://github.com/PerceptionComputingLab/LightCRCF.
Journal Introduction
Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in:
1) Basic science developments with high potential for clinical translation
2) Clinical applications of cutting-edge engineering and physics innovations
3) Broadly applicable and innovative clinical physics developments
Medical Physics is a journal of global scope and reach. By publishing in Medical Physics, your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.