Unsupervised retinal image registration based on D-STUNet and progressive keypoint screening strategy.

IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Xiangyu Deng, Jiayi Kang
{"title":"基于D-STUNet和渐进关键点筛选策略的无监督视网膜图像配准。","authors":"Xiangyu Deng, Jiayi Kang","doi":"10.1088/2057-1976/ade9c6","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective</i>. Retinal image registration improves the accuracy and validity of a doctor's diagnosis and holds a crucial role in the monitoring and treatment of associated diseases. However, most existing image registration methods have limitations in identifying retinal vascular features, making it difficult to achieve desirable results in retinal image registration tasks. To solve this problem, a fusion network of Swin Transformer and U-Net, improved by Differential Multi-scale Convolutional Block Attention Module with Residual Mechanism (DMCR), named D-STUNet, is proposed in conjunction with the designed Progressive Keypoint Screening (PKS) strategy.</p><p><strong>Approach: </strong>The D-STUNet network is primarily based on an encoder-decoder framework, and employs DMCR for the improvement and fusion of the Swin Transformer and U-Net networks. Among them, the DMCR module enhances the ability to focus on retinal vascular features, which effectively improves the accuracy of retinal image registration in the event of limited data. Simultaneously, the network introduces the PKS strategy to enable the gradual accumulation of effective keypoint information in the course of the training, which ensures that the keypoints are more concentrated in the retinal vascular region, thus enhancing the matching rate and overall detection effect.</p><p><strong>Main results: </strong>The registration validation is conducted on the publicly accessible dataset Fundus Image Registration Dataset (FIRE) and compare it with nine algorithms. The experimental results show that the algorithm achieves an acceptance rate of 98.50%, a failure rate of 0, and an inaccuracy rate of 1.50%. In the area under the curve (AUC) metric, AUC for the Easy group is 0.929, while the AUC for the Mod and Hard groups are 0.883 and 0.724, respectively. The mean area under the curve (mAUC) across all comparison algorithms is the highest, outperforming the second-best algorithm by 0.09. Although it did not reach the optimum in certain subcategories (such as AUC-easy), its overall performance is significantly superior to existing methods.</p><p><strong>Significance: </strong>The proposed network is able to effectively capture local features such as complex vascular structures in retinal images, providing a new method to improve the registration accuracy of retinal images.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3000,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unsupervised retinal image registration based on D-STUNet and progressive keypoint screening strategy.\",\"authors\":\"Xiangyu Deng, Jiayi Kang\",\"doi\":\"10.1088/2057-1976/ade9c6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><i>Objective</i>. Retinal image registration improves the accuracy and validity of a doctor's diagnosis and holds a crucial role in the monitoring and treatment of associated diseases. However, most existing image registration methods have limitations in identifying retinal vascular features, making it difficult to achieve desirable results in retinal image registration tasks. 
To solve this problem, a fusion network of Swin Transformer and U-Net, improved by Differential Multi-scale Convolutional Block Attention Module with Residual Mechanism (DMCR), named D-STUNet, is proposed in conjunction with the designed Progressive Keypoint Screening (PKS) strategy.</p><p><strong>Approach: </strong>The D-STUNet network is primarily based on an encoder-decoder framework, and employs DMCR for the improvement and fusion of the Swin Transformer and U-Net networks. Among them, the DMCR module enhances the ability to focus on retinal vascular features, which effectively improves the accuracy of retinal image registration in the event of limited data. Simultaneously, the network introduces the PKS strategy to enable the gradual accumulation of effective keypoint information in the course of the training, which ensures that the keypoints are more concentrated in the retinal vascular region, thus enhancing the matching rate and overall detection effect.</p><p><strong>Main results: </strong>The registration validation is conducted on the publicly accessible dataset Fundus Image Registration Dataset (FIRE) and compare it with nine algorithms. The experimental results show that the algorithm achieves an acceptance rate of 98.50%, a failure rate of 0, and an inaccuracy rate of 1.50%. In the area under the curve (AUC) metric, AUC for the Easy group is 0.929, while the AUC for the Mod and Hard groups are 0.883 and 0.724, respectively. The mean area under the curve (mAUC) across all comparison algorithms is the highest, outperforming the second-best algorithm by 0.09. Although it did not reach the optimum in certain subcategories (such as AUC-easy), its overall performance is significantly superior to existing methods.</p><p><strong>Significance: </strong>The proposed network is able to effectively capture local features such as complex vascular structures in retinal images, providing a new method to improve the registration accuracy of retinal images.</p>\",\"PeriodicalId\":8896,\"journal\":{\"name\":\"Biomedical Physics & Engineering Express\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2025-07-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Physics & Engineering Express\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/2057-1976/ade9c6\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Physics & Engineering Express","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/2057-1976/ade9c6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Objective. Retinal image registration improves the accuracy and validity of a doctor's diagnosis and plays a crucial role in the monitoring and treatment of associated diseases. However, most existing image registration methods have limitations in identifying retinal vascular features, making it difficult to achieve desirable results in retinal image registration tasks. To address this problem, D-STUNet, a fusion network of Swin Transformer and U-Net improved by a Differential Multi-scale Convolutional Block Attention Module with Residual Mechanism (DMCR), is proposed in conjunction with a designed Progressive Keypoint Screening (PKS) strategy.
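
The abstract does not describe the internals of the DMCR module, but its name suggests multi-scale convolutions combined with CBAM-style channel and spatial attention and a residual connection. The following is a minimal, hypothetical PyTorch sketch of such a block, intended only to illustrate the general idea; the class names (DMCRBlock, ChannelAttention, SpatialAttention), the kernel sizes, and the reading of "differential" as a difference between two convolution scales are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a DMCR-style block: multi-scale convolutions,
# CBAM-like channel/spatial attention, and a residual connection.
# The real DMCR design in the paper may differ.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Pool over spatial dimensions, then reweight channels (CBAM-style).
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Per-pixel attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))


class DMCRBlock(nn.Module):
    """Assumed layout: differential multi-scale convolutions + attention + residual."""

    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        # "Differential" is read here as the difference between two
        # receptive-field scales, which tends to emphasise thin vessel edges.
        diff = self.conv3(x) - self.conv5(x)
        out = self.sa(self.ca(diff))
        return x + out  # residual mechanism


# Example: refine a 64-channel feature map from an encoder-decoder stage.
x = torch.randn(1, 64, 128, 128)
print(DMCRBlock(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```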

Approach: The D-STUNet network is built on an encoder-decoder framework and employs the DMCR module to improve and fuse the Swin Transformer and U-Net branches. The DMCR module strengthens the network's focus on retinal vascular features, which effectively improves the accuracy of retinal image registration when training data are limited. In addition, the network introduces the PKS strategy to gradually accumulate effective keypoint information during training, ensuring that keypoints are concentrated in the retinal vascular region and thereby improving the matching rate and overall detection performance.
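
The PKS strategy is described only at a high level: reliable keypoints are accumulated gradually during training. One hedged way to picture this, sketched below, is a confidence threshold that tightens over the epochs so that only high-response keypoints, which tend to lie on vessel structures, survive the screen. The function progressive_screen, the linear schedule, and the threshold values are illustrative assumptions rather than the paper's actual procedure.

```python
# Hypothetical sketch of a progressive keypoint screening (PKS) step.
# Assumption: each candidate keypoint carries a confidence/response score
# in [0, 1], and the acceptance threshold ramps up as training progresses.
import numpy as np


def progressive_screen(scores, epoch, total_epochs, t_start=0.1, t_end=0.6):
    """Return a boolean mask of keypoints kept at this training stage."""
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    threshold = t_start + (t_end - t_start) * progress
    return scores >= threshold


# Early epochs keep most candidates; later epochs keep only
# high-confidence keypoints, which tend to sit on vessels.
scores = np.array([0.15, 0.40, 0.55, 0.80, 0.95])
print(progressive_screen(scores, epoch=0, total_epochs=100))   # loose screen
print(progressive_screen(scores, epoch=99, total_epochs=100))  # strict screen
```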

Main results: Registration is validated on the publicly accessible Fundus Image Registration Dataset (FIRE) and compared against nine algorithms. The experimental results show that the proposed algorithm achieves an acceptance rate of 98.50%, a failure rate of 0%, and an inaccuracy rate of 1.50%. For the area under the curve (AUC) metric, the AUC of the Easy group is 0.929, while the AUCs of the Mod and Hard groups are 0.883 and 0.724, respectively. The mean area under the curve (mAUC) is the highest among all compared algorithms, exceeding the second-best algorithm by 0.09. Although the method does not achieve the best result in certain subcategories (such as AUC-Easy), its overall performance is significantly superior to that of existing methods.
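
For context, the AUC reported on FIRE is typically derived from the fraction of image pairs registered successfully as a function of a registration-error threshold. The sketch below shows one such computation; the threshold range, the use of a simple mean over thresholds as the normalised area, and the averaging of the three group AUCs into an mAUC are assumptions about the evaluation protocol, not details taken from the paper.

```python
# Sketch of a FIRE-style registration AUC: for each error threshold, count
# the fraction of image pairs whose mean control-point error falls below it,
# then take the normalised area under that success-rate curve.
import numpy as np


def registration_auc(errors, max_threshold=25.0, steps=100):
    """errors: (N,) mean control-point errors in pixels, one per image pair."""
    errors = np.asarray(errors, dtype=float)
    thresholds = np.linspace(0.0, max_threshold, steps)
    success = np.array([(errors <= t).mean() for t in thresholds])
    # Mean success rate over evenly spaced thresholds approximates the
    # normalised area under the curve, in [0, 1].
    return success.mean()


# One possible aggregation of the reported group AUCs into an mAUC
# (a simple mean; the paper may aggregate over all pairs instead).
auc_easy, auc_mod, auc_hard = 0.929, 0.883, 0.724  # values from the abstract
print(round((auc_easy + auc_mod + auc_hard) / 3, 3))  # 0.845
```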

Significance: The proposed network is able to effectively capture local features such as complex vascular structures in retinal images, providing a new method to improve the registration accuracy of retinal images.

Source journal
Biomedical Physics & Engineering Express (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 2.80
Self-citation rate: 0.00%
Annual articles: 153
Journal description: BPEX is an inclusive, international, multidisciplinary journal devoted to publishing new research on any application of physics and/or engineering in medicine and/or biology. Characterized by a broad geographical coverage and a fast-track peer-review process, relevant topics include all aspects of biophysics, medical physics and biomedical engineering. Papers that are almost entirely clinical or biological in their focus are not suitable. The journal has an emphasis on publishing interdisciplinary work and bringing research fields together, encompassing experimental, theoretical and computational work.