EyeLiner: A Deep Learning Pipeline for Longitudinal Image Registration Using Fundus Landmarks

IF 3.2 · Q1 · Ophthalmology
Yoga Advaith Veturi MSc, Steve McNamara OD, Scott Kinder MS, Christopher William Clark MS, Upasana Thakuria MS, Benjamin Bearce MS, Niranjan Manoharan MD, Naresh Mandava MD, Malik Y. Kahook MD, Praveer Singh PhD, Jayashree Kalpathy-Cramer PhD
{"title":"眼线笔:使用眼底地标进行纵向图像配准的深度学习管道。","authors":"Yoga Advaith Veturi MSc ,&nbsp;Steve McNamara OD ,&nbsp;Scott Kinder MS,&nbsp;Christopher William Clark MS,&nbsp;Upasana Thakuria MS,&nbsp;Benjamin Bearce MS,&nbsp;Niranjan Manoharan MD,&nbsp;Naresh Mandava MD,&nbsp;Malik Y. Kahook MD,&nbsp;Praveer Singh PhD,&nbsp;Jayashree Kalpathy-Cramer PhD","doi":"10.1016/j.xops.2024.100664","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, “EyeLiner,” for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.</div></div><div><h3>Design</h3><div>EyeLiner registers a “moving” image to a “fixed” image using a DL-based keypoint matching algorithm.</div></div><div><h3>Participants</h3><div>We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).</div></div><div><h3>Methods</h3><div>Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.</div></div><div><h3>Main Outcome Measures</h3><div>We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.</div></div><div><h3>Results</h3><div>EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitative results show that the MD decreased for this model after alignment from 321.32 to 3.74 pixels for FIRE, 9.86 to 2.03 pixels for CORIS, and 25.23 to 5.94 pixels for SIGF. We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUC<sub>FIRE</sub> = 0.76, AUC<sub>CORIS</sub> = 0.83, AUC<sub>SIGF</sub> = 0.74).</div></div><div><h3>Conclusions</h3><div>Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. 
We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100664"},"PeriodicalIF":3.2000,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773051/pdf/","citationCount":"0","resultStr":"{\"title\":\"EyeLiner\",\"authors\":\"Yoga Advaith Veturi MSc ,&nbsp;Steve McNamara OD ,&nbsp;Scott Kinder MS,&nbsp;Christopher William Clark MS,&nbsp;Upasana Thakuria MS,&nbsp;Benjamin Bearce MS,&nbsp;Niranjan Manoharan MD,&nbsp;Naresh Mandava MD,&nbsp;Malik Y. Kahook MD,&nbsp;Praveer Singh PhD,&nbsp;Jayashree Kalpathy-Cramer PhD\",\"doi\":\"10.1016/j.xops.2024.100664\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Objective</h3><div>Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, “EyeLiner,” for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.</div></div><div><h3>Design</h3><div>EyeLiner registers a “moving” image to a “fixed” image using a DL-based keypoint matching algorithm.</div></div><div><h3>Participants</h3><div>We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).</div></div><div><h3>Methods</h3><div>Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.</div></div><div><h3>Main Outcome Measures</h3><div>We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.</div></div><div><h3>Results</h3><div>EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitative results show that the MD decreased for this model after alignment from 321.32 to 3.74 pixels for FIRE, 9.86 to 2.03 pixels for CORIS, and 25.23 to 5.94 pixels for SIGF. 
We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUC<sub>FIRE</sub> = 0.76, AUC<sub>CORIS</sub> = 0.83, AUC<sub>SIGF</sub> = 0.74).</div></div><div><h3>Conclusions</h3><div>Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>\",\"PeriodicalId\":74363,\"journal\":{\"name\":\"Ophthalmology science\",\"volume\":\"5 2\",\"pages\":\"Article 100664\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773051/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ophthalmology science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666914524002008\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ophthalmology science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666914524002008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract


Objective

Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, “EyeLiner,” for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.

Design

EyeLiner registers a “moving” image to a “fixed” image using a DL-based keypoint matching algorithm.
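
To make the "moving"-to-"fixed" framing concrete, here is a minimal sketch (not the authors' released code) of the final warping step: once a transformation from the moving to the fixed image has been estimated, applying it is a single OpenCV call. The 3 × 3 homography `H` and the image file names are hypothetical inputs.

```python
import cv2
import numpy as np

def register_moving_to_fixed(moving_bgr: np.ndarray,
                             H: np.ndarray,
                             fixed_hw: tuple) -> np.ndarray:
    """Warp the moving image into the fixed image's coordinate frame."""
    h, w = fixed_hw
    return cv2.warpPerspective(moving_bgr, H, (w, h))  # dsize is (width, height)

# Hypothetical usage: H would come from EyeLiner's keypoint detection and
# matching stages, described under Methods below.
# moving = cv2.imread("moving_cfp.png"); fixed = cv2.imread("fixed_cfp.png")
# aligned = register_moving_to_fixed(moving, H, fixed.shape[:2])
```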

Participants

We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).

Methods

Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.
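
The detect-match-fit flow above can be sketched as follows. This is illustrative only: the CNN detector is assumed to have already produced keypoint coordinates and descriptors, and simple mutual-nearest-neighbor matching stands in for the paper's transformer-based matcher; only the RANSAC transform fit uses a real OpenCV API.

```python
import numpy as np
import cv2

def mutual_nn_match(desc_moving: np.ndarray, desc_fixed: np.ndarray) -> np.ndarray:
    """Pair descriptors that are each other's nearest neighbor.

    A simple stand-in for the transformer-based matcher: descriptors are
    assumed L2-normalized, so the dot product is cosine similarity.
    """
    sim = desc_moving @ desc_fixed.T
    nn_mf = sim.argmax(axis=1)            # best fixed match for each moving keypoint
    nn_fm = sim.argmax(axis=0)            # best moving match for each fixed keypoint
    idx = np.arange(len(desc_moving))
    mutual = nn_fm[nn_mf] == idx          # keep only mutually consistent pairs
    return np.stack([idx[mutual], nn_mf[mutual]], axis=1)

def fit_transform(kpts_moving: np.ndarray,
                  kpts_fixed: np.ndarray,
                  matches: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine transform to matched vessel keypoints, with RANSAC."""
    src = kpts_moving[matches[:, 0]].astype(np.float32)
    dst = kpts_fixed[matches[:, 1]].astype(np.float32)
    M, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return M
```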

Main Outcome Measures

We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.
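
Both outcome measures are easy to state in code. A minimal sketch follows; sweeping the error threshold up to 25 pixels mirrors the usual FIRE protocol, but that range is an assumption rather than something stated in this abstract.

```python
import numpy as np

def mean_distance(fixed_pts: np.ndarray, registered_pts: np.ndarray) -> float:
    """MD: mean Euclidean distance (pixels) between paired annotated keypoints."""
    return float(np.linalg.norm(fixed_pts - registered_pts, axis=1).mean())

def success_auc(per_pair_errors: np.ndarray, max_threshold: float = 25.0) -> float:
    """Normalized area under the success-rate-vs-error-threshold curve.

    For each threshold t, success is the fraction of image pairs whose
    registration error is at most t; the AUC is the trapezoidal area of
    that curve divided by the threshold range, giving a value in [0, 1].
    """
    thresholds = np.linspace(0.0, max_threshold, 100)
    success = np.array([(per_pair_errors <= t).mean() for t in thresholds])
    return float(np.trapz(success, thresholds) / max_threshold)
```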

Results

EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitative results show that the MD decreased for this model after alignment from 321.32 to 3.74 pixels for FIRE, 9.86 to 2.03 pixels for CORIS, and 25.23 to 5.94 pixels for SIGF. We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUCFIRE = 0.76, AUCCORIS = 0.83, AUCSIGF = 0.74).
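
The registration checkerboards mentioned above are a standard qualitative check: tiles from the fixed and registered images are interleaved, so residual misalignment shows up as vessels breaking at tile borders, while a good registration yields vessels that run continuously across tiles. A sketch of how such a composite can be built (assuming both images share the same shape; the tile size is arbitrary):

```python
import numpy as np

def checkerboard(fixed: np.ndarray, registered: np.ndarray, tile: int = 64) -> np.ndarray:
    """Compose alternating square tiles from two same-shape images."""
    out = fixed.copy()
    h, w = fixed.shape[:2]
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            if ((r // tile) + (c // tile)) % 2 == 1:
                out[r:r + tile, c:c + tile] = registered[r:r + tile, c:c + tile]
    return out
```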

Conclusions

Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.