Automated Identification of Clinically Relevant Regions in Glaucoma OCT Reports Using Expert Eye Tracking Data and Deep Learning

IF 2.6 · Medicine, Tier 3 · Q2 OPHTHALMOLOGY
Ye Tian, Anurag Sharma, Shubh Mehta, Shubham Kaushal, Jeffrey M Liebmann, George A Cioffi, Kaveri A Thakoor
{"title":"利用专家眼球跟踪数据和深度学习自动识别青光眼 OCT 报告中的临床相关区域。","authors":"Ye Tian, Anurag Sharma, Shubh Mehta, Shubham Kaushal, Jeffrey M Liebmann, George A Cioffi, Kaveri A Thakoor","doi":"10.1167/tvst.13.10.24","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To propose a deep learning-based approach for predicting the most-fixated regions on optical coherence tomography (OCT) reports using eye tracking data of ophthalmologists, assisting them in finding medically salient image regions.</p><p><strong>Methods: </strong>We collected eye tracking data of ophthalmology residents, fellows, and faculty as they viewed OCT reports to detect glaucoma. We used a U-Net model as the deep learning backbone and quantized eye tracking coordinates by dividing the input report into an 11 × 11 grid. The model was trained to predict the grids on which fixations would land in unseen OCT reports. We investigated the contribution of different variables, including the viewer's level of expertise, model architecture, and number of eye gaze patterns included in training.</p><p><strong>Results: </strong>Our approach predicted most-fixated regions in OCT reports with precision of 0.723, recall of 0.562, and f1-score of 0.609. We found that using a grid-based eye tracking structure enabled efficient training and using a U-Net backbone led to the best performance.</p><p><strong>Conclusions: </strong>Our approach has the potential to assist ophthalmologists in diagnosing glaucoma by predicting the most medically salient regions on OCT reports. Our study suggests the value of eye tracking in guiding deep learning algorithms toward informative regions when experts may not be accessible.</p><p><strong>Translational relevance: </strong>By suggesting important OCT report regions for a glaucoma diagnosis, our model could aid in medical education and serve as a precursor for self-supervised deep learning approaches to expedite early detection of irreversible vision loss owing to glaucoma.</p>","PeriodicalId":23322,"journal":{"name":"Translational Vision Science & Technology","volume":"13 10","pages":"24"},"PeriodicalIF":2.6000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482640/pdf/","citationCount":"0","resultStr":"{\"title\":\"Automated Identification of Clinically Relevant Regions in Glaucoma OCT Reports Using Expert Eye Tracking Data and Deep Learning.\",\"authors\":\"Ye Tian, Anurag Sharma, Shubh Mehta, Shubham Kaushal, Jeffrey M Liebmann, George A Cioffi, Kaveri A Thakoor\",\"doi\":\"10.1167/tvst.13.10.24\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>To propose a deep learning-based approach for predicting the most-fixated regions on optical coherence tomography (OCT) reports using eye tracking data of ophthalmologists, assisting them in finding medically salient image regions.</p><p><strong>Methods: </strong>We collected eye tracking data of ophthalmology residents, fellows, and faculty as they viewed OCT reports to detect glaucoma. We used a U-Net model as the deep learning backbone and quantized eye tracking coordinates by dividing the input report into an 11 × 11 grid. The model was trained to predict the grids on which fixations would land in unseen OCT reports. 
We investigated the contribution of different variables, including the viewer's level of expertise, model architecture, and number of eye gaze patterns included in training.</p><p><strong>Results: </strong>Our approach predicted most-fixated regions in OCT reports with precision of 0.723, recall of 0.562, and f1-score of 0.609. We found that using a grid-based eye tracking structure enabled efficient training and using a U-Net backbone led to the best performance.</p><p><strong>Conclusions: </strong>Our approach has the potential to assist ophthalmologists in diagnosing glaucoma by predicting the most medically salient regions on OCT reports. Our study suggests the value of eye tracking in guiding deep learning algorithms toward informative regions when experts may not be accessible.</p><p><strong>Translational relevance: </strong>By suggesting important OCT report regions for a glaucoma diagnosis, our model could aid in medical education and serve as a precursor for self-supervised deep learning approaches to expedite early detection of irreversible vision loss owing to glaucoma.</p>\",\"PeriodicalId\":23322,\"journal\":{\"name\":\"Translational Vision Science & Technology\",\"volume\":\"13 10\",\"pages\":\"24\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482640/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Translational Vision Science & Technology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1167/tvst.13.10.24\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Translational Vision Science & Technology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1167/tvst.13.10.24","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Purpose: To propose a deep learning-based approach for predicting the most-fixated regions on optical coherence tomography (OCT) reports using eye tracking data of ophthalmologists, assisting them in finding medically salient image regions.

Methods: We collected eye tracking data of ophthalmology residents, fellows, and faculty as they viewed OCT reports to detect glaucoma. We used a U-Net model as the deep learning backbone and quantized eye tracking coordinates by dividing the input report into an 11 × 11 grid. The model was trained to predict the grids on which fixations would land in unseen OCT reports. We investigated the contribution of different variables, including the viewer's level of expertise, model architecture, and number of eye gaze patterns included in training.
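
As a rough illustration of the grid quantization step described above (a sketch on our part, not the authors' code; the function name, the pixel-coordinate convention, and the example report size are assumptions), raw fixation coordinates could be mapped to an 11 × 11 binary target grid like this:

```python
import numpy as np

GRID_SIZE = 11  # the paper divides each OCT report into an 11 x 11 grid

def quantize_fixations(fixations_xy, report_width, report_height, grid_size=GRID_SIZE):
    """Map raw (x, y) fixation coordinates to a binary grid of fixated cells.

    fixations_xy: array of shape (N, 2) with pixel coordinates on the report.
    Returns a (grid_size, grid_size) array with 1 where at least one fixation landed.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    cell_w = report_width / grid_size
    cell_h = report_height / grid_size
    for x, y in fixations_xy:
        col = min(int(x // cell_w), grid_size - 1)
        row = min(int(y // cell_h), grid_size - 1)
        grid[row, col] = 1.0
    return grid

# Example: three fixations on a hypothetical 1100 x 850 pixel report
fixations = np.array([[120.0, 300.0], [640.0, 410.0], [1050.0, 820.0]])
target = quantize_fixations(fixations, report_width=1100, report_height=850)
print(target.sum())  # number of distinct fixated grid cells -> 3.0
```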

Results: Our approach predicted most-fixated regions in OCT reports with precision of 0.723, recall of 0.562, and f1-score of 0.609. We found that using a grid-based eye tracking structure enabled efficient training and using a U-Net backbone led to the best performance.
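
Purely for reference, grid-level precision, recall, and F1 could be computed from a predicted and a ground-truth 11 × 11 mask as sketched below; the 0.5 threshold and the treatment of fixated cells as positives are assumptions on our part, not evaluation details taken from the paper.

```python
import numpy as np

def grid_prf1(pred_grid, true_grid, threshold=0.5):
    """Precision, recall, and F1 over grid cells, treating fixated cells as positives."""
    pred = (pred_grid >= threshold).astype(int).ravel()
    true = (true_grid >= 0.5).astype(int).ravel()
    tp = int(np.sum((pred == 1) & (true == 1)))  # correctly predicted fixated cells
    fp = int(np.sum((pred == 1) & (true == 0)))  # predicted fixated, actually not
    fn = int(np.sum((pred == 0) & (true == 1)))  # missed fixated cells
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```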

Conclusions: Our approach has the potential to assist ophthalmologists in diagnosing glaucoma by predicting the most medically salient regions on OCT reports. Our study suggests the value of eye tracking in guiding deep learning algorithms toward informative regions when experts may not be accessible.

Translational relevance: By suggesting important OCT report regions for a glaucoma diagnosis, our model could aid in medical education and serve as a precursor for self-supervised deep learning approaches to expedite early detection of irreversible vision loss owing to glaucoma.

Source journal
Translational Vision Science & Technology
Translational Vision Science & Technology Engineering-Biomedical Engineering
CiteScore
5.70
自引率
3.30%
发文量
346
审稿时长
25 weeks
Journal description: Translational Vision Science & Technology (TVST), an official journal of the Association for Research in Vision and Ophthalmology (ARVO), an international organization whose purpose is to advance research worldwide into understanding the visual system and preventing, treating, and curing its disorders, is an online, open-access, peer-reviewed journal emphasizing multidisciplinary research that bridges the gap between basic research and clinical care. A highly qualified and diverse group of Associate Editors and Editorial Board Members is led by Editor-in-Chief Marco Zarbin, MD, PhD, FARVO.

The journal covers a broad spectrum of work, including but not limited to:
- Applications of stem cell technology for regenerative medicine
- Development of new animal models of human diseases
- Tissue bioengineering
- Chemical engineering to improve virus-based gene delivery
- Nanotechnology for drug delivery
- Design and synthesis of artificial extracellular matrices
- Development of a true microsurgical operating environment
- Refining data analysis algorithms to improve in vivo imaging technology
- Results of Phase 1 clinical trials
- Reverse translational ("bedside to bench") research

TVST seeks manuscripts from scientists and clinicians with diverse backgrounds, ranging from basic chemistry to ophthalmic surgery, that will advance or change the way we understand and/or treat vision-threatening diseases. TVST encourages the use of color, multimedia, hyperlinks, program code, and other digital enhancements.