Color Fundus Photography and Deep Learning Applications in Alzheimer Disease

Oana M. Dumitrascu MD, MSc , Xin Li MS , Wenhui Zhu MS , Bryan K. Woodruff MD , Simona Nikolova PhD , Jacob Sobczak , Amal Youssef MD , Siddhant Saxena , Janine Andreev , Richard J. Caselli MD , John J. Chen MD, PhD , Yalin Wang PhD
{"title":"阿尔茨海默病中的彩色眼底摄影和深度学习应用","authors":"Oana M. Dumitrascu MD, MSc ,&nbsp;Xin Li MS ,&nbsp;Wenhui Zhu MS ,&nbsp;Bryan K. Woodruff MD ,&nbsp;Simona Nikolova PhD ,&nbsp;Jacob Sobczak ,&nbsp;Amal Youssef MD ,&nbsp;Siddhant Saxena ,&nbsp;Janine Andreev ,&nbsp;Richard J. Caselli MD ,&nbsp;John J. Chen MD, PhD ,&nbsp;Yalin Wang PhD","doi":"10.1016/j.mcpdig.2024.08.005","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><p>To report the development and performance of 2 distinct deep learning models trained exclusively on retinal color fundus photographs to classify Alzheimer disease (AD).</p></div><div><h3>Patients and Methods</h3><p>Two independent datasets (UK Biobank and our tertiary academic institution) of good-quality retinal photographs derived from patients with AD and controls were used to build 2 deep learning models, between April 1, 2021, and January 30, 2024. ADVAS is a U-Net–based architecture that uses retinal vessel segmentation. ADRET is a bidirectional encoder representations from transformers style self-supervised learning convolutional neural network pretrained on a large data set of retinal color photographs from UK Biobank. The models’ performance to distinguish AD from non-AD was determined using mean accuracy, sensitivity, specificity, and receiving operating curves. The generated attention heatmaps were analyzed for distinctive features.</p></div><div><h3>Results</h3><p>The self-supervised ADRET model had superior accuracy when compared with ADVAS, in both UK Biobank (98.27% vs 77.20%; <em>P</em>&lt;.001) and our institutional testing data sets (98.90% vs 94.17%; <em>P</em>=.04). No major differences were noted between the original and binary vessel segmentation and between both eyes vs single-eye models. Attention heatmaps obtained from patients with AD highlighted regions surrounding small vascular branches as areas of highest relevance to the model decision making.</p></div><div><h3>Conclusion</h3><p>A bidirectional encoder representations from transformers style self-supervised convolutional neural network pretrained on a large data set of retinal color photographs alone can screen symptomatic AD with high accuracy, better than U-Net–pretrained models. To be translated in clinical practice, this methodology requires further validation in larger and diverse populations and integrated techniques to harmonize fundus photographs and attenuate the imaging-associated noise.</p></div>","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"2 4","pages":"Pages 548-558"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949761224000804/pdfft?md5=727bbdca5e1469575c30a5949adff677&pid=1-s2.0-S2949761224000804-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Color Fundus Photography and Deep Learning Applications in Alzheimer Disease\",\"authors\":\"Oana M. Dumitrascu MD, MSc ,&nbsp;Xin Li MS ,&nbsp;Wenhui Zhu MS ,&nbsp;Bryan K. Woodruff MD ,&nbsp;Simona Nikolova PhD ,&nbsp;Jacob Sobczak ,&nbsp;Amal Youssef MD ,&nbsp;Siddhant Saxena ,&nbsp;Janine Andreev ,&nbsp;Richard J. Caselli MD ,&nbsp;John J. 
Chen MD, PhD ,&nbsp;Yalin Wang PhD\",\"doi\":\"10.1016/j.mcpdig.2024.08.005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Objective</h3><p>To report the development and performance of 2 distinct deep learning models trained exclusively on retinal color fundus photographs to classify Alzheimer disease (AD).</p></div><div><h3>Patients and Methods</h3><p>Two independent datasets (UK Biobank and our tertiary academic institution) of good-quality retinal photographs derived from patients with AD and controls were used to build 2 deep learning models, between April 1, 2021, and January 30, 2024. ADVAS is a U-Net–based architecture that uses retinal vessel segmentation. ADRET is a bidirectional encoder representations from transformers style self-supervised learning convolutional neural network pretrained on a large data set of retinal color photographs from UK Biobank. The models’ performance to distinguish AD from non-AD was determined using mean accuracy, sensitivity, specificity, and receiving operating curves. The generated attention heatmaps were analyzed for distinctive features.</p></div><div><h3>Results</h3><p>The self-supervised ADRET model had superior accuracy when compared with ADVAS, in both UK Biobank (98.27% vs 77.20%; <em>P</em>&lt;.001) and our institutional testing data sets (98.90% vs 94.17%; <em>P</em>=.04). No major differences were noted between the original and binary vessel segmentation and between both eyes vs single-eye models. Attention heatmaps obtained from patients with AD highlighted regions surrounding small vascular branches as areas of highest relevance to the model decision making.</p></div><div><h3>Conclusion</h3><p>A bidirectional encoder representations from transformers style self-supervised convolutional neural network pretrained on a large data set of retinal color photographs alone can screen symptomatic AD with high accuracy, better than U-Net–pretrained models. To be translated in clinical practice, this methodology requires further validation in larger and diverse populations and integrated techniques to harmonize fundus photographs and attenuate the imaging-associated noise.</p></div>\",\"PeriodicalId\":74127,\"journal\":{\"name\":\"Mayo Clinic Proceedings. Digital health\",\"volume\":\"2 4\",\"pages\":\"Pages 548-558\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2949761224000804/pdfft?md5=727bbdca5e1469575c30a5949adff677&pid=1-s2.0-S2949761224000804-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mayo Clinic Proceedings. Digital health\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949761224000804\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mayo Clinic Proceedings. Digital health","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949761224000804","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


Objective

To report the development and performance of 2 distinct deep learning models trained exclusively on retinal color fundus photographs to classify Alzheimer disease (AD).

Patients and Methods

Between April 1, 2021, and January 30, 2024, two independent data sets of good-quality retinal photographs from patients with AD and controls (UK Biobank and our tertiary academic institution) were used to build 2 deep learning models. ADVAS is a U-Net–based architecture that uses retinal vessel segmentation. ADRET is a bidirectional encoder representations from transformers (BERT)–style self-supervised convolutional neural network pretrained on a large data set of retinal color photographs from UK Biobank. The models' performance in distinguishing AD from non-AD was assessed using mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The generated attention heatmaps were analyzed for distinctive features.
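
As a point of reference, the sketch below shows how the evaluation metrics named here are typically computed for a binary AD-versus-control classifier. The arrays `y_true` and `y_score` are hypothetical toy values, not the study's data, and this is not the authors' published code.

```python
# Minimal sketch: accuracy, sensitivity, specificity, and ROC metrics for a
# binary AD-vs-control classifier. Labels and scores below are toy values.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # 1 = AD, 0 = control
y_score = np.array([0.1, 0.4, 0.9, 0.8, 0.7, 0.2, 0.6, 0.3])   # model probabilities
y_pred = (y_score >= 0.5).astype(int)                          # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points on the ROC curve

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} AUC={auc:.2f}")
```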

Results

The self-supervised ADRET model had superior accuracy compared with ADVAS in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional testing data sets (98.90% vs 94.17%; P=.04). No major differences were noted between the original and binary vessel segmentation models or between the both-eyes and single-eye models. Attention heatmaps obtained from patients with AD highlighted the regions surrounding small vascular branches as the areas most relevant to the model's decision making.
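
The abstract reports P values for the paired accuracy comparison but does not state which statistical test was used. As an illustration only, a common choice for comparing two classifiers evaluated on the same test images is McNemar's test on their paired correct/incorrect outcomes; the per-image correctness arrays below are hypothetical.

```python
# Illustrative sketch (not the authors' method): McNemar's test comparing two
# classifiers on the same test set, using hypothetical per-image correctness.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

adret_correct = np.array([1, 1, 1, 0, 1, 1, 1, 1, 1, 1])  # 1 = correct prediction
advas_correct = np.array([1, 0, 1, 0, 1, 0, 1, 1, 0, 1])

# 2x2 table of agreement/disagreement between the paired predictions.
table = np.array([
    [np.sum((adret_correct == 1) & (advas_correct == 1)),
     np.sum((adret_correct == 1) & (advas_correct == 0))],
    [np.sum((adret_correct == 0) & (advas_correct == 1)),
     np.sum((adret_correct == 0) & (advas_correct == 0))],
])

result = mcnemar(table, exact=True)  # exact binomial test, suited to small samples
print(f"McNemar statistic={result.statistic}, p-value={result.pvalue:.3f}")
```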

Conclusion

A bidirectional encoder representations from transformers (BERT)–style self-supervised convolutional neural network pretrained on a large data set of retinal color photographs alone can screen for symptomatic AD with high accuracy, better than U-Net–pretrained models. To be translated into clinical practice, this methodology requires further validation in larger and more diverse populations, together with integrated techniques to harmonize fundus photographs and attenuate imaging-associated noise.
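
The call to harmonize fundus photographs is left open-ended. One widely used option (an assumption here, not a method prescribed by the paper) is contrast-limited adaptive histogram equalization (CLAHE) on the lightness channel, sketched below with illustrative OpenCV parameters.

```python
# Minimal sketch: CLAHE-based harmonization of a fundus photograph.
# The clip limit, tile grid, and file names are illustrative assumptions.
import cv2
import numpy as np

def harmonize_fundus(path: str) -> np.ndarray:
    """Equalize local contrast on the L channel of a fundus photo in LAB space."""
    bgr = cv2.imread(path)                       # OpenCV reads images as BGR
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)   # separate lightness from color
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    merged = cv2.merge((clahe.apply(l), a, b))   # equalize lightness only
    return cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)

# Hypothetical usage:
# cv2.imwrite("fundus_clahe.png", harmonize_fundus("fundus.png"))
```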
