Attention-Enhanced Guided Multimodal and Semi-Supervised Networks for Visual Acuity (VA) Prediction after Anti-VEGF Therapy

Impact Factor 2.6 · CAS Tier 3 (Engineering & Technology) · JCR Q2, Computer Science, Information Systems
Yizhen Wang, Yaqi Wang, Xianwen Liu, Weiwei Cui, Peng Jin, Yuxia Cheng, Gangyong Jia
DOI: 10.3390/electronics13183701 · Published 2024-09-18 in Electronics (MDPI)
Citations: 0

Abstract

The development of telemedicine has opened new avenues for diagnosing and treating patients with diabetic macular edema (DME), particularly after anti-vascular endothelial growth factor (anti-VEGF) therapy, where accurate prediction of patients' visual acuity (VA) is important for optimizing follow-up treatment plans. However, current automated prediction methods often require human intervention and offer poor interpretability, making them difficult to apply widely in telemedicine scenarios. An efficient, automated, and interpretable prediction model is therefore urgently needed to improve treatment outcomes for DME patients in telemedicine settings. In this study, we propose a multimodal algorithm based on a semi-supervised learning framework that combines optical coherence tomography (OCT) images and clinical data to automatically predict patients' VA after anti-VEGF treatment. Our approach first performs retinal segmentation of OCT images via a semi-supervised learning framework and then extracts key biomarkers such as central subfield thickness (CST) from the segmentation. These features are combined with the patient's clinical data and fed into a multimodal learning algorithm for VA prediction. Our model performed well in the Asia Pacific Tele-Ophthalmology Society (APTOS) Big Data Competition, placing fifth overall and third in VA prediction accuracy. Retinal segmentation achieved an accuracy of 99.03 ± 0.19% on the HZO dataset. This multimodal algorithmic framework is valuable in the telemedicine context, especially for the treatment of DME patients.
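The pipeline the abstract describes, deriving a CST biomarker from a segmentation mask and fusing it with clinical data for VA regression, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the pixel spacing, the choice of clinical features, and the linear regressor standing in for the multimodal network are all hypothetical.

```python
def central_subfield_thickness(mask, px_height_um, center_frac=0.2):
    """Estimate a CST-like biomarker from a binary retinal segmentation mask.

    mask: 2D list of 0/1 values (rows x columns) produced by the
          segmentation step; 1 marks retina pixels.
    px_height_um: axial size of one pixel in micrometers (scanner-dependent
          assumption).
    Per-column thickness = retina-pixel count * pixel height; the result is
    averaged over a central band of columns around the assumed foveal center.
    """
    n_cols = len(mask[0])
    half = max(1, int(n_cols * center_frac / 2))
    center = n_cols // 2
    thicknesses = [
        sum(row[j] for row in mask) * px_height_um
        for j in range(center - half, center + half)
    ]
    return sum(thicknesses) / len(thicknesses)


def predict_va(cst_um, clinical, weights, bias):
    """Toy linear stand-in for the multimodal VA regressor: the
    segmentation-derived biomarker is concatenated with clinical features
    (e.g. baseline VA, age, prior injection count) and mapped to a
    post-treatment VA estimate."""
    features = [cst_um] + list(clinical)
    return bias + sum(w * x for w, x in zip(weights, features))
```

In the paper the final step is a learned multimodal network rather than a fixed linear map; the sketch only shows how image-derived and tabular features meet in one feature vector.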
Source journal

Electronics (Computer Science: Computer Networks and Communications)
CiteScore: 1.10
Self-citation rate: 10.30%
Annual article count: 3515
Average review time: 16.71 days
Journal description: Electronics (ISSN 2079-9292; CODEN: ELECGJ) is an international, open access journal on the science of electronics and its applications published quarterly online by MDPI.