Leveraging Artificial Intelligence and Clinical Laboratory Evidence to Advance Mobile Health Applications in Ophthalmology: Taking the Ocular Surface Disease as a Case Study

iLABMED Pub Date : 2025-03-12 DOI:10.1002/ila2.70001
Mini Han Wang, Yi Pan, Xudong Jiang, Zhiyuan Lin, Haoyang Liu, Yunxiao Liu, Jiazheng Cui, Jiaxiang Tan, Chengqi Gong, Guanghui Hou, Xiaoxiao Fang, Yang Yu, Moawiya Haddad, Marion Schindler, José Lopes Camilo Da Costa Alves, Junbin Fang, Xiangrong Yu, Kelvin Kam-Lung Chong
{"title":"Leveraging Artificial Intelligence and Clinical Laboratory Evidence to Advance Mobile Health Applications in Ophthalmology: Taking the Ocular Surface Disease as a Case Study","authors":"Mini Han Wang,&nbsp;Yi Pan,&nbsp;Xudong Jiang,&nbsp;Zhiyuan Lin,&nbsp;Haoyang Liu,&nbsp;Yunxiao Liu,&nbsp;Jiazheng Cui,&nbsp;Jiaxiang Tan,&nbsp;Chengqi Gong,&nbsp;Guanghui Hou,&nbsp;Xiaoxiao Fang,&nbsp;Yang Yu,&nbsp;Moawiya Haddad,&nbsp;Marion Schindler,&nbsp;José Lopes Camilo Da Costa Alves,&nbsp;Junbin Fang,&nbsp;Xiangrong Yu,&nbsp;Kelvin Kam-Lung Chong","doi":"10.1002/ila2.70001","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>The advent of mobile health (mHealth) applications has fundamentally transformed the healthcare landscape, particularly within the field of ophthalmology, by providing unprecedented opportunities for remote diagnosis, monitoring, and treatment. Ocular surface diseases, including dry eye disease (DED), are the most common eye diseases that can be detected by mHealth applications. However, most remote artificial intelligence (AI) systems for ocular surface disease detection are predominantly based on self-reported data collected through interviews, which lack the rigor of clinical evidence. These constraints underscore the need to develop robust, evidence-based AI frameworks that incorporate objective health indicators to improve the reliability and clinical utility of remote health applications.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>Two novel deep learning (DL) models, YoloTR and YoloMBTR, were developed to detect key ocular surface indicators (OSIs), including tear meniscus height (TMH), non-invasive Keratograph break-up time (NIKBUT), ocular redness, lipid layer, and trichiasis. 
Additionally, back propagation neural networks (BPNN) and universal network for image segmentation (U-Net) were employed for image classification and segmentation of meibomian gland images to predict Demodex mite infections. These models were trained on a large dataset from high-resolution devices, including Keratograph 5M and various mobile platforms (Huawei, Apple, and Xiaomi).</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>The proposed DL models of YoloMBTR and YoloTR outperformed baseline you only look once (YOLO) models (Yolov5n, Yolov6n, and Yolov8n) across multiple performance metrics, including test average precision (AP), validation AP, and overall accuracy. These two models also exhibit superior performance compared to machine plug-in models in KG5M when benchmarked against the gold standard. Using Python's Matplotlib for visualization and SPSS for statistical analysis, this study introduces an innovative proof-of-concept framework leveraging quantitative AI analysis to address critical challenges in ophthalmology. By integrating advanced DL models, the framework offers a robust approach for detecting and quantifying OSIs with a high degree of precision. This methodological advancement bridges the gap between AI-driven diagnostics and clinical ophthalmology by translating complex ocular data into actionable insights.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>Integrating AI with clinical laboratory data holds significant potential for advancing mobile eye health (MeHealth), particularly in detecting OSIs. This study aims to explore this integration, focusing on improving diagnostic accuracy and accessibility. This study demonstrates the potential of AI-driven tools in ophthalmic diagnostics, paving the way for reliable, evidence-based solutions in remote patient monitoring and continuous care. 
The results contribute to the foundation of AI-powered health systems that can extend beyond ophthalmology, improving healthcare accessibility and patient outcomes across various domains.</p>\n </section>\n </div>","PeriodicalId":100656,"journal":{"name":"iLABMED","volume":"3 1","pages":"64-85"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ila2.70001","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"iLABMED","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ila2.70001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Background

The advent of mobile health (mHealth) applications has fundamentally transformed the healthcare landscape, particularly in ophthalmology, by providing unprecedented opportunities for remote diagnosis, monitoring, and treatment. Ocular surface diseases, including dry eye disease (DED), are among the most common eye diseases detectable by mHealth applications. However, most remote artificial intelligence (AI) systems for ocular surface disease detection rely predominantly on self-reported data collected through interviews, which lack the rigor of clinical evidence. These constraints underscore the need for robust, evidence-based AI frameworks that incorporate objective health indicators to improve the reliability and clinical utility of remote health applications.

Methods

Two novel deep learning (DL) models, YoloTR and YoloMBTR, were developed to detect key ocular surface indicators (OSIs), including tear meniscus height (TMH), non-invasive Keratograph break-up time (NIKBUT), ocular redness, lipid layer, and trichiasis. Additionally, back-propagation neural networks (BPNNs) and the U-Net image-segmentation architecture were employed to classify and segment meibomian gland images in order to predict Demodex mite infection. These models were trained on a large dataset acquired from high-resolution devices, including the Keratograph 5M, and from various mobile platforms (Huawei, Apple, and Xiaomi).
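To illustrate the kind of back-propagation training the abstract refers to, the sketch below trains a minimal BPNN on a synthetic binary task. The architecture, features, and data are hypothetical stand-ins for exposition only, not the authors' actual model, dataset, or gland-image features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary task: classify 2-D feature vectors (imagine summary statistics
# extracted from a meibomian gland image) into infected / not infected.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer of 8 units, trained by full-batch gradient descent
# on squared error (the classic BPNN recipe).
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))
lr = 1.0

for _ in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule through both sigmoid layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)

# Evaluate on the training set (a real study would hold out a test split).
h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
accuracy = float(((out > 0.5) == y).mean())
```

The same gradient-descent loop generalizes to deeper networks and image inputs; production systems would use an autodiff framework rather than hand-written derivatives.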

Results

The proposed DL models, YoloMBTR and YoloTR, outperformed baseline You Only Look Once (YOLO) models (YOLOv5n, YOLOv6n, and YOLOv8n) across multiple performance metrics, including test average precision (AP), validation AP, and overall accuracy. The two models also exhibited superior performance compared with the built-in plug-in models of the Keratograph 5M (KG5M) when benchmarked against the gold standard. Using Python's Matplotlib for visualization and SPSS for statistical analysis, this study introduces an innovative proof-of-concept framework that leverages quantitative AI analysis to address critical challenges in ophthalmology. By integrating advanced DL models, the framework offers a robust approach for detecting and quantifying OSIs with a high degree of precision. This methodological advance bridges the gap between AI-driven diagnostics and clinical ophthalmology by translating complex ocular data into actionable insights.
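For readers unfamiliar with the AP metric reported above, the sketch below computes one common definition of average precision (mean precision at the ranks where true positives occur) from a ranked list of detections. The scores and labels are illustrative; detection benchmarks such as VOC or COCO use closely related but more elaborate (interpolated, IoU-thresholded) variants, and this is not necessarily the exact implementation used in the study.

```python
def average_precision(scores, labels):
    """AP = mean of the precision values at each rank where a true positive occurs.

    scores: confidence score per detection; labels: 1 if the detection
    matches a ground-truth object (true positive), else 0.
    """
    # Rank detections by descending confidence.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = 0
    precisions = []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            precisions.append(tp / rank)  # precision at this rank
    return sum(precisions) / max(sum(labels), 1)

# Five hypothetical detections: confidences and ground-truth match flags.
scores = [0.9, 0.8, 0.7, 0.6, 0.5]
labels = [1, 0, 1, 1, 0]
ap = average_precision(scores, labels)  # (1/1 + 2/3 + 3/4) / 3
```

Reporting both test AP and validation AP, as the study does, helps distinguish genuine generalization from overfitting to the validation split.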

Conclusions

Integrating AI with clinical laboratory data holds significant potential for advancing mobile eye health (MeHealth), particularly in detecting OSIs. This study explored that integration, focusing on improving diagnostic accuracy and accessibility, and demonstrates the potential of AI-driven tools in ophthalmic diagnostics, paving the way for reliable, evidence-based solutions in remote patient monitoring and continuous care. The results contribute to the foundation of AI-powered health systems that can extend beyond ophthalmology, improving healthcare accessibility and patient outcomes across diverse domains.

