The advent of mobile health (mHealth) applications has transformed the healthcare landscape, particularly in ophthalmology, by creating new opportunities for remote diagnosis, monitoring, and treatment. Ocular surface diseases, including dry eye disease (DED), are among the most common eye conditions amenable to detection through mHealth applications. However, most remote artificial intelligence (AI) systems for ocular surface disease detection rely on self-reported data collected through interviews, which lacks the rigor of clinical evidence. These constraints underscore the need for robust, evidence-based AI frameworks that incorporate objective health indicators to improve the reliability and clinical utility of remote health applications.
Two novel deep learning (DL) models, YoloTR and YoloMBTR, were developed to detect key ocular surface indicators (OSIs), including tear meniscus height (TMH), non-invasive Keratograph break-up time (NIKBUT), ocular redness, lipid layer, and trichiasis. In addition, a backpropagation neural network (BPNN) and a U-Net (universal network for image segmentation) were employed to classify and segment meibomian gland images for predicting Demodex mite infestation. These models were trained on a large dataset of images acquired with the high-resolution Keratograph 5M (K5M) and with smartphones from several manufacturers (Huawei, Apple, and Xiaomi).
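To make the two pipelines concrete, the sketch below shows how such models could be invoked at inference time. It is a minimal illustration, assuming the ultralytics YOLO API and the segmentation_models_pytorch U-Net as stand-ins for the study's custom models; the checkpoint file, class labels, and U-Net configuration are hypothetical placeholders, not the paper's released code.

```python
# Minimal inference sketch. Assumptions: the ultralytics package supplies the
# detector API and segmentation_models_pytorch supplies the U-Net; the file
# "yolotr_osi.pt" and the class labels are hypothetical placeholders, not
# artifacts released by this study.
import torch
import segmentation_models_pytorch as smp
from ultralytics import YOLO

# --- OSI detection (stand-in for YoloTR/YoloMBTR) ---
detector = YOLO("yolotr_osi.pt")                  # hypothetical fine-tuned checkpoint
results = detector("eye_smartphone.jpg", conf=0.25)
for r in results:
    for box in r.boxes:
        label = detector.names[int(box.cls)]      # e.g. "TMH", "trichiasis" (placeholder labels)
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{label}: conf={float(box.conf):.2f} box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")

# --- Meibomian gland segmentation (stand-in for the paper's U-Net) ---
unet = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=3, classes=1)
unet.eval()
meibography = torch.rand(1, 3, 256, 256)          # placeholder meibography batch
with torch.no_grad():
    gland_mask = torch.sigmoid(unet(meibography)) > 0.5  # binary gland mask
```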
The proposed YoloTR and YoloMBTR models outperformed baseline You Only Look Once (YOLO) models (YOLOv5n, YOLOv6n, and YOLOv8n) across multiple performance metrics, including test average precision (AP), validation AP, and overall accuracy. The two models also exhibited superior performance to the K5M's built-in analysis modules when benchmarked against the gold standard. Using Python's Matplotlib for visualization and SPSS for statistical analysis, this study introduces a proof-of-concept framework that leverages quantitative AI analysis to address critical challenges in ophthalmology. By integrating advanced DL models, the framework offers a robust approach for detecting and quantifying OSIs with high precision. This methodological advance bridges the gap between AI-driven diagnostics and clinical ophthalmology by translating complex ocular data into actionable insights.
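For readers reproducing such comparisons, a minimal Matplotlib sketch along the following lines could be used; the AP values below are illustrative placeholders, not the results reported in this study.

```python
# Grouped bar chart comparing detectors; all AP values are illustrative
# placeholders, NOT the study's reported results.
import numpy as np
import matplotlib.pyplot as plt

models  = ["YOLOv5n", "YOLOv6n", "YOLOv8n", "YoloTR", "YoloMBTR"]
test_ap = [0.70, 0.72, 0.74, 0.80, 0.82]   # placeholder values
val_ap  = [0.69, 0.71, 0.73, 0.79, 0.81]   # placeholder values

x, width = np.arange(len(models)), 0.35
fig, ax = plt.subplots(figsize=(7, 4))
ax.bar(x - width / 2, test_ap, width, label="Test AP")
ax.bar(x + width / 2, val_ap, width, label="Validation AP")
ax.set_xticks(x)
ax.set_xticklabels(models)
ax.set_ylabel("Average precision")
ax.set_ylim(0, 1)
ax.legend()
fig.tight_layout()
plt.show()
```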
Integrating AI with clinical laboratory data holds significant potential for advancing mobile eye health (MeHealth), particularly in detecting OSIs. By exploring this integration, this study demonstrates how AI-driven tools can improve diagnostic accuracy and accessibility in ophthalmic practice, paving the way for reliable, evidence-based solutions in remote patient monitoring and continuous care. The results lay the groundwork for AI-powered health systems that extend beyond ophthalmology, improving healthcare accessibility and patient outcomes across domains.