Mini Han Wang, Yi Pan, Xudong Jiang, Zhiyuan Lin, Haoyang Liu, Yunxiao Liu, Jiazheng Cui, Jiaxiang Tan, Chengqi Gong, Guanghui Hou, Xiaoxiao Fang, Yang Yu, Moawiya Haddad, Marion Schindler, José Lopes Camilo Da Costa Alves, Junbin Fang, Xiangrong Yu, Kelvin Kam-Lung Chong
iLABMED, vol. 3, no. 1, pp. 64–85. Published 2025-03-12. DOI: 10.1002/ila2.70001
Leveraging Artificial Intelligence and Clinical Laboratory Evidence to Advance Mobile Health Applications in Ophthalmology: Taking the Ocular Surface Disease as a Case Study
Background
The advent of mobile health (mHealth) applications has fundamentally transformed the healthcare landscape, particularly in ophthalmology, by providing unprecedented opportunities for remote diagnosis, monitoring, and treatment. Ocular surface diseases, including dry eye disease (DED), are among the most common eye diseases detectable by mHealth applications. However, most remote artificial intelligence (AI) systems for ocular surface disease detection rely predominantly on self-reported data collected through interviews, which lack the rigor of clinical evidence. These constraints underscore the need for robust, evidence-based AI frameworks that incorporate objective health indicators to improve the reliability and clinical utility of remote health applications.
Methods
Two novel deep learning (DL) models, YoloTR and YoloMBTR, were developed to detect key ocular surface indicators (OSIs), including tear meniscus height (TMH), non-invasive Keratograph break-up time (NIKBUT), ocular redness, lipid layer, and trichiasis. Additionally, backpropagation neural networks (BPNNs) and the U-Net image segmentation architecture were employed to classify and segment meibomian gland images for predicting Demodex mite infections. These models were trained on a large dataset acquired from high-resolution devices, including the Keratograph 5M and various mobile platforms (Huawei, Apple, and Xiaomi).
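As a minimal sketch of how a quantitative indicator such as TMH can be derived from a segmentation output, the snippet below estimates meniscus height from a binary mask. The function name, the central-column measurement, and the pixel scale are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical post-processing step: estimating tear meniscus height (TMH)
# from a binary segmentation mask, as a U-Net-style model might produce.
# Mask layout, column choice, and mm_per_pixel are illustrative assumptions.

def tmh_from_mask(mask, mm_per_pixel):
    """Estimate TMH (in mm) as the longest vertical run of meniscus
    pixels in the central column of a binary mask (rows of 0/1)."""
    col = len(mask[0]) // 2
    best = run = 0
    for row in mask:
        if row[col]:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best * mm_per_pixel

# Toy 6x5 mask: a 3-pixel-tall meniscus band crossing the central column.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(round(tmh_from_mask(mask, mm_per_pixel=0.1), 2))  # 3 px * 0.1 mm -> 0.3
```

In practice the pixel scale would come from device calibration, and the measurement column would be chosen at the pupil center rather than the image midline.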
Results
The proposed YoloMBTR and YoloTR models outperformed baseline You Only Look Once (YOLO) models (YOLOv5n, YOLOv6n, and YOLOv8n) across multiple performance metrics, including test average precision (AP), validation AP, and overall accuracy. When benchmarked against the gold standard, the two models also outperformed the built-in analysis modules of the Keratograph 5M (KG5M). Using Python's Matplotlib for visualization and SPSS for statistical analysis, this study introduces an innovative proof-of-concept framework that leverages quantitative AI analysis to address critical challenges in ophthalmology. By integrating advanced DL models, the framework offers a robust approach for detecting and quantifying OSIs with a high degree of precision. This methodological advancement bridges the gap between AI-driven diagnostics and clinical ophthalmology by translating complex ocular data into actionable insights.
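For readers unfamiliar with the AP metric used above, the sketch below computes AP as the area under an interpolated precision-recall curve, in the style of standard object-detection benchmarks. The detections and labels are toy values, not results from this study.

```python
# Sketch of average precision (AP): area under the interpolated
# precision-recall curve built from confidence-ranked detections.
# Toy inputs only; labels mark whether a detection matched ground truth.

def average_precision(scores, labels, n_positive):
    """AP over detections sorted by descending confidence."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    recalls, precisions = [0.0], [1.0]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        recalls.append(tp / n_positive)
        precisions.append(tp / (tp + fp))
    # Make precision monotonically non-increasing (VOC/COCO-style step).
    for k in range(len(precisions) - 2, -1, -1):
        precisions[k] = max(precisions[k], precisions[k + 1])
    # Integrate the stepped precision-recall curve.
    return sum((recalls[k + 1] - recalls[k]) * precisions[k + 1]
               for k in range(len(recalls) - 1))

scores = [0.9, 0.8, 0.7, 0.6]       # detection confidences
labels = [1, 1, 0, 1]               # 1 = true positive, 0 = false positive
print(round(average_precision(scores, labels, n_positive=3), 4))  # -> 0.9167
```

Benchmark suites additionally average AP over intersection-over-union thresholds and object classes; the single-curve version above captures the core computation.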
Conclusions
Integrating AI with clinical laboratory data holds significant potential for advancing mobile eye health (MeHealth), particularly in detecting OSIs. This study explored that integration, focusing on improving diagnostic accuracy and accessibility. It demonstrates the potential of AI-driven tools in ophthalmic diagnostics, paving the way for reliable, evidence-based solutions in remote patient monitoring and continuous care. The results contribute to the foundation of AI-powered health systems that can extend beyond ophthalmology, improving healthcare accessibility and patient outcomes across various domains.