Charles Scales, John Bai, David Murakami, Joshua Young, Daniel Cheng, Preeya Gupta, Casey Claypool, Edward Holland, David Kading, Whitney Hauser, Leslie O'Dell, Eugene Osae, Caroline A Blackie
{"title":"内部验证的卷积神经网络管道评估睑板腺结构从睑板摄影。","authors":"Charles Scales, John Bai, David Murakami, Joshua Young, Daniel Cheng, Preeya Gupta, Casey Claypool, Edward Holland, David Kading, Whitney Hauser, Leslie O'Dell, Eugene Osae, Caroline A Blackie","doi":"10.1097/OPX.0000000000002208","DOIUrl":null,"url":null,"abstract":"<p><strong>Significance: </strong>Optimal meibography utilization and interpretation are hindered due to poor lid presentation, blurry images, or image artifacts and the challenges of applying clinical grading scales. These results, using the largest image dataset analyzed to date, demonstrate development of algorithms that provide standardized, real-time inference that addresses all of these limitations.</p><p><strong>Purpose: </strong>This study aimed to develop and validate an algorithmic pipeline to automate and standardize meibomian gland absence assessment and interpretation.</p><p><strong>Methods: </strong>A total of 143,476 images were collected from sites across North America. Ophthalmologist and optometrist experts established ground-truth image quality and quantification (i.e., degree of gland absence). Annotated images were allocated into training, validation, and test sets. Convolutional neural networks within Google Cloud VertexAI trained three locally deployable or edge-based predictive models: image quality detection, over-flip detection, and gland absence detection. The algorithms were combined into an algorithmic pipeline onboard a LipiScan Dynamic Meibomian Imager to provide real-time clinical inference for new images. Performance metrics were generated for each algorithm in the pipeline onboard the LipiScan from naive image test sets.</p><p><strong>Results: </strong>Individual model performance metrics included the following: weighted average precision (image quality detection: 0.81, over-flip detection: 0.88, gland absence detection: 0.84), weighted average recall (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), weighted average F1 score (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.81), overall accuracy (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), Cohen κ (image quality detection: 0.60, over-flip detection: 0.62, and gland absence detection: 0.71), Kendall τb (image quality detection: 0.61, p<0.001, over-flip detection: 0.63, p<0.001, and gland absence detection: 0.67, p<001), and Matthews coefficient (image quality detection: 0.61, over-flip detection: 0.63, and gland absence detection: 0.62). Area under the precision-recall curve (image quality detection: 0.87 over-flip detection: 0.92, gland absence detection: 0.89) and area under the receiver operating characteristic curve (image quality detection: 0.88, over-flip detection: 0.91 gland absence detection: 0.93) were calculated across a common set of thresholds, ranging from 0 to 1.</p><p><strong>Conclusions: </strong>Comparison of predictions from each model to expert panel ground-truth demonstrated strong association and moderate to substantial agreement. 
The findings and performance metrics show that the pipeline of algorithms provides standardized, real-time inference/prediction of meibomian gland absence.</p>","PeriodicalId":19649,"journal":{"name":"Optometry and Vision Science","volume":" ","pages":"28-36"},"PeriodicalIF":1.6000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Internal validation of a convolutional neural network pipeline for assessing meibomian gland structure from meibography.\",\"authors\":\"Charles Scales, John Bai, David Murakami, Joshua Young, Daniel Cheng, Preeya Gupta, Casey Claypool, Edward Holland, David Kading, Whitney Hauser, Leslie O'Dell, Eugene Osae, Caroline A Blackie\",\"doi\":\"10.1097/OPX.0000000000002208\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Significance: </strong>Optimal meibography utilization and interpretation are hindered due to poor lid presentation, blurry images, or image artifacts and the challenges of applying clinical grading scales. These results, using the largest image dataset analyzed to date, demonstrate development of algorithms that provide standardized, real-time inference that addresses all of these limitations.</p><p><strong>Purpose: </strong>This study aimed to develop and validate an algorithmic pipeline to automate and standardize meibomian gland absence assessment and interpretation.</p><p><strong>Methods: </strong>A total of 143,476 images were collected from sites across North America. Ophthalmologist and optometrist experts established ground-truth image quality and quantification (i.e., degree of gland absence). Annotated images were allocated into training, validation, and test sets. Convolutional neural networks within Google Cloud VertexAI trained three locally deployable or edge-based predictive models: image quality detection, over-flip detection, and gland absence detection. The algorithms were combined into an algorithmic pipeline onboard a LipiScan Dynamic Meibomian Imager to provide real-time clinical inference for new images. Performance metrics were generated for each algorithm in the pipeline onboard the LipiScan from naive image test sets.</p><p><strong>Results: </strong>Individual model performance metrics included the following: weighted average precision (image quality detection: 0.81, over-flip detection: 0.88, gland absence detection: 0.84), weighted average recall (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), weighted average F1 score (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.81), overall accuracy (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), Cohen κ (image quality detection: 0.60, over-flip detection: 0.62, and gland absence detection: 0.71), Kendall τb (image quality detection: 0.61, p<0.001, over-flip detection: 0.63, p<0.001, and gland absence detection: 0.67, p<001), and Matthews coefficient (image quality detection: 0.61, over-flip detection: 0.63, and gland absence detection: 0.62). 
Area under the precision-recall curve (image quality detection: 0.87 over-flip detection: 0.92, gland absence detection: 0.89) and area under the receiver operating characteristic curve (image quality detection: 0.88, over-flip detection: 0.91 gland absence detection: 0.93) were calculated across a common set of thresholds, ranging from 0 to 1.</p><p><strong>Conclusions: </strong>Comparison of predictions from each model to expert panel ground-truth demonstrated strong association and moderate to substantial agreement. The findings and performance metrics show that the pipeline of algorithms provides standardized, real-time inference/prediction of meibomian gland absence.</p>\",\"PeriodicalId\":19649,\"journal\":{\"name\":\"Optometry and Vision Science\",\"volume\":\" \",\"pages\":\"28-36\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2025-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optometry and Vision Science\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/OPX.0000000000002208\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/13 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optometry and Vision Science","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/OPX.0000000000002208","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/13 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Internal validation of a convolutional neural network pipeline for assessing meibomian gland structure from meibography.
Significance: Optimal meibography utilization and interpretation are hindered by poor lid presentation, blurry images, or image artifacts, as well as the challenges of applying clinical grading scales. These results, based on the largest meibography image dataset analyzed to date, demonstrate the development of algorithms that provide standardized, real-time inference addressing all of these limitations.
Purpose: This study aimed to develop and validate an algorithmic pipeline to automate and standardize meibomian gland absence assessment and interpretation.
Methods: A total of 143,476 images were collected from sites across North America. Ophthalmologist and optometrist experts established ground-truth image quality and quantification (i.e., degree of gland absence). Annotated images were allocated into training, validation, and test sets. Convolutional neural networks within Google Cloud VertexAI trained three locally deployable or edge-based predictive models: image quality detection, over-flip detection, and gland absence detection. The algorithms were combined into an algorithmic pipeline onboard a LipiScan Dynamic Meibomian Imager to provide real-time clinical inference for new images. Performance metrics were generated from naive (previously unseen) image test sets for each algorithm in the pipeline onboard the LipiScan.
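For readers who want a concrete picture of such a staged pipeline, the sketch below chains three edge classifiers in the order implied above (quality check, over-flip check, then gland absence grading). It is a minimal illustration only: the TensorFlow Lite export, model paths, class indices, and rejection logic are assumptions for the example, not the authors' actual onboard implementation.

```python
# Hypothetical sketch of a three-stage meibography inference pipeline.
# Assumes the Vertex AI edge models were exported to TFLite; paths, class
# indices, and the ordinal grade encoding are illustrative placeholders.
import numpy as np
import tensorflow as tf

def load_edge_model(path: str) -> tf.lite.Interpreter:
    """Load a locally deployable (edge) classifier as a TFLite interpreter."""
    interpreter = tf.lite.Interpreter(model_path=path)
    interpreter.allocate_tensors()
    return interpreter

def predict(interpreter: tf.lite.Interpreter, image: np.ndarray) -> np.ndarray:
    """Run one preprocessed image through a classifier; return class probabilities."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], image[np.newaxis, ...].astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]

def grade_meibography(image: np.ndarray, quality_m, flip_m, absence_m) -> dict:
    """Chain the three classifiers: reject poor-quality or over-flipped images
    before predicting the degree of gland absence (ordinal grade, assumed)."""
    if predict(quality_m, image).argmax() == 0:      # class 0 = inadequate quality (assumed)
        return {"status": "reject", "reason": "poor image quality"}
    if predict(flip_m, image).argmax() == 1:         # class 1 = over-flipped lid (assumed)
        return {"status": "reject", "reason": "over-flipped lid"}
    grade = int(predict(absence_m, image).argmax())  # predicted gland absence grade
    return {"status": "ok", "gland_absence_grade": grade}
```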
Results: Individual model performance metrics included the following: weighted average precision (image quality detection: 0.81, over-flip detection: 0.88, gland absence detection: 0.84), weighted average recall (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), weighted average F1 score (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.81), overall accuracy (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), Cohen κ (image quality detection: 0.60, over-flip detection: 0.62, and gland absence detection: 0.71), Kendall τb (image quality detection: 0.61, p<0.001, over-flip detection: 0.63, p<0.001, and gland absence detection: 0.67, p<0.001), and Matthews coefficient (image quality detection: 0.61, over-flip detection: 0.63, and gland absence detection: 0.62). Area under the precision-recall curve (image quality detection: 0.87, over-flip detection: 0.92, gland absence detection: 0.89) and area under the receiver operating characteristic curve (image quality detection: 0.88, over-flip detection: 0.91, gland absence detection: 0.93) were calculated across a common set of thresholds, ranging from 0 to 1.
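The agreement and discrimination statistics listed above are standard and can be reproduced for any classifier with off-the-shelf tooling. The snippet below is an illustrative calculation on placeholder labels using scikit-learn and SciPy; the arrays, the number of classes, and the one-vs-rest binarization used for the curve areas are assumptions for the example and do not reflect the study data.

```python
# Illustrative computation of the reported performance/agreement metrics on
# placeholder labels; y_true and y_pred are stand-ins, not the study data.
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import (precision_recall_fscore_support, accuracy_score,
                             cohen_kappa_score, matthews_corrcoef,
                             average_precision_score, roc_auc_score)

y_true = np.array([0, 1, 2, 1, 0, 2, 1, 0])   # expert-panel ground-truth grades (placeholder)
y_pred = np.array([0, 1, 1, 1, 0, 2, 2, 0])   # model predictions (placeholder)

# Class-weighted precision, recall, and F1, plus overall accuracy
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
accuracy = accuracy_score(y_true, y_pred)

# Chance-corrected agreement and rank correlation with the expert panel
kappa = cohen_kappa_score(y_true, y_pred)
tau_b, p_value = kendalltau(y_true, y_pred)   # Kendall tau-b and its p-value
mcc = matthews_corrcoef(y_true, y_pred)

# Curve areas require predicted scores; shown for a one-vs-rest binarization
y_score = np.array([0.9, 0.2, 0.3, 0.1, 0.8, 0.4, 0.6, 0.7])  # P(class 0), placeholder
auprc = average_precision_score((y_true == 0).astype(int), y_score)
auroc = roc_auc_score((y_true == 0).astype(int), y_score)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f} "
      f"accuracy={accuracy:.2f} kappa={kappa:.2f} tau_b={tau_b:.2f} "
      f"mcc={mcc:.2f} auprc={auprc:.2f} auroc={auroc:.2f}")
```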
Conclusions: Comparison of predictions from each model to expert panel ground-truth demonstrated strong association and moderate to substantial agreement. The findings and performance metrics show that the pipeline of algorithms provides standardized, real-time inference/prediction of meibomian gland absence.
Journal introduction:
Optometry and Vision Science is the monthly peer-reviewed scientific publication of the American Academy of Optometry, publishing original research since 1924. Optometry and Vision Science is an internationally recognized source for education and information on current discoveries in optometry, physiological optics, vision science, and related fields. The journal considers original contributions that advance clinical practice, vision science, and public health. Authors should remember that the journal reaches readers worldwide and their submissions should be relevant and of interest to a broad audience. Topical priorities include, but are not limited to: clinical and laboratory research, evidence-based reviews, contact lenses, ocular growth and refractive error development, eye movements, visual function and perception, biology of the eye and ocular disease, epidemiology and public health, biomedical optics and instrumentation, novel and important clinical observations and treatments, and optometric education.