Deep-Learning-Based Group Pointwise Spatial Mapping of Structure to Function in Glaucoma

Zhiqi Chen PhD, Hiroshi Ishikawa MD, Yao Wang PhD, Gadi Wollstein MD, Joel S. Schuman MD

Ophthalmology Science, published April 2, 2024. DOI: 10.1016/j.xops.2024.100523
{"title":"Deep-Learning-Based Group Pointwise Spatial Mapping of Structure to Function in Glaucoma","authors":"Zhiqi Chen PhD , Hiroshi Ishikawa MD , Yao Wang PhD , Gadi Wollstein MD , Joel S. Schuman MD","doi":"10.1016/j.xops.2024.100523","DOIUrl":null,"url":null,"abstract":"<div><h3>Purpose</h3><p>To establish generalizable pointwise spatial relationship between structure and function through occlusion analysis of a deep-learning (DL) model for predicting the visual field (VF) sensitivities from 3-dimensional (3D) OCT scan.</p></div><div><h3>Design</h3><p>Retrospective cross-sectional study.</p></div><div><h3>Participants</h3><p>A total of 2151 eyes from 1129 patients.</p></div><div><h3>Methods</h3><p>A DL model was trained to predict 52 VF sensitivities of 24-2 standard automated perimetry from 3D spectral-domain OCT images of the optic nerve head (ONH) with 12 915 OCT-VF pairs. Using occlusion analysis, the contribution of each individual cube covering a 240 × 240 × 31.25 μm region of the ONH to the model's prediction was systematically evaluated for each OCT-VF pair in a separate test set that consisted of 996 OCT-VF pairs. After simple translation (shifting in x- and y-axes to match the ONH center), group t-statistic maps were derived to visualize statistically significant ONH regions for each VF test point within a group. This analysis allowed for understanding the importance of each super voxel (240 × 240 × 31.25 μm covering the entire 4.32 × 4.32 × 1.125 mm ONH cube) in predicting VF test points for specific patient groups.</p></div><div><h3>Main Outcome Measures</h3><p>The region at the ONH corresponding to each VF test point and the effect of the former on the latter.</p></div><div><h3>Results</h3><p>The test set was divided to 2 groups, the healthy-to-early-glaucoma group (792 OCT-VF pairs, VF mean deviation [MD]: −1.32 ± 1.90 decibels [dB]) and the moderate-to-advanced-glaucoma group (204 OCT-VF pairs, VF MD: −17.93 ± 7.68 dB). Two-dimensional group t-statistic maps (x, y projection) were generated for both groups, assigning related ONH regions to visual field test points. The identified influential structural locations for VF sensitivity prediction at each test point aligned well with existing knowledge and understanding of structure-function spatial relationships.</p></div><div><h3>Conclusions</h3><p>This study successfully visualized the global trend of point-by-point spatial relationships between OCT-based structure and VF-based function without the need for prior knowledge or segmentation of OCTs. The revealed spatial correlations were consistent with previously published mappings. 
This presents possibilities of learning from trained machine learning models without applying any prior knowledge, potentially robust, and free from bias.</p></div><div><h3>Financial Disclosure(s)</h3><p>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</p></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666914524000599/pdfft?md5=65047abf0529d3473597b4e65c7cb8ae&pid=1-s2.0-S2666914524000599-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ophthalmology science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666914524000599","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
引用次数: 0
Abstract
Purpose
To establish a generalizable pointwise spatial relationship between structure and function through occlusion analysis of a deep-learning (DL) model that predicts visual field (VF) sensitivities from 3-dimensional (3D) OCT scans.
Design
Retrospective cross-sectional study.
Participants
A total of 2151 eyes from 1129 patients.
Methods
A DL model was trained to predict the 52 VF sensitivities of 24-2 standard automated perimetry from 3D spectral-domain OCT images of the optic nerve head (ONH), using 12 915 OCT-VF pairs. Using occlusion analysis, the contribution of each individual cube covering a 240 × 240 × 31.25 μm region of the ONH to the model's prediction was systematically evaluated for each OCT-VF pair in a separate test set of 996 OCT-VF pairs. After simple translation (shifting along the x- and y-axes to match the ONH center), group t-statistic maps were derived to visualize the ONH regions that contributed significantly to the prediction of each VF test point within a group. This analysis quantified the importance of each supervoxel (240 × 240 × 31.25 μm; together tiling the entire 4.32 × 4.32 × 1.125 mm ONH cube) in predicting each VF test point for specific patient groups.
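The occlusion step itself reduces to a simple loop. Below is a minimal sketch, assuming a trained PyTorch model that maps a 3D ONH volume to the 52 VF sensitivities; the 18 × 18 × 36 supervoxel grid follows from the dimensions stated above (4.32 mm / 240 μm in x and y, 1.125 mm / 31.25 μm in depth), while the model interface and the use of the volume mean as the occlusion fill value are illustrative assumptions, not the paper's stated implementation.

```python
import torch

@torch.no_grad()
def occlusion_maps(model, volume, grid=(18, 18, 36), n_points=52):
    """volume: (1, 1, X, Y, Z) OCT tensor. Returns an (18, 18, 36, 52)
    tensor of prediction drops, one per occluded cube and VF test point."""
    baseline = model(volume)                    # (1, 52) predicted VF
    X, Y, Z = volume.shape[2:]
    nx, ny, nz = grid                           # supervoxel grid
    sx, sy, sz = X // nx, Y // ny, Z // nz      # cube size in voxels
    fill = volume.mean()                        # assumed occlusion value
    maps = torch.zeros(nx, ny, nz, n_points)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                occluded = volume.clone()
                occluded[:, :, i*sx:(i+1)*sx,
                         j*sy:(j+1)*sy,
                         k*sz:(k+1)*sz] = fill
                # Contribution = drop in predicted sensitivity when
                # this cube is hidden from the model.
                maps[i, j, k] = (baseline - model(occluded)).squeeze(0)
    return maps
```

Running this once per OCT-VF pair yields one contribution map per eye; these maps are then translated to a common ONH center before any group statistics are computed.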
Main Outcome Measures
The region of the ONH corresponding to each VF test point, and the effect of that region on the predicted sensitivity at that point.
Results
The test set was divided into 2 groups: the healthy-to-early-glaucoma group (792 OCT-VF pairs; VF mean deviation [MD], −1.32 ± 1.90 decibels [dB]) and the moderate-to-advanced-glaucoma group (204 OCT-VF pairs; VF MD, −17.93 ± 7.68 dB). Two-dimensional group t-statistic maps (x-y projection) were generated for both groups, assigning related ONH regions to VF test points. The influential structural locations identified for VF sensitivity prediction at each test point aligned well with existing knowledge of structure-function spatial relationships.
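As a rough illustration of how such 2D maps could be derived from the per-eye occlusion maps, the sketch below applies a per-supervoxel one-sample t-test within a group and collapses depth with a maximum projection. The test against zero and the choice of a max projection are assumptions for illustration; the abstract does not specify these details.

```python
import numpy as np
from scipy import stats

def group_t_maps(aligned_maps):
    """aligned_maps: (n_eyes, 18, 18, 36, 52) array of occlusion maps,
    already translated so the ONH centers coincide across eyes.
    Returns an (18, 18, 52) x-y t-statistic map per VF test point."""
    # Per-supervoxel, per-test-point one-sample t-test against zero.
    t = stats.ttest_1samp(aligned_maps, popmean=0.0, axis=0).statistic
    # Collapse depth with a max projection to obtain the 2D (x, y) map;
    # nanmax skips supervoxels with zero variance across eyes.
    return np.nanmax(t, axis=2)
```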
Conclusions
This study successfully visualized the global trend of point-by-point spatial relationships between OCT-based structure and VF-based function without the need for prior knowledge or segmentation of OCTs. The revealed spatial correlations were consistent with previously published mappings. This demonstrates the possibility of learning structure-function relationships from trained machine learning models without applying any prior knowledge, an approach that is potentially robust and free from bias.
Financial Disclosure(s)
Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.