Background/Aim To train and validate a deep learning (DL) model that predicts the central 10° visual field (VF) from spectral-domain optical coherence tomography (SD-OCT).
Methods This multicentre, cross-sectional study included paired Humphrey field analyser (HFA) 10-2 VF and SD-OCT measurements from 591 eyes of 347 subjects (patients with open-angle glaucoma (OAG) or normal subjects) as the training data set. We trained a convolutional neural network (CNN) to predict VF threshold (TH) sensitivity values from the thicknesses of three macular layers: the retinal nerve fibre layer, the ganglion cell layer + inner plexiform layer and the outer segment + retinal pigment epithelium. We implemented pattern-based regularisation on top of the CNN to avoid overfitting. Using an external testing data set of 160 eyes of 131 patients with OAG, prediction performance (absolute error (AE) and R² between predicted and actual TH values) was calculated for (1) the mean TH of the whole VF and (2) each of the 68 TH points. For comparison, we trained a support vector machine (SVM) and a multiple linear regression (MLR) model.
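The abstract does not detail the models, but the MLR baseline it names can be illustrated in a few lines: each of the 68 TH points is regressed on flattened macular-thickness features. The sketch below is a hypothetical illustration only, with made-up shapes, synthetic data and a single least-squares solve standing in for the per-point regressions; it is not the authors' pipeline.

```python
# Hypothetical sketch of an MLR baseline: regress all 68 VF threshold (TH)
# values on flattened thickness features from three macular layers.
# All sizes and data below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_eyes, n_features, n_points = 400, 3 * 64, 68   # assumed sizes, not from the study
X = rng.normal(size=(n_eyes, n_features))        # thickness features per eye
W_true = rng.normal(size=(n_features, n_points))
Y = X @ W_true + 0.1 * rng.normal(size=(n_eyes, n_points))  # TH values (dB)

# One linear model per VF point, solved jointly by least squares
X1 = np.hstack([X, np.ones((n_eyes, 1))])        # append intercept column
coef, *_ = np.linalg.lstsq(X1, Y, rcond=None)
Y_hat = X1 @ coef                                # predicted TH values
print(np.abs(Y - Y_hat).mean())                  # point-wise mean absolute error
```

A CNN replaces the linear map with stacked convolutions over the thickness maps, which is what the study found to predict TH values substantially better than this kind of linear fit.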
Results The AE of the whole VF with the CNN was 2.84±2.98 dB (mean±SD), significantly smaller than with the SVM (5.65±5.12 dB) and MLR (6.96±5.38 dB) (all p<0.001). The mean of the point-wise mean AE with the CNN was 5.47±3.05 dB, significantly smaller than with the SVM (7.96±4.63 dB) and MLR (11.71±4.15 dB) (all p<0.001). R² with the CNN was 0.74 for the mean TH of the whole VF and 0.44±0.24 across the 68 points.
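The two evaluation metrics reported above, absolute error between predicted and actual TH values and R², are standard and easy to state exactly. A minimal sketch with entirely made-up sensitivity values (not study data):

```python
# Mean absolute error (AE) and R^2 between actual and predicted threshold
# (TH) sensitivities, as used to evaluate the models in the abstract.

def mean_absolute_error(actual, predicted):
    # Mean of |actual - predicted| over all points
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    # 1 - (residual sum of squares) / (total sum of squares)
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

actual = [30.0, 28.5, 25.0, 31.2, 27.8]      # dB, hypothetical values
predicted = [29.1, 27.9, 26.2, 30.5, 28.4]   # dB, hypothetical values
print(round(mean_absolute_error(actual, predicted), 3))
print(round(r_squared(actual, predicted), 3))
```

In the study these were computed both for the mean TH of the whole field and point-wise over the 68 test locations.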
Conclusion The DL model predicted the HFA 10-2 VF from SD-OCT with considerable accuracy.
- visual fields
- optical coherence
- machine learning
- deep learning
Correction notice This paper has been corrected since it was published online. Figures 4 to 6 were mixed up and we have now updated them.
Contributors YH and RA had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: YH, RA, TK, HS and KY. Acquisition, analysis and interpretation of data: YH, RA, TK, SA, HM, YF, MM, AM, KM, YI, TK, JY, KI, MT and KY. Drafting of the manuscript: YH and RA. Critical revision of the manuscript for important intellectual content: YH and RA. Statistical analysis: YH and RA.
Funding RA: The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant numbers 18KK0253, 19H01114 and 17K11418); the Daiichi Sankyo Foundation of Life Science, Tokyo, Japan; Suzuken Memorial Foundation, Tokyo, Japan; The Translational Research Program; Strategic Promotion for practical application of Innovative medical Technology (TR-SPRINT) from Japan Agency for Medical Research and Development (AMED); JST-AIP JPMJCR19U4. HM: The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant number 25861618). YF: The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant number 20768254). MM: The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant number 00768351). KY: JST-AIP JPMJCR19U4 and The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant number 19H01114).
Competing interests None declared.
Data sharing statement Data are available upon reasonable request.
Provenance and peer review Not commissioned; externally peer reviewed.