
Deep learning model to predict visual field in central 10° from optical coherence tomography measurement in glaucoma
  1. Yohei Hashimoto1,
  2. Ryo Asaoka2,3,4,
  3. Taichi Kiwaki5,
  4. Hiroki Sugiura6,
  5. Shotaro Asano5,
  6. Hiroshi Murata1,
  7. Yuri Fujino3,7,
  8. Masato Matsuura8,
  9. Atsuya Miki9,
  10. Kazuhiko Mori10,
  11. Yoko Ikeda10,
  12. Takashi Kanamoto11,
  13. Junkichi Yamagami12,
  14. Kenji Inoue13,
  15. Masaki Tanito14,
  16. Kenji Yamanishi6
  1. Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
  2. Department of Ophthalmology, University of Tokyo Graduate School of Medicine, Tokyo, Japan
  3. Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
  4. Seirei Christopher University, Hamamatsu, Shizuoka, Japan
  5. Graduate School of Information Science and Technology, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
  6. Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
  7. Ophthalmology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
  8. University of Tokyo, Tokyo, Japan
  9. Ophthalmology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
  10. Department of Ophthalmology, Kyoto Prefectural University of Medicine, Kyoto, Japan
  11. Ophthalmology, Hiroshima University, Higashihiroshima, Hiroshima, Japan
  12. Ophthalmology, JR Tokyo General Hospital, Tokyo, Japan
  13. Ophthalmology, Inouye Eye Hospital, Tokyo, Japan
  14. Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
  1. Correspondence to Ryo Asaoka, Department of Ophthalmology, The University of Tokyo 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; ryoasa0120{at}mac.com and Kenji Yamanishi, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan; yamanishi{at}mist.i.u-tokyo.ac.jp

Abstract

Background/Aim To train a deep learning (DL) model that predicts the visual field (VF) in the central 10° from spectral-domain optical coherence tomography (SD-OCT) measurements, and to validate its prediction performance.

Methods This multicentre, cross-sectional study included paired Humphrey field analyser (HFA) 10-2 VF and SD-OCT measurements from 591 eyes of 347 patients with open-angle glaucoma (OAG) or normal subjects for the training data set. We trained a convolutional neural network (CNN) to predict VF threshold (TH) sensitivity values from the thicknesses of three macular layers: the retinal nerve fibre layer, the ganglion cell layer+inner plexiform layer and the outer segment+retinal pigment epithelium. We implemented pattern-based regularisation on top of the CNN to avoid overfitting. Using an external testing data set of 160 eyes of 131 patients with OAG, the prediction performance (absolute error (AE) and R2 between predicted and actual TH values) was calculated for (1) the mean TH of the whole VF and (2) each TH of the 68 test points. For comparison, we trained a support vector machine (SVM) and multiple linear regression (MLR).
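To make the mapping concrete, a minimal forward-pass sketch of the kind of model described above is shown below. The actual architecture, grid resolution and the pattern-based regularisation used in the paper are not specified in this abstract; all layer sizes here (a 32×32 thickness grid, one convolutional layer, one fully connected head) are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    # x: (C, H, W) input maps; w: (F, C, k, k) filters.
    # Valid convolution, stride 1.
    F, C, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((F, H, W))
    for f in range(F):
        for i in range(H):
            for j in range(W):
                out[f, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[f])
    return out

# Hypothetical input: 3 macular layer thickness maps
# (RNFL, GCL+IPL, OS+RPE) resampled to a 32x32 grid.
x = rng.random((3, 32, 32))

w1 = rng.standard_normal((8, 3, 5, 5)) * 0.1   # untrained conv filters
h = np.maximum(conv2d(x, w1), 0)               # ReLU features: (8, 28, 28)
h = h.reshape(-1)
w2 = rng.standard_normal((68, h.size)) * 0.01  # fully connected head
pred = w2 @ h                                  # 68 predicted TH values (dB)
print(pred.shape)  # (68,)
```

In practice the weights would be fitted on the 591-eye training set; the point of the sketch is only the input/output shape: three co-registered thickness maps in, one TH value per 10-2 test point out.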

Results The AE of the whole-VF mean TH with the CNN was 2.84±2.98 (mean±SD) dB, significantly smaller than with SVM (5.65±5.12 dB) and MLR (6.96±5.38 dB) (both p<0.001). The mean point-wise AE with the CNN was 5.47±3.05 dB, significantly smaller than with SVM (7.96±4.63 dB) and MLR (11.71±4.15 dB) (both p<0.001). R2 with the CNN was 0.74 for the mean TH of the whole VF, and 0.44±0.24 across the 68 points.
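The two evaluation views reported above (whole-VF mean TH per eye, and per-point AE averaged over eyes) can be computed as follows. The data here are synthetic and illustrative; the R2 shown is the coefficient of determination, which is one common definition and may differ from the exact statistic the authors used.

```python
import numpy as np

def r_squared(pred, actual):
    # Coefficient of determination (one common R2 definition).
    ss_res = np.sum((actual - pred) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Toy data: 160 eyes x 68 test points, values in dB (illustrative only).
rng = np.random.default_rng(1)
actual = rng.uniform(0, 35, size=(160, 68))
pred = actual + rng.normal(0, 3, size=(160, 68))

# (1) AE of the mean TH of the whole VF, one value per eye.
ae_whole = np.abs(pred.mean(axis=1) - actual.mean(axis=1))

# (2) point-wise AE, averaged over eyes at each of the 68 points.
ae_point = np.abs(pred - actual).mean(axis=0)

r2_whole = r_squared(pred.mean(axis=1), actual.mean(axis=1))
print(ae_whole.shape, ae_point.shape)
```

Averaging over 68 points before taking the error is why the whole-VF AE in the paper (2.84 dB) is much smaller than the point-wise AE (5.47 dB): point-level noise largely cancels in the mean.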

Conclusion The DL model predicted the HFA 10-2 VF from SD-OCT measurements with considerable accuracy.

  • Visual fields
  • Glaucoma
  • Imaging
  • Optical coherence tomography
  • Retina
  • Machine learning
  • Deep learning


Footnotes

  • Correction notice This paper has been corrected since it was published online. Figures 4 to 6 were mixed up and we have now updated them.

  • Contributors YH and RA had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: YH, RA, TK, HS and KY. Acquisition, analysis and interpretation of data: YH, RA, TK, SA, HM, YF, MM, AM, KM, YI, TK, JY, KI, MT and KY. Drafting of the manuscript: YH and RA. Critical revision of the manuscript for important intellectual content: YH and RA. Statistical analysis: YH and RA.

  • Funding RA: The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant numbers 18KK0253, 19H01114 and 17K11418); the Daiichi Sankyo Foundation of Life Science, Tokyo, Japan; Suzuken Memorial Foundation, Tokyo, Japan; The Translational Research Program; Strategic Promotion for practical application of Innovative medical Technology (TR-SPRINT) from Japan Agency for Medical Research and Development (AMED); JST-AIP JPMJCR19U4. HM: The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant number 25861618). YF: The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant number 20768254). MM: The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant number 00768351). KY: JST-AIP JPMJCR19U4 and The Ministry of Education, Culture, Sports, Science and Technology of Japan (grant number 19H01114).

  • Competing interests None declared.

  • Data sharing statement Data are available upon reasonable request.

  • Provenance and peer review Not commissioned; externally peer reviewed.
