Dual-input convolutional neural network for glaucoma diagnosis using spectral-domain optical coherence tomography
  1. Sukkyu Sun1,
  2. Ahnul Ha2,3,
  3. Young Kook Kim2,4,
  4. Byeong Wook Yoo5,
  5. Hee Chan Kim1,6,7,
  6. Ki Ho Park2,4
  1. Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, South Korea
  2. Department of Ophthalmology, Seoul National University College of Medicine, Seoul, South Korea
  3. Department of Ophthalmology, Jeju National University Hospital, Jeju-si, South Korea
  4. Department of Ophthalmology, Seoul National University Hospital, Seoul, South Korea
  5. Samsung Research, Samsung Electronics, Seoul, South Korea
  6. Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, South Korea
  7. Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, South Korea
  Correspondence to Hee Chan Kim, Department of Biomedical Engineering, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea (the Republic of); and Ki Ho Park, Department of Ophthalmology, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea (the Republic of); hckim{at} kihopark{at}


Background/Aims To evaluate, with spectral-domain optical coherence tomography (SD-OCT), the glaucoma-diagnostic ability of a deep-learning classifier.

Methods A total of 777 Cirrus high-definition SD-OCT image sets of the retinal nerve fibre layer (RNFL) and ganglion cell-inner plexiform layer (GCIPL) of 315 normal subjects, 219 patients with early-stage primary open-angle glaucoma (POAG) and 243 patients with moderate-to-severe-stage POAG were aggregated. The image sets were divided into a training data set (252 normal, 174 early POAG and 195 moderate-to-severe POAG) and a test data set (63 normal, 45 early POAG and 48 moderate-to-severe POAG). A Visual Geometry Group network (VGG16)-based dual-input convolutional neural network (DICNN) was adopted for glaucoma diagnosis. Unlike other networks, the DICNN structure takes two images (both RNFL and GCIPL) as inputs. The glaucoma-diagnostic ability was assessed by both accuracy and area under the receiver operating characteristic curve (AUC).
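The defining trait of the dual-input design is that the two thickness maps pass through parallel feature-extraction branches whose outputs are fused before a shared classification head. The toy sketch below illustrates that data flow only; it is not the authors' network. Each VGG16 convolutional stack is replaced by a single dense layer, the input size (64 values per flattened map), embedding width and all weights are hypothetical, and the function name `dicnn_forward` is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, w):
    # Toy stand-in for one VGG16 feature-extraction stack:
    # a single dense layer with ReLU activation.
    return np.maximum(x @ w, 0.0)

# Hypothetical shapes: each thickness map is flattened to 64 values
# and embedded to 16 features per branch.
w_rnfl = rng.normal(size=(64, 16))   # weights for the RNFL branch
w_gcipl = rng.normal(size=(64, 16))  # weights for the GCIPL branch
w_head = rng.normal(size=(32, 1))    # shared classification head

def dicnn_forward(rnfl_map, gcipl_map):
    # Two inputs, two parallel branches, features concatenated
    # before a single sigmoid output (glaucoma probability).
    f1 = branch(rnfl_map, w_rnfl)
    f2 = branch(gcipl_map, w_gcipl)
    fused = np.concatenate([f1, f2], axis=-1)  # shape (..., 32)
    logit = fused @ w_head                     # shape (..., 1)
    return 1.0 / (1.0 + np.exp(-logit))

p = dicnn_forward(rng.normal(size=(1, 64)), rng.normal(size=(1, 64)))
```

With random weights the output is of course meaningless; in practice the branches would be VGG16 stacks trained end to end on the labelled image sets described above.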

Results For the test data set, DICNN could distinguish between patients with glaucoma and normal subjects accurately (accuracy=92.793%, AUC=0.957 (95% CI 0.943 to 0.966), sensitivity=0.896 (95% CI 0.896 to 0.917), specificity=0.952 (95% CI 0.921 to 0.952)). For distinguishing between patients with early-stage glaucoma and normal subjects, DICNN's diagnostic ability (accuracy=85.185%, AUC=0.869 (95% CI 0.825 to 0.879), sensitivity=0.921 (95% CI 0.813 to 0.905), specificity=0.756 (95% CI 0.610 to 0.790)) was higher than that of convolutional neural network algorithms trained with RNFL or GCIPL separately.
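The four metrics reported above are standard and can be reproduced from per-subject labels and classifier scores. The sketch below shows one common way to compute them (the article does not specify its implementation): threshold-based accuracy, sensitivity and specificity, plus AUC via the rank (Mann-Whitney U) formulation. The function name and the 0.5 threshold are assumptions for illustration.

```python
def diagnostic_metrics(labels, scores, threshold=0.5):
    """Accuracy, sensitivity and specificity at a threshold, plus ROC AUC.

    labels: 1 = glaucoma, 0 = normal; scores: predicted probabilities.
    """
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    accuracy = (tp + tn) / len(labels)
    sensitivity = tp / (tp + fn)  # true-positive rate among glaucoma cases
    specificity = tn / (tn + fp)  # true-negative rate among normals
    # AUC as the probability that a randomly chosen glaucoma case
    # scores higher than a randomly chosen normal case (ties count 0.5).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    return accuracy, sensitivity, specificity, auc

# Tiny worked example (fabricated scores, not study data):
acc, sens, spec, auc = diagnostic_metrics(
    [1, 1, 1, 0, 0], [0.9, 0.8, 0.4, 0.3, 0.6])
```

The confidence intervals quoted in the abstract would additionally require a resampling or binomial-interval procedure over the test set, which this sketch omits.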

Conclusion The deep-learning algorithm using SD-OCT can distinguish normal subjects not only from patients with established glaucoma but also from patients with early-stage glaucoma. The DICNN deep-learning model, trained on both RNFL and GCIPL thickness map data, showed high diagnostic ability for discriminating patients with early-stage glaucoma from normal subjects.

  • Glaucoma
  • Imaging
  • Macula
  • Optic Nerve



  • SS and AH contributed equally to this work as co-first authors.

  • Contributors Study design: SS, AH, YKK, HCK, KHP. Writing the article: SS, AH. Data collection: AH, YKK, BWY, KHP. Analysis and interpretation of the data: SS, YKK, BWY, HCK. Literature search: SS, AH, BWY. Critical revision of the article: SS, AH, YKK, HCK, KHP. Final approval of the article: HCK, KHP.

  • Funding This study was supported by grant no. 03-2017-0300 from the SNUH research fund.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available upon reasonable request.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
