Automated diagnoses of age-related macular degeneration and polypoidal choroidal vasculopathy using bi-modal deep convolutional neural networks
  1. Zhiyan Xu1,2,
  2. Weisen Wang3,
  3. Jingyuan Yang1,2,
  4. Jianchun Zhao4,
  5. Dayong Ding4,
  6. Feng He1,2,
  7. Di Chen1,2,
  8. Zhikun Yang1,2,
  9. Xirong Li5,
  10. Weihong Yu1,2,
  11. Youxin Chen1,2
  1. Department of Ophthalmology, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
  2. Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
  3. AI & Media Computing Lab, School of Information, Renmin University of China, Beijing, China
  4. Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
  5. Department for state-of-the-art ophthalmology AI research & development, Key Lab of DEKE, Renmin University of China, Beijing, China

Correspondence to Professor Youxin Chen, Department of Ophthalmology, Peking Union Medical College Hospital, Beijing 100730, China; chenyouxinpumch@163.com

Abstract

Aims To investigate the efficacy of a bi-modality deep convolutional neural network (DCNN) framework to categorise age-related macular degeneration (AMD) and polypoidal choroidal vasculopathy (PCV) from colour fundus images and optical coherence tomography (OCT) images.

Methods A retrospective cross-sectional study was conducted of patients with AMD or PCV who presented to Peking Union Medical College Hospital. The diagnosis of every patient was confirmed by two retinal experts according to the diagnostic gold standard for AMD and PCV. Patients with concurrent retinal vascular diseases were excluded. Colour fundus images and spectral domain OCT images were taken from the dilated eyes of patients and healthy controls and anonymised. All images were pre-labelled as normal, dry AMD, wet AMD or PCV. ResNet-50 models served as the backbone, and alternative machine learning models, including random forest classifiers, were constructed for comparison. For the human-machine comparison, the same test set was diagnosed independently by three retinal experts. All images from the same participant were assigned to a single partition subset.
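The abstract describes a bi-modal design — one ResNet-50 backbone per imaging modality feeding a shared classifier over four categories — but does not specify the fusion mechanism. A minimal late-fusion sketch, assuming concatenation of the two pooled backbone feature vectors followed by a linear softmax head; all names, dimensions and the NumPy stand-ins for the backbone outputs are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 4   # normal, dry AMD, wet AMD, PCV
FEAT_DIM = 2048   # pooled feature size of a ResNet-50 backbone

def fuse_and_classify(fundus_feat, oct_feat, weights, bias):
    """Late fusion: concatenate per-modality features, then apply a
    linear softmax head to obtain class probabilities."""
    fused = np.concatenate([fundus_feat, oct_feat])   # shape (2 * FEAT_DIM,)
    logits = fused @ weights + bias                   # shape (NUM_CLASSES,)
    exp = np.exp(logits - logits.max())               # numerically stable softmax
    return exp / exp.sum()

# Stand-ins for backbone outputs on one fundus/OCT image pair.
fundus_feat = rng.standard_normal(FEAT_DIM)
oct_feat = rng.standard_normal(FEAT_DIM)
weights = rng.standard_normal((2 * FEAT_DIM, NUM_CLASSES)) * 0.01
bias = np.zeros(NUM_CLASSES)

probs = fuse_and_classify(fundus_feat, oct_feat, weights, bias)
print(probs.shape, float(probs.sum()))
```

The same concatenated feature vector could equally be passed to the random forest classifiers the study trained for comparison; only the head changes, not the fusion step.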

Results On a test set of 143 fundus and OCT image pairs from 80 eyes (20 eyes per group), the bi-modal DCNN achieved the best performance, with an accuracy of 87.4%, sensitivity of 88.8% and specificity of 95.6%, and almost perfect agreement with the diagnostic gold standard (Cohen's κ 0.828), slightly exceeding the best human expert (Human1, Cohen's κ 0.810). For recognising PCV, the model also outperformed the best expert.

Conclusion A bi-modal DCNN for the automated classification of AMD and PCV is accurate and holds promise for public health applications.

  • macula
  • degeneration
  • retina
  • choroid
  • neovascularisation

Footnotes

  • Contributors ZX, WY and YC designed the experiments. JY, ZX, WY and YC collected and labelled samples. WW, JZ, DD and XL created and trained the AI models. ZY, DC and FH performed the human-machine comparison test. ZX, WW and XL contributed to the interpretation of the results. ZX and WW wrote the manuscript. WY, XL and YC guided revision of the manuscript.

  • Funding This work was supported by The Non-profit Central Research Institute Fund of Chinese Academy of Medical Sciences grant number 2018PT32029, CAMS Initiative for Innovative Medicine grant number 2018-I2M-AI-001, Pharmaceutical collaborative innovation research project of Beijing Science and Technology Commission grant number Z191100007719002, National Natural Science Foundation of China grant number 61672523, Beijing Natural Science Foundation Haidian original innovation joint fund grant number 19L2062 and Beijing Natural Science Foundation grant number 4202033.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement The data sets generated and/or analysed during the current study are not publicly available because of copyright restrictions but are available from the corresponding author on reasonable request.
