Aims To investigate the efficacy of a bi-modality deep convolutional neural network (DCNN) framework to categorise age-related macular degeneration (AMD) and polypoidal choroidal vasculopathy (PCV) from colour fundus images and optical coherence tomography (OCT) images.
Methods A retrospective cross-sectional study was conducted of patients with AMD or PCV who presented to Peking Union Medical College Hospital. Diagnoses of all patients were confirmed by two retinal experts according to the diagnostic gold standards for AMD and PCV. Patients with concurrent retinal vascular diseases were excluded. Colour fundus images and spectral domain OCT images were taken from the dilated eyes of patients and healthy controls, and anonymised. All images were pre-labelled as normal, dry AMD, wet AMD or PCV. ResNet-50 models were used as the backbone, and alternative machine learning models, including random forest classifiers, were constructed for further comparison. For the human-machine comparison, the same test set was diagnosed independently by three retinal experts. All images from the same participant were assigned to only one data partition.
Results On a test set of 143 fundus and OCT image pairs from 80 eyes (20 eyes per group), the bi-modal DCNN demonstrated the best performance, with an accuracy of 87.4%, sensitivity of 88.8% and specificity of 95.6%, and almost perfect agreement with the diagnostic gold standard (Cohen’s κ 0.828), slightly exceeding the best expert (Human1, Cohen’s κ 0.810). The model also outperformed the best expert in recognising PCV.
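The Cohen’s κ values above measure agreement with the gold standard beyond what chance alone would produce. A brief illustration of how such a value can be computed with scikit-learn, using hypothetical labels (the actual test-set labels are not given in the abstract):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical four-class labels for illustration only:
# 0 = normal, 1 = dry AMD, 2 = wet AMD, 3 = PCV
gold = [0, 1, 2, 3, 2, 3, 1, 0]
pred = [0, 1, 2, 3, 2, 2, 1, 0]  # one wet AMD/PCV confusion

kappa = cohen_kappa_score(gold, pred)
print(round(kappa, 3))  # 0.833: "almost perfect" on the Landis-Koch scale
```

By the widely used Landis and Koch benchmarks, κ above 0.81 denotes almost perfect agreement, which is the band both the model (0.828) and the best expert (0.810) fall into.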
Conclusion A bi-modal DCNN for the automated classification of AMD and PCV is accurate and shows promise for public health applications.
Contributors ZX, WY and YC designed the experiments. JY, ZX, WY and YC collected and labelled samples. WW, JZ, DD and XL created and trained the AI models. ZY, DC and FH performed the human-machine comparison test. ZX, WW and XL contributed to the interpretation of the results. ZX and WW wrote the manuscript. WY, XL and YC guided revision of the manuscript.
Funding This work was supported by the Non-profit Central Research Institute Fund of Chinese Academy of Medical Sciences (grant number 2018PT32029); CAMS Initiative for Innovative Medicine (grant number 2018-I2M-AI-001); Pharmaceutical Collaborative Innovation Research Project of Beijing Science and Technology Commission (grant number Z191100007719002); National Natural Science Foundation of China (grant number 61672523); Beijing Natural Science Foundation Haidian Original Innovation Joint Fund (grant number 19L2062); and Beijing Natural Science Foundation (grant number 4202033).
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon reasonable request. The data sets generated during and/or analysed during the current study are not publicly available due to the issue of copyright but are available from the corresponding author on reasonable request.