Article Text

Detection of features associated with neovascular age-related macular degeneration in ethnically distinct data sets by an optical coherence tomography-trained deep learning algorithm
  1. Tyler Hyungtaek Rim1,2,
  2. Aaron Y Lee3,
  3. Daniel S Ting1,2,
  4. Kelvin Teo1,2,
  5. Bjorn Kaijun Betzler4,
  6. Zhen Ling Teo1,
  7. Tea Keun Yoo5,
  8. Geunyoung Lee6,
  9. Youngnam Kim6,
  10. Andrew C Lin7,
  11. Seong Eun Kim8,
  12. Yih Chung Tham1,2,
  13. Sung Soo Kim9,
  14. Ching-Yu Cheng1,2,
  15. Tien Yin Wong1,2,
  16. Chui Ming Gemmy Cheung1,2
  1. 1Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  2. 2Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
  3. 3Department of Ophthalmology, University of Washington, Seattle, Washington, USA
  4. 4Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  5. 5Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, Korea (the Republic of)
  6. 6Medi Whale Inc, Seoul, South Korea
  7. 7Department of Ophthalmology, NYU Langone Health, New York University School of Medicine, New York, New York, USA
  8. 8Department of Ophthalmology, CHA Bundang Medical Center, CHA University, Seongnam, South Korea
  9. 9Department of Ophthalmology, Yonsei University College of Medicine, Severance Hospital, Institute of Vision Research, Seoul, South Korea
  1. Correspondence to Sung Soo Kim, Department of Ophthalmology, Yonsei University College of Medicine, Severance Hospital Seodaemun-gu 169856, South Korea (semekim@yuhs.ac); and Chui Ming Gemmy Cheung, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751 (gemmy.cheung.c.m@snec.com.sg)

Abstract

Background The ability of deep learning (DL) algorithms to identify eyes with neovascular age-related macular degeneration (nAMD) from optical coherence tomography (OCT) scans has been previously established. We herewith evaluate the ability of a DL model, showing excellent performance on a Korean data set, to generalise to an American data set despite ethnic differences. In addition, expert graders were surveyed to verify whether the DL model was appropriately identifying lesions indicative of nAMD on the OCT scans.

Methods Model development data set—12 247 OCT scans from South Korea; external validation data set—91 509 OCT scans from Washington, USA. In both data sets, normal eyes or eyes with nAMD were included. After internal testing, the algorithm was sent to the University of Washington, USA, for external validation. Area under the receiver operating characteristic curve (AUC) and precision–recall curve (AUPRC) were calculated. For model explanation, saliency maps were generated using Guided GradCAM.
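The two summary metrics reported in the Methods, AUC and AUPRC, can be computed directly from the model's per-scan probabilities. The sketch below is illustrative only: the labels and scores are toy values, not study data, and the use of scikit-learn is an assumption (the paper does not state its tooling).

```python
# Hedged sketch: computing AUC and AUPRC from predicted probabilities,
# assuming scikit-learn. Labels and scores are toy values, not study data.
from sklearn.metrics import roc_auc_score, average_precision_score

y_true = [1, 1, 1, 0, 1, 0, 0, 0]                     # 1 = nAMD, 0 = normal
y_score = [0.9, 0.8, 0.7, 0.65, 0.6, 0.4, 0.3, 0.2]   # model probabilities

auc = roc_auc_score(y_true, y_score)              # area under ROC curve
auprc = average_precision_score(y_true, y_score)  # area under precision-recall curve
print(f"AUC = {auc:.3f}, AUPRC = {auprc:.3f}")
```

AUPRC is the more informative of the two when classes are imbalanced (as in a screening population where nAMD is rare), which is presumably why the authors report both.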

Results On external validation, AUC and AUPRC remained high at 0.952 (95% CI 0.942 to 0.962) and 0.891 (95% CI 0.875 to 0.908) at the individual level. Saliency maps showed that in normal OCT scans, the fovea was the main area of interest; in nAMD OCT scans, the appropriate pathological features were areas of model interest. Survey of 10 retina specialists confirmed this.
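The saliency maps described above were generated with Guided Grad-CAM, which combines guided backpropagation with the Grad-CAM class-activation step. The core Grad-CAM weighting can be sketched framework-independently: channel weights are the spatially averaged gradients of the class score with respect to a convolutional layer's activations, and the map is the ReLU of the weighted channel sum. The function below is a minimal NumPy illustration of that step only, not the authors' implementation.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM weighting step (illustrative, not the study code).

    activations: array of shape (C, H, W), feature maps of a conv layer.
    gradients:   array of shape (C, H, W), d(class score)/d(activations).
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_c: global average pool
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    return np.maximum(cam, 0)                         # ReLU keeps positive evidence
```

In the full Guided Grad-CAM method, this coarse map is upsampled to the input resolution and multiplied elementwise with the guided-backpropagation saliency to localise fine pathological features such as subretinal fluid.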

Conclusion Our DL algorithm exhibited high performance for nAMD identification in a Korean population, and generalised well to an ethnically distinct, American population. The model correctly focused on the differences within the macular area to extract features associated with nAMD.

  • Degeneration
  • Epidemiology
  • Neovascularisation
  • Retina


Footnotes

  • THR and AYL contributed equally.

  • Contributors THR, AYL, SSK, C-YC, TYW and CMGC conceptualised the study. THR, AYL, DST, KYCT, BKB, TKY, TZL, GL, YK, AL, YCT and SEK reviewed the literature. THR, AYL, SSK, C-YC, TYW and CMGC designed the study. THR, AYL, GL, YK and SSK collected the data. THR, GL and YK developed the algorithm. THR, AYL, GL, YK, YCT and AL analysed data. All authors contributed to data interpretation. THR, KYCT, BKB, TKY, TZL, YCT, C-YC, TYW and CMGC drafted the manuscript. YCT, DST, TYW and CMGC provided critical revision.

  • Funding This work was supported by the Agency for Science, Technology and Research of Singapore (A19D1b0095), the National Medical Research Council of Singapore (NMRC/OFLCG/004a/2018; NMRC/CIRG/1488/2018), the National Institutes of Health Grants NIH/NEI K23EY029246, and by an unrestricted grant from Research to Prevent Blindness. The sponsors or funding organisations had no role in the design or conduct of this research.

  • Competing interests THR was a scientific advisor to a start-up company called Medi Whale Inc. AYL reports support from the US Food and Drug Administration, grants from Santen, Carl Zeiss Meditec and Novartis, and personal fees from Genentech, Topcon and Verana Health, outside of the submitted work; this article does not reflect the opinions of the Food and Drug Administration. GL and YK are employees of Medi Whale Inc. TYW and DST hold a patent for a deep learning system developed for use in ophthalmology.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data cannot be shared publicly due to the violation of patient privacy and lack of informed consent for data sharing. Data are available from the Yonsei University, Department of Ophthalmology (contact Prof. Sung Soo Kim, semekim@yuhs.ac) for researchers who meet the criteria for access to confidential data.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
