Deep learning-assisted (automatic) diagnosis of glaucoma using a smartphone
  1. Kenichi Nakahara1,
  2. Ryo Asaoka2,3,4,5,6,
  3. Masaki Tanito7,
  4. Naoto Shibata1,
  5. Keita Mitsuhashi1,
  6. Yuri Fujino2,6,7,
  7. Masato Matsuura6,
  8. Tatsuya Inoue6,8,
  9. Keiko Azuma6,
  10. Ryo Obata6,
  11. Hiroshi Murata6
  1. Queue Inc, Tokyo, Japan
  2. Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan
  3. Seirei Christopher University, Hamamatsu, Shizuoka, Japan
  4. Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan
  5. The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Japan
  6. Department of Ophthalmology, University of Tokyo, Tokyo, Japan
  7. Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
  8. Department of Ophthalmology and Microtechnology, Yokohama City University School of Medicine, Kanagawa, Japan
Correspondence to Dr Ryo Asaoka, Department of Ophthalmology, Toyo University, Bunkyo-ku, Tokyo, Japan; ryoasa0120{at}mac.com

Abstract

Background/aims To validate a deep learning algorithm to diagnose glaucoma from fundus photography obtained with a smartphone.

Methods A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normal subjects. In the testing dataset, fundus photographs were acquired using both an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset, and the trained neural network was then evaluated on its ability to classify each test image as glaucomatous or normal, for images from both cameras. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC).
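The AROC metric used above can be computed from per-eye classifier outputs and ground-truth labels. The sketch below is purely illustrative, with simulated scores (not the authors' model or data), assuming a test set of 73 glaucomatous and 89 normal eyes as in the Methods:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-eye glaucoma probabilities from a trained classifier,
# simulated here; labels: 1 = glaucoma, 0 = normal.
rng = np.random.default_rng(0)
y_true = np.array([1] * 73 + [0] * 89)      # 73 glaucoma eyes, 89 normal eyes
scores = np.where(y_true == 1,
                  rng.normal(0.8, 0.15, y_true.size),   # glaucoma eyes score higher
                  rng.normal(0.3, 0.15, y_true.size))   # normal eyes score lower

aroc = roc_auc_score(y_true, scores)        # area under the ROC curve
print(f"AROC = {aroc:.1%}")
```

An AROC of 50% corresponds to chance-level discrimination and 100% to perfect separation of glaucomatous from normal eyes.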

Results The AROC was 98.9% with the fundus camera and 84.2% with the smartphone. When validation was restricted to eyes with advanced glaucoma (mean deviation < −12 dB, N=26), the AROC was 99.3% with the fundus camera and 90.0% with the smartphone. The difference between the AROC values obtained with the two cameras was statistically significant.
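The Results compare AROCs obtained from two devices on the same eyes. The abstract does not state which statistical test was used; one common approach for paired AROCs is a bootstrap over eyes, sketched here on simulated scores (all data and score distributions are assumptions for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical paired comparison: fundus-camera vs smartphone scores for the
# same 162 eyes, simulated so the camera discriminates better than the phone.
rng = np.random.default_rng(1)
y = np.array([1] * 73 + [0] * 89)
cam = np.where(y == 1, rng.normal(0.9, 0.1, y.size), rng.normal(0.2, 0.1, y.size))
phone = np.where(y == 1, rng.normal(0.7, 0.2, y.size), rng.normal(0.4, 0.2, y.size))

obs_diff = roc_auc_score(y, cam) - roc_auc_score(y, phone)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, y.size, y.size)   # resample eyes with replacement
    if len(set(y[idx])) < 2:                # an AUC needs both classes present
        continue
    diffs.append(roc_auc_score(y[idx], cam[idx]) - roc_auc_score(y[idx], phone[idx]))

ci = np.percentile(diffs, [2.5, 97.5])      # 95% CI for the AROC difference
print(f"AROC difference = {obs_diff:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

If the bootstrap confidence interval excludes zero, the AROC difference between the two devices is significant at the corresponding level.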

Conclusion The usefulness of a deep learning algorithm for automatically screening for glaucoma from smartphone-based fundus photographs was validated. The algorithm showed considerably high diagnostic ability, particularly in eyes with advanced glaucoma.

  • glaucoma
  • imaging

Data availability statement

Data are available on reasonable request.


Footnotes

  • Contributors KN and RA researched the literature and conceived the study. KN, RA and MT were involved in protocol development and gaining ethical approval. All authors were involved in patient recruitment and data analysis. RA wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version.

  • Funding This study was supported in part by grants (nos 19H01114, 18KK0253 and 20K09784 (RA)) from the Ministry of Education, Culture, Sports, Science and Technology of Japan; the Translational Research program, Strategic Promotion for practical application of Innovative medical Technology (TR-SPRINT), from the Japan Agency for Medical Research and Development (AMED) (no grant number); an AIP Acceleration Research grant from the Japan Science and Technology Agency (RA) (no grant number); and grants from the Suzuken Memorial Foundation (no grant number) and the Mitsui Life Social Welfare Foundation (no grant number).

  • Competing interests NS, MT, KM, HM and RA reported that they are coinventors on a patent for the deep learning system used in this study (Tokugan 2017-196870). Potential conflicts of interests are managed according to institutional policies of the University of Tokyo.

  • Provenance and peer review Not commissioned; externally peer reviewed.
