Abstract
Background/aims To validate a deep learning algorithm to diagnose glaucoma from fundus photography obtained with a smartphone.
Methods A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normal subjects. In the testing dataset, fundus photographs were acquired with both an ordinary fundus camera and a smartphone. A deep learning algorithm to diagnose glaucoma was developed using the training dataset. The trained neural network was then evaluated on its predicted diagnoses (glaucoma or normal) over the test datasets, using the images from both the ordinary fundus camera and the smartphone. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC).
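The evaluation step above can be sketched as follows. This is a minimal illustration of computing the AROC from the network's predicted glaucoma probabilities, not the study's actual code; the labels and scores are invented for the example, and the AROC is computed directly via the Mann-Whitney U interpretation (the fraction of glaucoma/normal pairs that the score ranks correctly).

```python
def aroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (glaucoma, normal) pairs ranked correctly by the
    score, with ties counted as half-correct."""
    pos = [s for y, s in zip(labels, scores) if y == 1]  # glaucoma eyes
    neg = [s for y, s in zip(labels, scores) if y == 0]  # normal eyes
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: 1 = glaucoma, 0 = normal;
# scores are the network's predicted probabilities of glaucoma.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.92, 0.85, 0.40, 0.30, 0.15, 0.55]
print(f"AROC = {aroc(labels, scores):.1%}")  # → AROC = 88.9%
```

A perfect classifier ranks every glaucomatous eye above every normal eye (AROC = 100%), while chance performance gives 50%; the study's reported values (98.9% fundus camera, 84.2% smartphone) sit on this scale.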
Results The AROC was 98.9% with the fundus camera and 84.2% with the smartphone. When validation was restricted to eyes with advanced glaucoma (mean deviation < −12 dB, N=26), the AROC was 99.3% with the fundus camera and 90.0% with the smartphone. The AROC values differed significantly between the two cameras.
Conclusion The usefulness of a deep learning algorithm to automatically screen for glaucoma from smartphone-based fundus photographs was validated. The algorithm had a considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
- glaucoma
- imaging
Data availability statement
Data are available on reasonable request.
Footnotes
Contributors KN and RA researched the literature and conceived the study. KN, RA and MT were involved in protocol development and gaining ethical approval. All authors were involved in patient recruitment and data analysis. RA wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version.
Funding This study was supported in part by grants (nos. 19H01114, 18KK0253 and 20K09784 (RA)) from the Ministry of Education, Culture, Sports, Science and Technology of Japan and The Translational Research program; Strategic Promotion for practical application of Innovative medical Technology (TR-SPRINT) from the Japan Agency for Medical Research and Development (AMED) (no grant number), grant AIP acceleration research from the Japan Science and Technology Agency (RA) (no grant number), and grants from the Suzuken Memorial Foundation (no grant number) and the Mitsui Life Social Welfare Foundation (no grant number).
Competing interests NS, MT, KM, HM and RA reported that they are coinventors on a patent for the deep learning system used in this study (Tokugan 2017-196870). Potential conflicts of interests are managed according to institutional policies of the University of Tokyo.
Provenance and peer review Not commissioned; externally peer reviewed.