Abstract
Background Deep learning (DL) shows promise for detecting glaucoma. However, patients’ privacy and data security are major concerns when data are pooled across institutions for model development. We developed a privacy-preserving DL model using the federated learning (FL) paradigm to detect glaucoma from optical coherence tomography (OCT) images.
Methods This is a multicentre study. The FL paradigm consisted of a ‘central server’ and seven eye centres in Hong Kong, the USA and Singapore. Each centre first trained a model locally on its own OCT optic disc volumetric dataset and then uploaded its model parameters to the central server. The central server used the FedProx algorithm to aggregate all centres’ model parameters. The aggregated parameters were then redistributed to each centre for local model optimisation. We experimented with three three-dimensional (3D) networks to evaluate the stability of the FL paradigm. Lastly, we tested the FL model on two prospectively collected unseen datasets.
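To make the communication round concrete, below is a minimal sketch of one FedProx round, assuming PyTorch; the function names (local_update, aggregate) and hyperparameters (mu, lr) are illustrative assumptions, not the study’s actual code. Each centre minimises its local loss plus a proximal term penalising drift from the global weights, and the server averages the returned parameters weighted by each centre’s sample count before redistributing them.

```python
# Minimal sketch of one FedProx communication round (illustrative, not the authors' code).
import torch

def local_update(model, global_params, loader, loss_fn, mu=0.01, lr=1e-3, epochs=1):
    """Centre-side step: local training with a proximal term tying weights to the global model."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            # FedProx proximal term: (mu / 2) * ||w - w_global||^2
            prox = sum((w - wg).pow(2).sum()
                       for w, wg in zip(model.parameters(), global_params))
            (loss + 0.5 * mu * prox).backward()
            opt.step()
    # Only model parameters leave the centre; the OCT volumes never do.
    return [p.detach().clone() for p in model.parameters()]

def aggregate(centre_params, n_samples):
    """Server-side step: sample-size-weighted average of all centres' parameters."""
    total = sum(n_samples)
    agg = [torch.zeros_like(p) for p in centre_params[0]]
    for params, n in zip(centre_params, n_samples):
        for a, p in zip(agg, params):
            a.add_(p, alpha=n / total)
    return agg  # redistributed to every centre for the next round
```

The proximal term is what distinguishes FedProx from plain federated averaging: it stabilises training when the centres’ datasets are statistically heterogeneous, as is typical across hospitals.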
Results We used 9326 volumetric OCT scans from 2785 subjects. The FL model performed consistently well with the three networks across the seven centres (accuracies 78.3%–98.5%, 75.9%–97.0% and 78.3%–97.5%, respectively) and stably on the two unseen datasets (accuracies 84.8%–87.7%, 81.3%–84.8% and 86.0%–87.8%, respectively). The FL model achieved non-inferior performance in classifying glaucoma compared with the traditional model and significantly outperformed the individual models.
Conclusion The 3D FL model could leverage all the datasets and achieve generalisable performance, without data exchange across centres. This study demonstrated an OCT-based FL paradigm for glaucoma identification with ensured patient privacy and data security, charting another course toward the real-world transition of artificial intelligence in ophthalmology.
Keywords: Glaucoma
Data availability statement
Data are available upon reasonable request. The study protocol, the statistical analysis plan and the packaged model are likewise available upon reasonable request; such requests are decided on a case-by-case basis. Proposals should be directed to carolcheung@cuhk.edu.hk.
Footnotes
ARR and XW are joint first authors.
P-AH, CCT and CYC contributed equally.
Contributors CYC and ARR conceived and designed the study. CCT, C-PP, HY and NML supervised the study design and data collection. XW developed and validated the deep learning model supervised by P-AH with the help of clinical input from ARR, CYC, PPC, NCYC, MOMW and CCT. PPC, MOMW and CCT provided data from the Chinese University of Hong Kong Eye Centre and Hong Kong Eye Hospital. NCYC, WWKY and ALY provided data from PWH and AHNH. H-WY provided the data from TMEC. SSM and RTC provided the data from Stanford. Y-CT, C-YC and TYW provided the data from SERI. ARR performed the statistical analysis. ARR and XW wrote the initial draft. CYC is guarantor.
Funding This study was funded by the Innovation and Technology Fund, Hong Kong SAR, China (MRP/056/20X).
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any errors and/or omissions arising from translation and adaptation or otherwise.