Abstract
Background/aims To develop and validate a deep learning model for automated segmentation of multitype retinal fluid using optical coherence tomography (OCT) images.
Methods We retrospectively collected a total of 2814 completely anonymised OCT images with subretinal fluid (SRF) and intraretinal fluid (IRF) from 141 patients between July 2018 and June 2020, constituting our in-house retinal OCT dataset. On this dataset, we developed a novel semisupervised retinal fluid segmentation deep network (Ref-Net) to automatically identify SRF and IRF in a coarse-to-refine fashion. We performed quantitative and qualitative analyses on the model’s performance while verifying its generalisation ability by using our in-house retinal OCT dataset for training and an unseen Kermany dataset for testing. We also determined the importance of major components in the semisupervised Ref-Net through extensive ablation. The main outcome measures were Dice similarity coefficient (Dice), sensitivity (Sen), specificity (Spe) and mean absolute error (MAE).
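The outcome measures above (Dice, sensitivity, specificity and MAE) are standard pixel-wise statistics for binary segmentation masks. As an illustration only (the function name and array-based formulation are our own, not taken from the paper), they are typically computed from the confusion-matrix counts as follows:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise Dice, sensitivity, specificity and MAE for two
    boolean masks of the same shape (prediction vs. ground truth)."""
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    tn = np.logical_and(~pred, ~truth).sum()  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)        # overlap of the two masks
    sen = tp / (tp + fn)                      # fraction of fluid pixels found
    spe = tn / (tn + fp)                      # fraction of background kept
    mae = np.abs(pred.astype(float) - truth.astype(float)).mean()
    return dice, sen, spe, mae
```

These would be averaged over the test images separately for each fluid type (SRF and IRF) to obtain the percentages reported in the Results.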
Results Trained on only a handful of labelled OCT images, our model achieved higher performance (Dice: 81.2%, Sen: 87.3%, Spe: 98.8% and MAE: 1.1% for SRF; Dice: 78.0%, Sen: 83.6%, Spe: 99.3% and MAE: 0.5% for IRF) than most cutting-edge segmentation models. It reached expert-level performance with only 80 labelled OCT images and even surpassed two of three ophthalmologists with 160 labelled OCT images. It also demonstrated satisfactory generalisation capability on an unseen dataset.
Conclusion The semisupervised Ref-Net required only a few labelled OCT images to achieve outstanding performance in automated segmentation of multitype retinal fluid, and thus has the potential to assist clinicians in the management of ocular disease.
- imaging
- retina
- diagnostic tests/investigation
Data availability statement
Data are available from the corresponding author upon reasonable request.
Footnotes
Contributors Conceptualisation, investigation, writing of the original draft and preparation: FL and WZP; methodology, supervision, funding acquisition and project administration: FL; software, validation and visualisation: WZP and WJX; formal analysis: FL; resources, data curation and writing (review and editing): HDZ; responsible for the overall content as the guarantor: FL. All authors have read and agreed to the published version of the manuscript.
Funding This research was funded by the National Key Research and Development Program of China (grant number 2020YFC2008704) and the National Natural Science Foundation of China (grant number 51675321).
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.