
Prediction of causative genes in inherited retinal disorder from fundus photography and autofluorescence imaging using deep learning techniques
  1. Yu Fujinami-Yokokawa1,2,3,4,
  2. Hideki Ninomiya2,
  3. Xiao Liu1,
  4. Lizhu Yang1,
  5. Nikolas Pontikos1,3,5,
  6. Kazutoshi Yoshitake6,
  7. Takeshi Iwata6,
  8. Yasunori Sato4,7,
  9. Takeshi Hashimoto4,8,
  10. Kazushige Tsunoda9,
  11. Hiroaki Miyata2,4,
  12. Kaoru Fujinami1,3,5
  13. The Japan Eye Genetics Study (JEGC) Group
    1. Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
    2. Department of Health Policy and Management, School of Medicine, Keio University, Tokyo, Japan
    3. UCL Institute of Ophthalmology, UCL, London, UK
    4. Graduate School of Health Management, Keio University, Tokyo, Japan
    5. Division of Inherited Eye Disease, Medical Retina, Moorfields Eye Hospital, London, UK
    6. Division of Molecular and Cellular Biology, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
    7. Department of Preventive Medicine and Public Health, Keio University School of Medicine, Tokyo, Japan
    8. Sports Medicine Research Center, Keio University, Tokyo, Japan
    9. Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan

    Correspondence to Dr Kaoru Fujinami, Laboratory of Visual Physiology/Ophthalmic Genetics, Tokyo Iryo Center, Meguro-ku, Tokyo, Japan; k.fujinami@ucl.ac.uk

    Abstract

    Background/Aims To investigate the utility of a data-driven deep learning approach in patients with inherited retinal disorder (IRD) and to predict the causative genes based on fundus photography and fundus autofluorescence (FAF) imaging.

    Methods Clinical and genetic data from 1302 subjects from 729 genetically confirmed families with IRD registered with the Japan Eye Genetics Consortium were reviewed. Three categories of genetic diagnosis were selected, based on the high prevalence of their causative genes: Stargardt disease (ABCA4), retinitis pigmentosa (EYS) and occult macular dystrophy (RP1L1). Fundus photographs and FAF images were cropped in a standardised manner with a macro algorithm. Images for training/testing were selected using a randomised, fourfold cross-validation method. An application program interface was established to reach the target learning accuracy of concordance (>80%) between the genetic diagnosis and the machine diagnosis (ABCA4, EYS, RP1L1 and normal).

    Results A total of 417 images from 156 Japanese subjects were examined, including 115 genetically confirmed patients with disease caused by the three prevalent causative genes and 41 normal subjects. The mean overall test accuracy was 88.2% for fundus photographs and 81.3% for FAF images. The mean overall sensitivity/specificity values were 88.3%/97.4% for fundus photographs and 81.8%/95.5% for FAF images.

    Conclusion A novel application of deep neural networks in the prediction of the causative IRD genes from fundus photographs and FAF images, with a high prediction accuracy of over 80%, was highlighted. These achievements will extensively promote the quality of medical care by facilitating early diagnosis, especially by non-specialists, improving access to care, reducing the cost of referrals, and preventing unnecessary clinical and genetic testing.

    • retina
    • genetics
    • imaging


    This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


    Introduction

    Inherited retinal disorders (IRDs) are an important cause of irreversible blindness worldwide, particularly among working-age adults and children.1–3 In England and Wales, an epidemiological survey reported that IRD accounted for 20.2% of blindness in working-age adults in 2009–2010.2 The provision of accurate diagnosis of IRD is still difficult or unavailable in most of the world due to the limited access to multidisciplinary teams of specialists who can perform specific clinical investigations, including retinal imaging as well as genetic testing, interpretation of genetic results (ie, genetic diagnosis), and counselling.4 There are currently few approved treatment approaches for IRD.5 6 Thus, it is widely recognised that the development of accurate gene-specific diagnosis and novel therapeutic interventions is crucial to meet the urgent unmet needs of people suffering from blindness due to IRD.5–7

    Recently, deep learning techniques have been successfully applied in various medical fields, and the utilisation of machine learning-assisted diagnosis has been extensively promoted.8–10 A landmark deep convolutional neural network (CNN) was reported in 2012.11 Since then, several software frameworks have been developed for CNNs, including TensorFlow, PyTorch, Caffe and Theano.9

    In particular, deep learning based on clinical images has been rapidly developed to predict diagnosis of common ocular disorders such as diabetic retinopathy and age-related macular degeneration.8 9 12 13 Abràmoff et al 13 reported an artificial intelligence (AI) system with high diagnostic accuracy (sensitivity: 87.2%, specificity: 90.7%) for automated detection of diabetic retinopathy and diabetic macular oedema, based on fundus photographs and spectral-domain optical coherence tomographic (SD-OCT) images, demonstrating AI’s ability to bring specialty-level diagnostics to primary care settings.

    In contrast, AI-oriented bioinformatic engineering had never been applied to ophthalmic orphan diseases such as IRD until 2019, even though IRD is the most prevalent cause of blindness in the working-age population and requires the largest number of hospital visits,14 leading to a major health burden and high economic costs (£523.3 million in the UK).15 Our team first reported the utility of an AI-guided diagnostic system for IRD based on SD-OCT images in 2019, illustrating a high test accuracy (98.5%) for predicting prevalent causative IRD genes.16

    The purpose of this study was to investigate the utility of deep learning and establish an AI-guided automatic diagnosis system mainly targeting non-specialists from fundus photographs and fundus autofluorescence (FAF) images in a large Japanese cohort with IRD.

    Materials and methods

    Informed consent was obtained from all participants to undergo clinical and genetic testing and for the use of their medical data in the current study.

    Participants

    Participants with a clinical diagnosis of IRD and available genetic data were studied until 2020 as part of the Japan Eye Genetics Consortium studies. A total of 1302 subjects from 729 families for whom genotype–phenotype association analyses had been completed were enrolled.17–20 Clinical diagnosis was performed based on comprehensive clinical investigations. Genetic diagnosis was obtained based on whole-exome sequencing with targeted analysis of 301 retinal disease-associated genes.

    The most common retinal diseases, caused by the three major genes, were selected: EYS (Mendelian Inheritance in Man [MIM]: 612424), RP1L1 (MIM: 608581) and ABCA4 (MIM: 601691). The proportions of EYS-associated retinal disease (EYS retinopathy; retinitis pigmentosa (RP) and others), RP1L1 retinopathy (occult macular dystrophy) and ABCA4 retinopathy (Stargardt disease and others) among all Japanese cases of IRD were 16%, 8% and 5%, respectively.18 20

    Fundus photography and FAF imaging

    Fundus photography and FAF imaging were performed with the following equipment: TRC-50DX (Topcon, Tokyo, Japan), TRC-NW8 (Topcon), CR-2 PLUS AF Digital Non-Mydriatic Retinal Camera (Canon, Tokyo, Japan) and HRA II (excitation light 488 nm, barrier filter 500 nm; Heidelberg Engineering, Heidelberg, Germany).

    Categories of genetic diagnosis

    The fundus photographs and FAF images of four categories of patients were extracted. Encrypted clinical images, including fundus photographs and FAF images, were accessed by a certified ophthalmic genetic expert (KF). The categories, defined by genetic diagnosis, were as follows: category 1: ABCA4 retinopathy; category 2: RP1L1 retinopathy; category 3: EYS retinopathy; and category 4: normal. Typical images for each category were selected by a certified ophthalmic genetic expert.

    Processes for training and testing

    Fundus photographs and FAF images of both eyes were automatically or manually cropped into a standardised square shape (with the fovea aligned at the centre) and adjusted to a spatial resolution of 72 pixels (pix)/inch (500×500 pix). The images were cropped in a standardised way with a macro algorithm in Adobe Photoshop CC (V.20.0.4; Adobe, San Jose, California). One image per eye at the latest examination was selected for the learning/testing processing by a certified ophthalmic genetic expert (KF). The pipeline analyses were performed on a web-based deep learning platform (MedicMind) that uses TensorFlow Inception V.3 (Alphabet, Mountain View, California). The learning parameters of the CNN were determined from the applied data set.
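
    The standardised cropping step described above can be reproduced outside Photoshop. Below is a minimal sketch in Python using Pillow, assuming the fovea coordinates are supplied by manual annotation; the function name, coordinates and crop radius are illustrative, not part of the study's macro algorithm.

    ```python
    from PIL import Image

    TARGET_SIZE = 500  # 500x500 pix, as specified in the Methods


    def crop_square_on_fovea(path: str, fovea_x: int, fovea_y: int,
                             half_width: int) -> Image.Image:
        """Crop a square of side 2*half_width centred on the fovea and
        resize it to the standard 500x500 resolution."""
        img = Image.open(path)
        left = fovea_x - half_width
        upper = fovea_y - half_width
        box = (left, upper, left + 2 * half_width, upper + 2 * half_width)
        # Pillow zero-fills any part of the box that falls outside the image.
        return img.crop(box).resize((TARGET_SIZE, TARGET_SIZE), Image.LANCZOS)


    # Hypothetical usage with a manually annotated fovea position:
    # standardised = crop_square_on_fovea("fundus_OD.png", 1530, 1210, 900)
    # standardised.save("fundus_OD_500px.png")
    ```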

    Four categories based on clinical and genetic diagnoses were applied: ABCA4 retinopathy, EYS retinopathy, RP1L1 retinopathy and normal (figure 1). After preparation of images for the four gene categories, patients/subjects were randomly split into a training set and a test set at a 3:1 ratio (online supplemental figure 1). This random split for creating a training/test set was performed at the patient/subject level to avoid a confounding effect of the similarity between the eyes of the same patient/subject. Evaluations were conducted with a randomised, fourfold cross-validation method, and the accuracy of concordance (target: >80%) between the genetic diagnosis and the machine diagnosis was calculated during training to fix the application program interface (API) for further testing. Saliency maps of characteristic features based on fundus photographs and FAF images detected by the API were investigated in both concordant and discordant cases between the machine diagnosis and the original genetic diagnosis.
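
    A patient-level split of this kind can be sketched with scikit-learn's GroupKFold, which keeps both eyes of one subject on the same side of every fold. This is an illustrative reconstruction, not the study's actual pipeline; the DataFrame layout and file names are assumptions.

    ```python
    import pandas as pd
    from sklearn.model_selection import GroupKFold

    # One row per image; both eyes of a subject share a patient_id.
    images = pd.DataFrame({
        "file":       ["p1_od.png", "p1_os.png", "p2_od.png", "p2_os.png",
                       "p3_od.png", "p4_od.png"],
        "patient_id": ["p1", "p1", "p2", "p2", "p3", "p4"],
        "label":      ["ABCA4", "ABCA4", "EYS", "EYS", "RP1L1", "normal"],
    })

    # Four folds: each trains on ~3/4 of the patients and tests on ~1/4,
    # matching the paper's 3:1 patient-level ratio. GroupKFold itself is
    # deterministic; shuffling the patients beforehand approximates the
    # paper's randomised split.
    gkf = GroupKFold(n_splits=4)
    for fold, (tr, te) in enumerate(gkf.split(images, groups=images["patient_id"])):
        train, test = images.iloc[tr], images.iloc[te]
        # No patient ever appears in both sets, so paired eyes cannot leak.
        assert set(train["patient_id"]).isdisjoint(test["patient_id"])
        print(f"fold {fold}: train={len(train)} images, test={len(test)} images")
    ```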

    Figure 1

    Representative cases from the four categories. Fundus photographs and fundus autofluorescence (FAF) images showing characteristic phenotypical features of the four categories are demonstrated: ABCA4 retinopathy, EYS retinopathy, RP1L1 retinopathy and normal. Fundus photographs: ABCA4 retinopathy: macular atrophy and yellowish flecks at the level of the RPE; EYS retinopathy: peripheral retinal atrophy, vessel attenuation and pigmentation at the level of the RPE; RP1L1 retinopathy: normal fundus appearance; normal: normal appearance. FAF images: ABCA4 retinopathy: an area of low AF density at the macula and foci of abnormal AF; EYS retinopathy: diffuse abnormality of low AF in the peripheral retina and a ring of AF enhancement at the macula; RP1L1 retinopathy: abnormal AF density at the fovea; normal: normal appearance. FAF, fundus autofluorescence; RPE, retinal pigment epithelium.

    Integrative assessment of learning performance

    Integrative assessment of learning performance was conducted for each diagnostic category based on the sensitivity and specificity of the prediction results from fundus photographs and FAF images. The area under the curve (AUC) was calculated from the probabilities for the overall prediction results of fundus photography and FAF imaging (JMP, SAS Institute, Cary, North Carolina).
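
    For reference, per-category sensitivity and specificity of the kind reported in tables 1–3 can be derived from a confusion matrix in a one-versus-rest manner, as sketched below with scikit-learn; the labels and predictions shown are placeholders, not study data.

    ```python
    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    CLASSES = ["ABCA4", "EYS", "RP1L1", "normal"]

    y_true = np.array([0, 1, 2, 3, 0, 1, 2, 3])  # genetic diagnosis (placeholder)
    y_pred = np.array([0, 1, 2, 3, 0, 1, 3, 3])  # machine diagnosis (placeholder)

    cm = confusion_matrix(y_true, y_pred, labels=list(range(len(CLASSES))))
    for i, name in enumerate(CLASSES):
        tp = cm[i, i]                 # correctly predicted as this category
        fn = cm[i].sum() - tp         # this category, predicted as another
        fp = cm[:, i].sum() - tp      # another category, predicted as this
        tn = cm.sum() - tp - fn - fp  # everything else
        print(f"{name}: sensitivity={tp / (tp + fn):.1%}, "
              f"specificity={tn / (tn + fp):.1%}")

    # AUC over the four categories, given per-class probabilities from a model:
    # proba = model.predict_proba(X)                        # shape (n_images, 4)
    # auc = roc_auc_score(y_true, proba, multi_class="ovr")
    ```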

    Results

    Participants

    A total of 417 images from 156 Japanese subjects were examined, including 115 molecularly proven patients with disease caused by the three major causative genes and 41 normal subjects. The detailed filtration/selection flow of the examined images is presented in online supplemental figure 2.

    Fundus photography

    From 149 probands, 1107 images were reviewed in total. Adequate data quality was confirmed for 200 images from 102 probands (68.5%). Forty-seven probands were excluded from the analyses due to an inadequate data set or data quality, as determined by the certified ophthalmic genetic expert; the exclusion criteria included missing data, the presence of recording artefacts and similar issues. Images of both eyes were included for 98 probands, and images of one eye for 4 probands (1 patient with ABCA4 retinopathy, 2 with EYS retinopathy and 1 with RP1L1 retinopathy).

    The median age at examination of the 102 probands was 49.5 years (range, 11–89 years). There were 58 females (56.9%) and 44 males (43.1%). All 102 probands were originally from Japan (East Asia). Fifty-nine images from 30 age-matched control subjects without ocular disorders were selected for the analysis. One image from one normal subject was unavailable.

    In total, a data set containing 259 images from 132 patients/subjects was applied for deep learning; this data set comprised 41 images of ABCA4 retinopathy, 94 images of EYS retinopathy, 65 images of RP1L1 retinopathy and 59 normal images.

    The training and test results of deep learning performance based on fundus photographs for prediction of the three causative genes (ABCA4, EYS, RP1L1) are presented in table 1.

    Table 1

    Detailed training and test results based on fundus photography images for predicting the causative genes in inherited retinal disorder

    The results for training accuracy are summarised in online supplemental table 1. A sufficient training accuracy (over 87.5%) was obtained in each experiment. The mean training accuracy of the four repeated experiments was 91.8%. The mean sensitivity per gene category was 88.2% for ABCA4 retinopathy, 97.8% for EYS retinopathy, 95.6% for RP1L1 retinopathy and 80.2% for normal. The mean specificity was 98.2% for ABCA4 retinopathy, 97.5% for EYS retinopathy, 94.8% for RP1L1 retinopathy and 98.5% for normal.

    The detailed test accuracy is shown in table 1. The mean test accuracy of the four repeated experiments was 88.2%. The mean sensitivity was 88.2% for ABCA4 retinopathy, 88.4% for EYS retinopathy, 94.4% for RP1L1 retinopathy and 82.9% for normal. The mean specificity was 100% for ABCA4 retinopathy, 98.1% for EYS retinopathy, 92.9% for RP1L1 retinopathy and 96.7% for normal, according to the four replications of the experiment.

    Overall, considerably high specificity (>95%) was revealed for ABCA4 retinopathy, EYS retinopathy and normal. High specificity (>90%) was identified in RP1L1 retinopathy. High sensitivity (>90%) was detected in RP1L1 retinopathy.

    Saliency maps of the characteristic features of eight representative fundus photographs detected by the API are presented in figure 2. Characteristic features of each category were identified in the concordant cases: macular atrophy and flecks in ABCA4 retinopathy, peripheral atrophic area and attenuated vessels in EYS retinopathy, macular vessels in RP1L1 retinopathy, and no particular findings in the normal category. The discordant cases demonstrated atypical features: peripheral atrophic changes in ABCA4 retinopathy, depigmented changes at the macula in EYS retinopathy, no particular findings in RP1L1 retinopathy and macular vessels in a normal subject.
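
    The paper does not state how the platform computes its saliency maps; one standard technique that produces maps of this kind is vanilla gradient saliency, sketched below in TensorFlow/Keras under the assumption of a trained four-class classifier taking 500×500 RGB inputs.

    ```python
    import numpy as np
    import tensorflow as tf


    def saliency_map(model: tf.keras.Model, image: np.ndarray) -> np.ndarray:
        """Return an (H, W) map of |d top-class score / d pixel|."""
        x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)                          # track gradients w.r.t. pixels
            scores = model(x, training=False)      # (1, 4) class scores
            top_class_score = tf.reduce_max(scores, axis=-1)
        grads = tape.gradient(top_class_score, x)  # (1, H, W, C)
        # Collapse colour channels; bright pixels most influenced the prediction.
        return tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()


    # Hypothetical usage with a trained classifier and a preprocessed image:
    # sal = saliency_map(model, preprocessed_fundus)   # shape (500, 500)
    ```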

    Figure 2

    Saliency maps of the characteristic features of eight representative fundus photographs and eight fundus autofluorescence images detected by the application program interface. Saliency maps of characteristic features detected by the application program interface developed with deep learning are presented, indicating concordant and discordant results between the machine diagnosis and the original genetic diagnosis. (A) Fundus photography. Characteristic features of macular atrophy and flecks in ABCA4 retinopathy and peripheral atrophic areas and attenuated vessels in EYS retinopathy were demonstrated in the concordant cases. Macular vessels in RP1L1 retinopathy and no particular findings in the normal category were identified in the concordant cases. Peripheral atrophic changes in ABCA4 retinopathy and depigmented changes at the macula in EYS retinopathy were featured in the discordant cases. Macular vessels in a normal subject and no particular findings in RP1L1 retinopathy were noted in the discordant cases. (B) Fundus autofluorescence (FAF) imaging. Characteristic features of an area of low AF density at the macula and foci of abnormal AF in ABCA4 retinopathy, and a diffuse abnormality of low AF in the peripheral retina with a ring of AF enhancement at the macula in EYS retinopathy, were demonstrated in the concordant cases. Vessels were intensively featured in RP1L1 retinopathy, and vessels and macular pigment were prominent in the normal category. A widespread area of low AF in ABCA4 retinopathy and an area of low AF at the macula with diffuse peripheral atrophies of low AF in EYS retinopathy were featured in the discordant cases. Vessels and macular pigment were featured in RP1L1 retinopathy, and vessels were prominent in a normal subject in the discordant cases.

    FAF imaging

    From 149 probands, 1002 images were reviewed. Adequate data quality was confirmed for 115 images from 59 probands (39.6%). Ninety probands were excluded from the analysis due to an inadequate data set or data quality, as determined by the certified ophthalmic genetic expert; the exclusion criteria included missing data, the presence of recording artefacts and similar issues. Images of both eyes were included for 56 probands, and images of one eye for 3 probands (1 patient with ABCA4 retinopathy, 1 with EYS retinopathy and 1 with RP1L1 retinopathy).

    The median age at examination of the 59 probands was 47.0 years (range, 11–85 years). There were 31 females (52.5%) and 28 males (47.5%). All 59 probands were originally from Japan (East Asia). Forty-three images from 23 age-matched control subjects without ocular disorders were selected for the analysis. Three images from three normal subjects were unavailable.

    In total, a data set containing 158 images of 82 patients/subjects was applied for deep learning, comprising 37 images of ABCA4 retinopathy, 35 images of EYS retinopathy, 43 images of RP1L1 retinopathy and 43 normal images.

    The training and test results for FAF-based prediction of the three causative genes (ABCA4, EYS, RP1L1) are presented in table 2.

    Table 2

    Detailed training and test results based on fundus autofluorescence images for predicting the causative genes in inherited retinal disorder

    The results for training accuracy are summarised in online supplemental table 2. A sufficient training accuracy (over 81.3%) was obtained in each experiment. The mean training accuracy of the four repeated experiments was 90.7%. The mean sensitivity per gene category was 100% for ABCA4 retinopathy, 80.7% for EYS retinopathy, 85.0% for RP1L1 retinopathy and 85.7% for normal. The mean specificity was 95.1% for ABCA4 retinopathy, 100% for EYS retinopathy, 98.9% for RP1L1 retinopathy and 93.2% for normal.

    The detailed test accuracy is shown in table 2. The mean test accuracy of the four repeated experiments was 81.3%. The mean sensitivity was 97.5% for ABCA4 retinopathy, 70.7% for EYS retinopathy, 64.9% for RP1L1 retinopathy and 92.9% for normal. The mean specificity was 94.8% for ABCA4 retinopathy, 99.2% for EYS retinopathy, 96.3% for RP1L1 retinopathy and 84.3% for normal, according to the four replications of the experiment.

    Overall, a considerably high level of sensitivity (>95%) was revealed in ABCA4 retinopathy, and high sensitivity (>90%) was identified in the normal category. Considerably high specificity (>95%) was detected in EYS retinopathy and RP1L1 retinopathy, and high specificity (>90%) was detected in ABCA4 retinopathy. Low sensitivity (<80%) was found in EYS retinopathy, and considerably low sensitivity (<70%) was observed in RP1L1 retinopathy.

    Saliency maps of the characteristic features of eight representative FAF images detected by the API are presented in figure 2. Characteristic features of each category were identified in the concordant cases: an area of low AF density at the macula and foci of abnormal AF in ABCA4 retinopathy, diffuse abnormality of low AF in the peripheral retina and a ring of AF enhancement at the macula in EYS retinopathy, retinal vessels in RP1L1 retinopathy, and vessels and macular pigment in the normal category. The discordant cases demonstrated atypical features: a widespread area of low AF in ABCA4 retinopathy, an area of low AF at the macula and diffuse peripheral atrophies of low AF in EYS retinopathy, vessels and macular pigment in RP1L1 retinopathy, and vessels in the normal category.

    Integrative assessment of learning performance

    Integrative assessment of learning performance was conducted based on each diagnostic category by using the sensitivity and specificity of the prediction results from the two clinical imaging modalities (table 3).

    Table 3

    Integrative assessment of overall prediction results of fundus photography and fundus autofluorescence (FAF) imaging

    High specificity (>90%) was identified for both fundus photography and FAF imaging in ABCA4 retinopathy, EYS retinopathy and RP1L1 retinopathy. High sensitivity (>90%) was detected for FAF imaging in ABCA4 retinopathy, for fundus photography in RP1L1 retinopathy and for FAF imaging in the normal category. Low sensitivity (<80%) was identified for FAF imaging in EYS retinopathy, and considerably low sensitivity (<70%) was detected for FAF imaging in RP1L1 retinopathy.

    The AUC for the overall prediction results was 0.708 for fundus photography and 0.703 for FAF imaging (online supplemental figure 3).

    Discussion

    The performance of a deep learning method for establishing AI-guided automatic diagnosis systems based on fundus photography and FAF imaging was evaluated in a large Japanese cohort of patients with IRD. Automatic prediction of the classification for each major causative gene was assessed on 417 clinical images, with a mean overall sensitivity and specificity of 85.0% and 95.3%, respectively.

    Potential efficacy of AI-guided prediction

    The potential efficacy of this automatic screening/diagnostic system for major IRD genes based on fundus photography and FAF imaging was illustrated. Fundus photographs and FAF images demonstrated high specificity (>90%) in the identification of ABCA4, EYS and RP1L1 retinopathies. The high sensitivity of fundus photographs for RP1L1 retinopathy and FAF images for ABCA4 retinopathy was also identified. These favourable results support the utility of the AI-guided diagnostic API in IRD.

    This is the first report using deep learning technology for fundus photographs and FAF images in molecularly proven retinal orphan disorders. Since IRD exhibits strong gene-characteristic retinal features that are not strongly influenced by environmental factors, these cases present an ideal application of AI to assist clinical and genetic diagnoses. Recently, Miere et al 21 reported the high gross accuracy (0.95) of AI-guided prediction of four phenotypic categories based on FAF images: RP, Best disease, Stargardt disease and normal. Together with this previous result, clinical imaging appears to be a powerful means of automatically predicting the phenotypic category or the major causative genes in IRD. The three causative genes selected based on disease prevalence in the current study typically show different clinical presentations, described as Stargardt disease, RP and occult macular dystrophy. This selection likely conferred an advantage in predicting the causative gene. However, the phenotypic spectra of these three genes overlap, and it is challenging to establish perfect genotype–phenotype correlations in some cases, which supports the rationale of categorisation-based deep learning to predict the causative genes in these heterogeneous disease groups.

    In addition, deducing the presence of IRD is difficult due to the very limited experience of general ophthalmologists; hence, AI-guided assessment could assist in the accurate identification of patients. Furthermore, given the high specificity in distinguishing between the normal category (96.7%) and the other abnormal categories, fundus photography can be considered a first screening method.

    The API established in the current study is able to provide a real-time AI-guided diagnosis accessible from anywhere, which promises to improve the quality of medical care by facilitating early diagnosis, reducing the cost of referrals, and preventing unnecessary clinical and genetic testing.

    Interpretation of AI-guided diagnosis

    ABCA4 retinopathy

    The high specificity of both fundus photography and FAF imaging and the high sensitivity of FAF imaging were identified. These prediction results are in accord with two previously reported clinical features of ABCA4 retinopathy: (1) the preceding abnormalities can only be detected by FAF imaging22 23; and (2) the fundus appearance in the end stage mimics RP.24–26 Subtle changes in FAF are crucial for early detection of the abnormal metabolic activity caused by the failure of ABCA4 protein function.23 27 Morphological changes in the photoreceptor often occur in the central retina in the early stage.28 Cases of ABCA4 retinopathy in the end stage show drastic depigmented or pigmented changes across the entire fundus, resembling the changes during the end stage of EYS retinopathy.24–26 29–31

    EYS retinopathy

    The high specificity of both fundus photography and FAF imaging and the low sensitivity of FAF imaging were identified. The prediction results for EYS retinopathy based on fundus photographs in the current study were compatible with previously reported features,19 32 while the prediction results based on FAF were not. The characteristic feature of peripheral retinal atrophy with vessel attenuation and pigmentation is frequently detected by fundus photographs.19 32 Some of the characteristic features were beyond the scope of the images, which were cropped in the process of standardisation; this limitation may have led to the low sensitivity, with some diagnostic features missing from the cropped images. In the current study, the cropping process was mandatory. Expanding the use of wide-field fundus/FAF recording could improve the sensitivity and specificity of the method for detecting abnormalities located in the peripheral retina.

    RP1L1 retinopathy

    The high specificity of both fundus photography and FAF imaging and the considerably low sensitivity of FAF imaging were identified. The prediction results of RP1L1 retinopathy based on fundus photographs were more accurate than expected, given that the absence of fundus abnormality is one of the diagnostic criteria of autosomal dominant RP1L1 retinopathy.33 In general, it is assumed to be challenging to distinguish patients with RP1L1 retinopathy from normal subjects based on fundus photographs. The prediction results for RP1L1 retinopathy based on FAF are consistent with previously reported features34 35: weak or subtle abnormalities are found in approximately half of the patients. In a previous study, 9 of 23 patients (39.1%) with occult macular dystrophy showed FAF abnormalities, while 14 patients (60.9%) exhibited no FAF abnormalities.20 One unique feature of RP1L1 retinopathy is the presence of intact retinal pigment epithelium, which is a potential reason why patients with RP1L1 retinopathy do not present marked FAF abnormalities even with photoreceptor dysfunction at the macula.34 36

    Intriguingly, saliency maps demonstrated characteristic findings of macular vessels on fundus photographs and macular and arcade vessels on FAF images in the current study. These striking findings imply potential characteristic features of RP1L1 retinopathy on retinal images that have not been reported, although confounding factors such as age and hypertension could not be completely excluded. Further comprehensive analyses with annotation data may elucidate the novel features of RP1L1 retinopathy (figure 2).

    Limitations

    The current study has several limitations. The cohort size was relatively small in both the affected groups and the control group. Patients with IRD can be subcategorised into more detailed subsets based on characteristic features by identifying the presence/absence of macular/mid-peripheral/peripheral atrophies, flecks, and functional deterioration of the macula and/or the generalised rod/cone system. As an alternative, subdivision into typical or atypical could be useful for clinicians to interpret the quality of machine learning. AI-guided prediction of causative genes based on these phenotypic subcategories/subdivisions would improve the quality of diagnosis by ophthalmic genetic experts and provide opportunities to discover novel genotype–phenotype associations/correlations through the study processes. However, due to the limited cohort size, phenotypic subcategory/subdivision-based machine learning could not be performed in the current study. In addition, phenotypic subcategorisation/subdivision is challenging to perform before application of the API for non-specialists, who are the main target of the diagnostic system established in the current study. Thus, expanding the cohort size by including East Asian patients could allow us to further design/conduct deep learning of detailed phenotypic data for ophthalmic genetic experts to delineate the disease mechanism.

    The severity of disease was not classified in the current study. Therefore, a selection bias may have existed. More extensive cohort studies with standardised data including demographics (family history, inheritance, symptoms and so on) and ocular conditions (visual acuity, visual field and so on) from subjects of various ethnicities could potentially expand the utility of our approach.

    There are over 300 genes that can cause IRD; therefore, the four-class classifier developed in the current study cannot be usefully applied in practice to predict the status of a specific gene from fundus photographs and FAF images in patients with other retinal disease-associated gene diagnoses or a novel gene diagnosis. A more clinically applicable approach may be to train several one-versus-rest classifiers to distinguish one gene from all other genes.
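
    A minimal sketch of that one-versus-rest formulation is shown below with scikit-learn, using random features in place of CNN embeddings; everything here is illustrative. Each binary head asks "this gene, or anything else?", so an image from an unmodelled gene can score low on every head instead of being forced into one of four classes.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))    # stand-in for penultimate-layer CNN features
    y = rng.integers(0, 4, size=200)  # 0=ABCA4, 1=EYS, 2=RP1L1, 3=normal

    # One binary classifier per gene category, trained gene-versus-rest.
    ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

    # Each head's own positive-class probability (not normalised across heads);
    # uniformly low scores can flag "none of the modelled genes".
    per_head = [est.predict_proba(X[:1])[0, 1] for est in ovr.estimators_]
    print(dict(zip(["ABCA4", "EYS", "RP1L1", "normal"], np.round(per_head, 3))))
    ```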

    There is also a current interpretability limitation to AI classifiers that use deep learning. The accuracy of training was variable among the training groups; thus, improving training accuracy in any training group could potentially improve the quality of the API in future studies. Moreover, future studies comparing the quality of the machine diagnosis with that of diagnoses by various human experts (eg, ophthalmic genetic experts, retinal specialists, general ophthalmologists, ophthalmology residents, ophthalmic technicians, clinical geneticists and others) could establish the real-world performance of AI-guided machine diagnosis.

    Conclusion

    This study illustrated a novel application of deep neural networks in the prediction of the major causative genes of IRD (together accounting for approximately 30% of Japanese cases) from fundus photographs and FAF images, with high accuracy for fundus photographs and relatively high accuracy for FAF. These achievements will extensively promote the improvement of medical care quality, reduce costs, enrich the education of non-specialists and support the application of personalised medicine.

    Data availability statement

    Data are available upon reasonable request. Raw data were generated from the Japan Eye Genetics Consortium database and obtained with permission from the Japan Eye Genetics Consortium at https://niso.kankakuki.go.jp/opkarte/. Derived data supporting the findings of this study are available from the corresponding author on request.

    Ethics statements

    Ethics approval

    The protocol of this study adhered to the tenets of the Declaration of Helsinki and was approved by the local ethics committee, the National Hospital Organization Tokyo Medical Center (H30-1218002; R20-051).

    Acknowledgments

    English proofreading was conducted by Springer Nature Author Services. We deeply thank Professor Hong Sheng Chiong, MB BCh BAO, PGDipOphthBS, CEO of oDocs Eye Care, Otago, New Zealand, who provided support in the application of the software (MedicMind).


    Footnotes

    • Twitter @npontikos

    • Collaborators The Japan Eye Genetics Study (JEGC) Group: Chair’s Office: National Institute of Sensory Organs, Takeshi Iwata, Kazushige Tsunoda, Kaoru Fujinami, Shinji Ueno, Kazuki Kuniyoshi, Takaaki Hayashi, Mineo Kondo, Atsushi Mizota, Nobuhisa Naoi, Kei Shinoda, Shuhei Kameya, Hiroyuki Kondo, Taro Kominami, Hiroko Terasaki, Hiroyuki Sakuramoto, Satoshi Katagiri, Kei Mizobuchi, Natsuko Nakamura, Go Mawatari, Toshihide Kurihara, Kazuo Tsubota, Yozo Miyake, Kazutoshi Yoshitake, Toshihide Nishimura, Yoshihide Hayashizaki, Nobuhiro Shimozawa, Masayuki Horiguchi, Shuichi Yamamoto, Manami Kuze, Shigeki Machida, Yoshiaki Shimada, Makoto Nakamura, Takashi Fujikado, Yoshihiro Hotta, Masayo Takahashi, Kiyofumi Mochizuki, Akira Murakami, Hiroyuki Kondo, Susumu Ishida, Mitsuru Nakazawa, Tetsuhisa Hatase, Tatsuo Matsunaga, Akiko Maeda, Kosuke Noda, Atsuhiro Tanikawa, Syuji Yamamoto, Hiroyuki Yamamoto, Makoto Araie, Makoto Aihara, Toru Nakazawa, Tetsuju Sekiryu, Kenji Kashiwagi, Kenjiro Kosaki, Carninci Piero, Takeo Fukuchi, Atsushi Hayashi, Katsuhiro Hosono, Keisuke Mori, Kouji Tanaka, Koichi Furuya, Keiichirou Suzuki, Ryo Kohata, Yasuo Yanagi, Yuriko Minegishi, Daisuke Iejima, Akiko Suga, Brian P Rossmiller, Yang Pan, Tomoko Oshima, Mao Nakayama, Megumi Yamamoto, Naoko Minematsu, Daisuke Mori, Yusuke Kijima, Kentaro Kurata, Norihiro Yamada, Masayoshi Itoh, Hideya Kawaji, Yasuhiro Murakawa, Ryo Ando, Wataru Saito, Yusuke Murakami, Hiroaki Miyata, Lizhu Yang, Yu Fujinami-Yokokawa, Xiao Liu, Gavin Arno, Nikolas Pontikos, Mihori Kita, Hiroshi Hirose, Katsuyuki Sakai, Yasumasa Otori, Kazuki Yamazawa, Satomi Inoue, Takayuki Kinoshita.

    • Contributors KF and NP have full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Concept and design: YF-Y, NP, HM, KF. Acquisition, analysis or interpretation of data: all authors. Drafting of the manuscript: YF-Y, KF. Critical revision of the manuscript for important intellectual content: all authors. Statistical analysis: YF-Y, NP. Obtained funding: KF. Administrative, technical or material support: YF-Y, NP, HM, KF. Supervision: HM, KF.

    • Funding The Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan is supported by grants from Astellas Pharma (NCT03281005), outside the submitted work. KF is supported by a Grant-in-Aid for Young Scientists (A) from the Ministry of Education, Culture, Sports, Science and Technology, Japan (16H06269); a Grant-in-Aid for Scientists to support international collaborative studies from the Ministry of Education, Culture, Sports, Science and Technology, Japan (16KK01930002); a grant from the National Hospital Organization Network Research Fund (H30-NHO-Sensory Organs-03); a grant from the Japan Agency for Medical Research and Development (18992608); a grant from the Ministry of Health, Labour and Welfare (18ek0109355h0001); a grant from the Foundation Fighting Blindness Alan Laties Career Development Program (CF-CL-0416-0696-UCL); a Health Labour Sciences Research Grant from the Ministry of Health, Labour and Welfare (201711107A); and grants from the Great Britain Sasakawa Foundation Butterfield Awards.

    • Disclaimer The funding sources had no role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.

    • Competing interests Individual investigators who participate in the sponsored project(s) are not directly compensated by the sponsor but may receive salary or other support from the institution to support their effort on the project(s). KF is a paid consultant of Astellas Pharma, Kubota Pharmaceutical Holdings, Acucela, Janssen Pharma, Novartis Pharma and NightstaRx. KF reports personal fees from Astellas Pharma, personal fees from Kubota Pharmaceutical Holdings, personal fees from Acucela, personal fees from NightstaRx, personal fees from Santen, personal fees from Foundation Fighting Blindness, personal fees from Foundation Fighting Blindness Clinical Research Institute, personal fees from Japanese Ophthalmological Society, and personal fees from Japan Retinitis Pigmentosa Society.

    • Provenance and peer review Not commissioned; externally peer reviewed.

    • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
