
Using an open-source tablet perimeter (Eyecatcher) as a rapid triage measure for glaucoma clinic waiting areas
  1. Pete R Jones1,
  2. Dan Lindfield2,
  3. David P Crabb1
  1. Division of Optometry and Visual Sciences, City University of London, London, UK
  2. Glaucoma Services, Royal Surrey County Hospital NHS Foundation Trust, Guildford, Surrey, UK

  Correspondence to Dr Pete R Jones, Division of Optometry and Visual Sciences, City University of London, London EC1V 0HB, UK; peter.jones@city.ac.uk

Abstract

Background Glaucoma services are under unprecedented strain. The UK Healthcare Safety Investigation Branch recently called for new ways to identify glaucoma patients most at risk of developing sight loss, and to filter out false-positive referrals. Here, we evaluate the feasibility of one such technology, Eyecatcher: a free, tablet-based ‘triage’ perimeter, designed to be used unsupervised in clinic waiting areas. Eyecatcher does not require a button or headrest: patients are simply required to look at fixed-luminance dots as they appear.

Methods Seventy-seven people were tested twice using Eyecatcher (one eye only) while waiting for a routine appointment in a UK glaucoma clinic. The sample included individuals with an established diagnosis of glaucoma, and false-positive new referrals (no visual field or optic nerve abnormalities). No attempts were made to control the testing environment. Patients wore their own glasses and received minimal task instruction.

Results Eyecatcher was fast (median: 2.5 min), produced results in good agreement with standard automated perimetry (SAP), and was rated as more enjoyable, less tiring and easier to perform than SAP (all p<0.001). It exhibited good separation (area under receiver operating characteristic=0.97) between eyes with advanced field loss (mean deviation (MD) < −6 dB) and those within normal limits (MD > −2 dB). It was also able to flag two-thirds of false-positive referrals as functionally normal. However, eight people (10%) failed to complete the test twice, and reasons for this limitation are discussed.

Conclusions Tablet-based eye-movement perimetry could potentially provide a pragmatic way of triaging busy glaucoma clinics (ie, flagging high-risk patients and possible false-positive referrals).

  • diagnostic tests/Investigation
  • field of vision
  • glaucoma
  • psychophysics


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Introduction

British glaucoma services are under strain from an ageing population and more cautious referral policies.1 There is an increasing backlog of appointments,2 and around 20 patients a month suffer severe avoidable sight loss as a result of appointment delays.3 A recent report by the UK Healthcare Safety Investigation Branch (HSIB) found that the lack of timely monitoring is putting patient safety at risk and recommended ‘better, smarter ways of working … to maximise the current capacity’.4 The HSIB report highlighted, in particular, the need to develop new ways to: (1) identify and prioritise patients most at risk of developing sight loss and (2) filter out false-positive referrals (~40% of new referrals, in the UK5 6 and mainland Europe).7 8

The great majority of glaucoma patients likely to experience statutory blindness within their lifetime already have marked visual field (VF) loss at first presentation to a glaucoma clinic9 (ie, a mean deviation (MD) worse than −6 dB in at least one eye). Conversely, a healthy VF is a key indicator that a patient has been referred in error. A simple VF assessment—conducted immediately as the patient enters the clinic, or as they sit in the waiting room—could therefore be one possible step towards achieving HSIB’s goals of prioritising high-risk patients and flagging-up likely false-positive referrals.

Standard automated perimetry (SAP) is inappropriate for this ‘rapid triage’ role, as it requires specialist equipment and a trained technician. It is itself a key bottleneck in patient flow, and it is not uncommon for patients to wait several hours for an SAP examination. Our vision is therefore not to replace SAP, but to complement it with a simpler ‘triage’ assessment: one that is easy, inexpensive, and could be used directly in glaucoma-clinic waiting rooms.

A VF triage assessment would not be a like-for-like replacement for SAP. The examination might be simpler and less detailed: with fewer test locations, and/or fixed-luminance stimuli. Instead, it should focus on identifying individuals with no measurable VF loss, and highlighting those individuals most at risk of developing sight loss within their lifetime (eg, younger adults with MD worse than −6 dB in at least one eye).9 Crucially, a triage exam must not add to the existing burden faced by patients and clinicians. In practical terms, this means a test that—unlike SAP10—is extremely easy to administer, does not require bulky or expensive equipment, and does not require a trained operator or dedicated space in which to run.

We recently proposed one such test11: Eyecatcher, an open-source eye-movement perimeter that combines the portability of a tablet computer,12–18 with the ease and comfort afforded by modern eye-tracking and head-tracking technologies.19–23 In brief, the patient sits in front of an ordinary tablet screen, and is asked simply to look at anything they see appear (a largely reflexive response, present from birth).24 Unlike conventional SAP, there is no response button or central fixation target. Instead, the eye-tracker determines where on the screen to present each stimulus in order to stimulate particular retinal locations (ie, relative to the current point of fixation). The eye-tracker then analyses any eye-movements to determine whether the user saw (looked towards) the stimulus. The use of head-tracking also removes the need for head restraints, since the size and location of the stimulus are dynamically scaled to compensate for any changes in viewing distance. In short, Eyecatcher removes headrests, fixation spots and response buttons from perimetry, and as a result delivers a more portable, intuitive and comfortable test, and one which can be run autonomously, since, unlike SAP, it does not require an operator to explain the test or monitor fixation.

We have shown previously that Eyecatcher provides VF data concordant with SAP when applied to a small, self-selecting sample of research participants.12 Here, we examined the feasibility of applying it in a busy glaucoma clinic; and in particular, whether it can be used as a rapid triage test to identify high-risk individuals (MD < −6 dB), and false-positive referrals (no VF or optic nerve abnormalities).

Methods

Participants

Participants were 77 adults, sampled opportunistically from individuals attending routine appointments at the glaucoma clinic of Royal Surrey County Hospital: a secondary care centre in Southeast England. No attempt was made to select or filter participants, and the only inclusion requirement was the capacity to provide written informed consent. The cohort included both returning patients with an established diagnosis, and 11 new referrals (table 1).

Table 1

Breakdown of diagnoses for the full cohort (n=77, including new referrals), and for the subset of individuals who were new referrals to the clinic (n=11)

Eyecatcher

The version of Eyecatcher (V2.0) used in the present study is an updated version of that described previously.12 In brief, participants sat approximately 55 cm in front of a Windows Surface Pro 4 tablet computer (Microsoft, Redmond, Washington, USA), and were asked simply to ‘look at anything you see’ (figure 1A). On each trial, an inexpensive (~£100) clip-on eye-tracker (Tobii EyeX; Tobii Technology, Stockholm, Sweden) was used to position fixed-luminance dots of light relative to the current estimated point of fixation (no central fixation marker), and to determine whether the participant looked towards the target (figure 1D). There was no response button. Viewing distance was not strictly controlled, but was estimated in real-time by the eye-tracker, and this estimate was used to scale the size and location of the stimulus appropriately, prior to each presentation. Patients were not supervised during testing, although the experimenter typically remained nearby (performing paperwork).
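
The per-trial logic can be summarised as follows (a simplified, hypothetical sketch in Python; the published Eyecatcher source, linked in the following paragraph, differs in detail, and the `tracker`/`screen` interfaces, hit radius and response window used here are illustrative assumptions): a dot is drawn at a fixed offset from the current fixation estimate, and the subsequent gaze trace is checked for an orienting eye movement towards it.

```python
import math
import time

# Simplified sketch of one Eyecatcher-style trial (NOT the published implementation).
# `tracker` and `screen` are hypothetical interfaces: the tracker reports gaze
# position in degrees of visual angle, and the screen draws a dot at a position
# and size given in degrees (converting to pixels using the tracked viewing distance).

HIT_RADIUS_DEG = 3.0      # illustrative: gaze must land this close to the target
RESPONSE_WINDOW_S = 1.5   # illustrative: time allowed for an orienting eye movement

def run_trial(tracker, screen, target_offset_deg, target_size_deg):
    """Present one fixed-luminance dot at a given offset from current fixation,
    and report whether the participant looked towards it (hit) or not (miss)."""
    fx, fy = tracker.current_gaze_deg()   # current point of fixation (no fixation marker)

    # Position the stimulus relative to fixation, so the intended retinal location
    # is stimulated regardless of where the person happens to be looking.
    tx, ty = fx + target_offset_deg[0], fy + target_offset_deg[1]
    screen.draw_dot_deg((tx, ty), target_size_deg)

    # Monitor gaze for a saccade towards the target within the response window.
    t0 = time.time()
    while time.time() - t0 < RESPONSE_WINDOW_S:
        gx, gy = tracker.current_gaze_deg()
        if math.dist((gx, gy), (tx, ty)) < HIT_RADIUS_DEG:
            screen.clear()
            return True   # 'hit': target looked at
    screen.clear()
    return False          # 'miss': no orienting response detected
```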

Figure 1

Eyecatcher. (A) Apparatus and stimuli. The tablet screen measured 26 × 17.3 cm (26.6° × 17.9° when viewed at 55 cm). The eye-tracker is magnetically attached to the base of the tablet. (B) Test grid, in degrees visual angle. (C) Example output. Green areas indicate hits (target looked at). Red areas indicate misses (target not looked at). (D) Example test sequence. On each trial a single, fixed-intensity light spot was presented, and the computer determined whether or not an eye-movement was made towards it (see online supplementary text for technical details). Note that stimuli were presented relative to the current point of fixation, and so could appear at any screen location throughout the course of the test. See online supplementary video S1 for a recording of an example test sequence.

Stimuli were Goldmann III targets, 6 dB more intense than the expected threshold of a normally sighted adult at each grid location25 (NB: this value was not adjusted for patient age, though such adjustments could be integrated into the test algorithm in future). The −6 dB stimulus intensity was chosen since it has been estimated that 90% of patients at risk of statutory blindness within their lifetime have an MD worse than −6 dB at presentation.9 For other clinical applications (eg, case finding, or home monitoring) a different stimulus intensity may be more appropriate. Further technical details regarding the test and stimuli are given in online supplementary text. The complete source code for Eyecatcher is available online at https://github.com/petejonze/Eyecatcher, and is free for non-commercial use.
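
For illustration only, the sketch below shows the two conversions implied by this design, using the standard perimetric decibel convention (dB = 10 log10(Lmax/ΔL), with Lmax = 10,000 asb on the HFA) and simple trigonometry for angular size. The specific luminance values, and the assumption that the HFA convention applies, are ours and are not taken from the Eyecatcher source.

```python
import math

def db_to_increment(db_value, l_max=3183.0):
    """Perimetric dB to luminance increment (cd/m^2), using the convention
    dB = 10 * log10(l_max / delta_L). l_max = 3183 cd/m^2 (10,000 asb) is the
    HFA convention; actual tablet luminances will differ."""
    return l_max / (10 ** (db_value / 10.0))

def deg_to_cm(size_deg, viewing_distance_cm):
    """On-screen extent (cm) subtending `size_deg` at the given viewing distance."""
    return 2 * viewing_distance_cm * math.tan(math.radians(size_deg) / 2)

# A stimulus '6 dB more intense' than a 30 dB normal threshold is shown at 24 dB;
# its luminance increment is 10^(6/10), roughly 4 times the threshold increment.
print(db_to_increment(24) / db_to_increment(30))   # ~3.98

# A Goldmann III target (0.43 deg diameter) rescaled for two viewing distances.
print(deg_to_cm(0.43, 55))   # ~0.41 cm
print(deg_to_cm(0.43, 45))   # ~0.34 cm
```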

The output from Eyecatcher is a retinotopic map, giving the probability of seeing the target at 22 paracentral locations (figure 1B). This map included 11 of the most informative points from the 24–2 grid, as identified by Wang and Henson.26 These 22 probability-of-seeing values were interpolated to provide a continuous probability map (figure 1C), ranging from bright green (‘always seen’) to bright red (‘never seen’ – VF loss). A summary measure of performance was computed by taking the mean of the probability-of-seeing values across all 22 test locations. The resultant metric, ‘mean hit rate’, is a scalar value between 0 and 1 that reflects the amount of ‘greenness’ in the VF plot. It is potentially comparable to MD: the summary measure of VF loss from the Humphrey Field Analyzer (HFA; Carl Zeiss Meditec, California, USA).
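
As an illustration (using synthetic stand-in values rather than real patient data), the mean hit rate and the interpolated probability map could be computed along the following lines:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic stand-in data (illustration only): 22 test locations in degrees of
# visual angle, and an estimated probability of seeing at each location
# (0 = never seen, 1 = always seen). The real test grid is shown in figure 1B.
locations = rng.uniform([-13.0, -8.0], [13.0, 8.0], size=(22, 2))
p_seen = rng.uniform(0.0, 1.0, size=22)

# Summary measure: 'mean hit rate' is the mean probability of seeing across all
# 22 locations -- a scalar between 0 and 1 reflecting the 'greenness' of the plot.
mean_hit_rate = p_seen.mean()

# Continuous probability map (cf. figure 1C): interpolate the 22 values onto a
# fine grid covering the tested region.
gx, gy = np.meshgrid(np.linspace(-13, 13, 200), np.linspace(-8, 8, 120))
p_map = griddata(locations, p_seen, (gx, gy), method='linear')

print(round(mean_hit_rate, 2))
```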

Procedure

Within each participant, only a single eye was tested. The test eye was randomly selected, and the fellow eye patched with a cotton pad (monocular viewing). Testing was performed twice consecutively (same eye), to assess test–retest repeatability. To reflect the fact that Eyecatcher is intended as a rapid and easy-to-administer assay, no refractive correction was provided. However, patients were asked to wear their own habitual near-vision spectacles, if available.

Testing took place in whichever space was available that would not disturb other patients (typically an office or consulting room adjacent to the main clinic waiting area). Lights were dimmed where possible, but no attempt was made to maintain a precise light level. No attempt was made to prevent patients or members of staff walking past during testing, and this occurred regularly.

Following the test, participants were given a short usability questionnaire, containing five Likert statements (eg, ‘I found the test easy to perform’). Participants answered each question twice: once for Eyecatcher, and once for conventional SAP.

All testing took place in a single session: generally while the patient waited for an SAP or optical coherence tomography (OCT) assessment, or their subsequent consultation. A minority of individuals had received a mydriatic (tropicamide) by the time they performed Eyecatcher, but most were undilated. This was not systematically recorded.

As part of their scheduled appointment, all participants underwent a full visual assessment by the local clinical team, including a monocular SAP assessment (24-2; SITA Fast) using a HFA. These data were extracted subsequently from patients' medical records.

Analysis

Data are described using non-parametric statistics (eg, medians), with 95% confidence intervals (CIs) computed using bootstrapping (n=20 000; bias-corrected and accelerated method).
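
For concreteness, an equivalent bootstrap CI can be obtained with SciPy's bootstrap routine, as sketched below with synthetic stand-in data (this is our illustration, not the analysis script used in the study):

```python
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(1)
test_durations_min = rng.normal(2.5, 0.5, size=77)   # synthetic stand-in data

# 95% CI for the median via a bias-corrected and accelerated (BCa) bootstrap
# with 20,000 resamples, mirroring the analysis described above.
res = bootstrap((test_durations_min,), np.median, n_resamples=20_000,
                confidence_level=0.95, method='BCa', random_state=rng)
print(res.confidence_interval)
```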

Results

Seventy-seven participants (33 female) were recruited, including 11 new referrals (see table 1 for breakdown). All 11 new referrals were judged by their treating physician to be false-positive referrals, with no VF or optic nerve abnormalities. No individuals were excluded from the study. In total, 78 individuals were approached, with only one declining to participate in the study (99% recruitment success). We were therefore able to obtain a relatively representative sample of clinic attendees. Median (IQR) age was 70 (59–77) years.

Completion rate

Sixty-nine patients (90%) completed Eyecatcher twice without difficulty, but eight did not. One early failure was due to a technical (software) error that was subsequently resolved. The remaining seven failures (9%) were due to the eye-tracking hardware being unable to track the eye reliably (returning no data, or data that were sporadic and imprecise). The cause of these eye-tracking failures could not be conclusively established. However, of these seven cases: five may have been due to recent ophthalmic interventions (four had recently undergone cataract surgery, one had complex pathology due to radiotherapy for cavernous meningioma). One failure was believed due to dry eyes (a symptom of an oral steroid, taken for a non-ophthalmic condition). One eye could not be tracked for reasons unknown: the only distinctive feature was pupil dilation with tropicamide with associated blurred vision. However, other dilated eyes were tracked without problem.

Accuracy (concordance with HFA)

Figure 2 shows individual data for 22 patients, including all 11 new referrals (figure 2A), and 11 randomly selected follow-up patients with established diagnoses of glaucoma (figure 2B). By inspection, it can be seen that Eyecatcher was often able to localise scotomas with reasonable spatial precision. Note, for example, the nasal step in ID12, and the inferior temporal scotoma in ID22. In some cases, however, Eyecatcher did appear to underestimate (ID19) or mislocalise (ID17) VF loss.

Figure 2

Individual VF assessments for (A) all 11 new referrals (none of whom was believed to have glaucoma), and (B) 11 randomly selected follow-up patients (all with an established diagnosis of glaucoma). In each case, the HFA grey scale is given on the left and the two corresponding Eyecatcher heatmaps are given on the right (NB: Eyecatcher was performed twice). Red markers highlight regions of the HFA where VF loss was greater than the magnitude of the Eyecatcher stimulus (–6 dB). If concordance between the two tests was perfect, then red markers in the HFA should appear as red shaded regions on the Eyecatcher heatmap. Note that new referral ID 9 was non-glaucomatous, but was a cataract patient with a generalised loss of sensitivity across the visual field (MD = −5.6 dB). MD, mean deviation; SAP, standard automated perimetry; VF, visual field.

As shown in figure 3, there was good association (Spearman correlation: r=0.78; p<0.001) between the overall summary measures from Eyecatcher (mean hit rate) and SAP (MD). Crucially, no individuals with substantial field loss were found to be visually normal by Eyecatcher (figure 3, upper-left region), although some individuals with a healthy VF did score poorly on Eyecatcher (figure 3).

Figure 3

Agreement in overall sensitivity between Eyecatcher (mean hit rate) versus SAP (HFA mean deviation (MD)). Each data point represents a single test/eye from a single patient. Each patient completed Eyecatcher twice, and the data from each run are given separately (circles for run 1, squares for run 2). The solid line shows the line of best fit (polynomial spline fit). Any data points falling in the top left region would be considered a false-negative result (good performance on Eyecatcher, despite substantial field loss). SAP, standard automated perimetry.

Sensitivity and specificity

Eyecatcher demonstrated good separation between eyes with moderate or advanced field loss on the one hand (MD < −6 dB; n=24), and eyes with a VF within normal limits on the other (MD > −2 dB; n=22), with an area under the receiver operating characteristic (AUROC) of 0.97 (95% CI 0.94 to 0.99) (see online supplementary figure S1).

In terms of identifying unnecessary (false-positive) new referrals, we took a mean hit rate of 0.7 as an arbitrary cut-off point for ‘good’ performance. Eight of 11 new referrals (all of whom were judged to be visually normal) scored above 0.7 (sensitivity: 73%), while 0% of assessments from eyes with MD < −6 dB scored above 0.7 (specificity: 100%).
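
The AUROC and the cut-off-based sensitivity/specificity reported above can be reproduced from per-assessment mean hit rates along the following lines (a sketch using scikit-learn and synthetic stand-in data; group sizes match those above, but the values are invented):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic stand-in data: mean hit rates for eyes with MD < -6 dB ('loss', n=24)
# and for eyes within normal limits, MD > -2 dB ('normal', n=22).
hit_rate_loss = rng.uniform(0.0, 0.6, size=24)
hit_rate_normal = rng.uniform(0.6, 1.0, size=22)

scores = np.concatenate([hit_rate_normal, hit_rate_loss])
is_normal = np.concatenate([np.ones(22), np.zeros(24)])   # 1 = within normal limits

# Separation between groups (a higher hit rate should indicate a normal field).
auroc = roc_auc_score(is_normal, scores)

# Triage rule: flag a mean hit rate above 0.7 as 'functionally normal'.
CUTOFF = 0.7
sensitivity = np.mean(hit_rate_normal > CUTOFF)   # healthy eyes correctly flagged
specificity = np.mean(hit_rate_loss <= CUTOFF)    # eyes with field loss correctly retained
print(round(auroc, 2), round(sensitivity, 2), round(specificity, 2))
```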

Test–retest reliability

Figure 4 shows Eyecatcher’s test–retest repeatability. The 95% coefficient of repeatability (CoR95) for mean hit rate was 0.19, or 19% of the test’s dynamic range (note that Eyecatcher measures the percentage of fixed-intensity points seen, rather than detection thresholds). For comparison, MD has been shown previously27 to have a CoR95 of ~1.4 dB (~4% of the HFA's dynamic range) at 0 dB MD, increasing to ~5.2 dB (~17% of range) at −30 dB MD. Thus, Eyecatcher was less reliable (repeatable) than conventional SAP. There was no indication of systematic learning or fatigue across the two Eyecatcher test runs.
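
The CoR95 quoted here follows the usual Bland-Altman formulation (1.96 times the SD of the test-retest differences); a minimal sketch with synthetic stand-in data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: mean hit rate on run 1 and run 2 for each participant.
run1 = rng.uniform(0.3, 1.0, size=69)
run2 = np.clip(run1 + rng.normal(0.0, 0.07, size=69), 0.0, 1.0)

diffs = run2 - run1
bias = diffs.mean()                  # mean test-retest difference
cor95 = 1.96 * diffs.std(ddof=1)     # 95% coefficient of repeatability
limits_of_agreement = (bias - cor95, bias + cor95)   # Bland-Altman 95% limits
print(round(cor95, 2), [round(x, 2) for x in limits_of_agreement])
```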

Figure 4

Bland-Altman plot, showing test–retest repeatability for Eyecatcher (mean hit rate). Grey shaded regions show 95% CIs for the mean. Dashed red lines indicate the 95% limits of agreement. COR, coefficient of repeatability.

Test duration

Median duration (95% CI) was 2.5 (2.4 to 2.7) min for Eyecatcher, and 3.5 (3.3 to 4.1) min for SAP (SITA Fast). This difference was significant (pairwise t-test: p<0.001), indicating that Eyecatcher was quicker. However, SAP tested more locations and measured a threshold at each location, so SAP remains the more efficient test in terms of information gathered per unit time. Note that these times do not include additional overheads, such as the time taken to seat/position the participant, explain the test, or apply refractive correction; all of which were minimal for Eyecatcher, but can be substantial for SAP.

Usability

Participants rated Eyecatcher as more enjoyable, easier to perform, less tiring and less hard to concentrate on than SAP (4 pairwise t-tests: all p<0.001). There was no difference in task comprehension (p=0.419), which was near ceiling for both tests (see online supplementary figure S2). There was no significant difference in patients’ perceptions of Eyecatcher between new referrals and follow-up patients (five between-subject t-tests: all p>0.05).

Discussion

This study considered the feasibility of using a portable, automated, eye-movement perimeter (Eyecatcher) to perform a rapid assay of VF loss in a real-world clinical setting. In particular, we examined whether Eyecatcher could be used as an initial ‘triage test’, to identify high-risk individuals (eyes with substantial VF loss: MD < −6 dB), and likely false-positive referrals (no VF or optic nerve abnormalities).

Eyecatcher demonstrated good separation (AUROC=0.97) between eyes with moderate-to-advanced VF loss (MD < −6 dB) versus those within normal limits (MD > −2 dB). This is encouraging, as the vast majority of individuals expected to go blind within their lifetime already exhibit moderate or worse VF loss at presentation.9 Eyecatcher might be used to flag such individuals as ‘high risk’. In terms of false-positive new referrals, 68% were correctly identified as having no substantial VF loss, while crucially 0% of patients with established VF loss (MD < −6 dB) were incorrectly flagged as healthy. In practice, this might translate to two-thirds of new referrals being granted an expedited discharge, while the remaining one-in-three patients would continue to wait for SAP as before. Taken together, the results suggest that Eyecatcher—though still in early development—shows promise as a way of prioritising patients and filtering out false-positive referrals, as called for by the HSIB (see the Introduction section).

Crucially, Eyecatcher requires minimal clinical resources, being a fully automated, unsupervised procedure that does not require expensive, specialist equipment or a dedicated testing space (eg, no precise control of lighting, with patients wearing their own glasses as available). Patients also exhibited no difficulties comprehending what to do, despite minimal instruction (‘look at anything you see’). The present data would likely have been cleaner and more impressive if we had used ‘research-grade’ protocols and equipment. However, such a test would be of little practical value as a real-world tool. As it was, it is possible to imagine rows of autonomous Eyecatcher-type devices installed in waiting rooms, or at the entrance to the clinic—potentially using the same or similar hardware as current self-service check-in systems.

Eyecatcher was fast (~2.5 min, including eye-tracker calibration), but the HFA (SITA Fast) was only slightly slower (~3.5 min), despite testing more locations and making more detailed threshold measurements. Furthermore, new HFA algorithms may be even faster than Eyecatcher.28 The goal was not, however, to create a maximally fast test, but one that is easy, intuitive, and fast enough to be run unsupervised. This emphasis on ‘human factors’ was also reflected in the fact that patients rated Eyecatcher easier and less tiring than conventional, button-press perimetry. This stands in stark contrast to SAP, where a technician must be continuously present to explain the test and monitor performance, and where even well-practised patients can find the test challenging10 or confusing.

Limitations

This study was intended only as an initial feasibility assessment. It should not be taken as a formal evaluation of diagnostic accuracy, which would require a standardised protocol,29 and a much larger, multicentre, prospective sample. A more comprehensive evaluation would also consider economic utility, and might examine test performance with different target intensities (fixed here at −6 dB). A dimmer target might, for example, be beneficial if attempting to detect very early signs of glaucoma.

Regarding Eyecatcher itself, the test is limited in three main ways. First, seven patients (9%) could not complete the test due to the hardware being unable to track their eyes reliably. In five cases the difficulties were likely caused by recent ophthalmic interventions (eg, cataract surgery). Such patients will be ‘in the system’ already and are not the sorts of new referrals that a rapid triage test such as Eyecatcher would be primarily targeted at. In the other two cases, however, the cause of the problem was either unknown (n=1), or appeared to be due to a side effect of a common medication (dry eyes; n=1). These failures are concerning, but it is hoped that the reliability of low-cost eye-tracking technologies will improve in time. In the meantime, such individuals could simply continue to perform SAP (as they do currently), or could perform a button-press version of Eyecatcher (see online supplementary text).

Second, since Eyecatcher requires an eye-movement response, it is unable to test central vision (the innermost test locations were at ±3° horizontally and ±6° vertically). This is unfortunate, since central vision is increasingly thought to be affected in early glaucoma.30 More precise eye-tracking, or an alternative response measure, would be required if wanting to assess more central VF locations in future.

Third, when it came to identifying false-positive referrals, Eyecatcher exhibited high specificity (identifying 100% of eyes with MD < −6 dB), but limited sensitivity (only 68% of false-positive referrals were correctly identified as having no measurable field loss). This asymmetry was partially by design. In triage, the cost of misidentifying a diseased eye as healthy (whereafter a new patient might be wrongly discharged) is far greater than the cost of misidentifying a healthy eye as diseased (whereafter the patient would simply continue to wait for a more detailed assessment). Eyecatcher, therefore, required multiple negative responses to register a location as ‘missed’, while a single positive response was sufficient to classify a location as ‘seen’ (see online supplementary text). It might be possible to improve sensitivity in future through improvements in test design or via increased test duration. However, the immediate practical corollary is that Eyecatcher, in its current form, shows promise as a triage measure but would make for a poor general screening device (ie, where both high sensitivity and high specificity are required).
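
By way of illustration, a per-location decision rule of this asymmetric kind might look as follows (an idealised sketch; the exact rule and trial counts used by Eyecatcher are given in the online supplementary text and source code):

```python
def classify_location(responses, misses_required=2):
    """Classify one test location from a sequence of trial outcomes
    (True = looked towards the target, False = did not).

    A single positive response is enough to call the location 'seen', whereas
    it takes `misses_required` negative responses to call it 'missed';
    otherwise the location remains 'undetermined' (more trials needed)."""
    if any(responses):
        return 'seen'
    if responses.count(False) >= misses_required:
        return 'missed'
    return 'undetermined'

print(classify_location([False, True]))    # 'seen'
print(classify_location([False, False]))   # 'missed'
print(classify_location([False]))          # 'undetermined'
```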

Further possible applications and future work

Eyecatcher was intended as a rapid triage measure for use in clinics. Given its portability and ease of use, however, Eyecatcher might also be useful in situations that require VF testing outside of traditional eye clinics (eg, home-monitoring, domiciliary services, or case finding in developing rural communities). It may also be useful for performing VF assessments in individuals with limited physical or cognitive abilities (eg, infants or stroke patients). For people interested in adapting or developing Eyecatcher further, we have made all of the source code freely available online (see the Methods section).

Data availability statement

Data are available in a public, open access repository. The complete source code for Eyecatcher is available online at https://github.com/petejonze/Eyecatcher, and is free for non-commercial use.

Ethics statements

Ethics approval

This study was approved by the National Health Service Health Research Authority (IRAS ID: #230440) and was conducted in accordance with the Declaration of Helsinki.

References

Footnotes

  • Twitter @petejonze, @eyesurgeondan, @crabblab

  • Presented at: Elements of this work were presented at ARVO 2019.

  • Contributors PRJ, DL and DPC conceived and designed the study. PRJ developed the test materials and collected the data. PRJ analysed the data and wrote the manuscript. All the authors contributed towards finalising the draft.

  • Funding This study was funded by a Fight for Sight (UK) project grant (#1854/1855).

  • Disclaimer The funding organisation had no role in the design or conduct of this research.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
