When gold standards change: time to move on from Goldmann tonometry?
  1. Gus Gazzard (1,2)
  2. Hari Jayaram (2,3)
  3. Ana M Roldan (4)
  4. David S Friedman (5)

  Affiliations:
  1. Glaucoma Service, Moorfields Eye Hospital, London, UK
  2. University College London Institute of Ophthalmology, London, UK
  3. Glaucoma Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
  4. Glaucoma Service, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
  5. Ophthalmology, Harvard University, Cambridge, Massachusetts, USA

  Correspondence to David S Friedman, Ophthalmology, Harvard University, Cambridge, MA, USA; David_Friedman@MEEI.HARVARD.EDU


The English ophthalmologist Richard Banister was one of the first to report palpable hardness of the normal-appearing eye in 1622.1 Tonometry is an essential measurement in the assessment of eye health and a key component of glaucoma diagnosis and treatment, with intraocular pressure (IOP) remaining the only modifiable risk factor for glaucoma.2 Goldmann applanation tonometry (GAT) is the currently accepted ‘gold standard’ and approximates IOP by measuring the force needed to flatten a fixed area at the corneal apex.3 4 To do so, GAT makes important assumptions about corneal thickness and behaviour, assumptions that are not met in a significant proportion of patients.5

GAT has been used for nearly 70 years and is considered the reference standard4 for IOP measurement largely because nearly all clinical trial protocols have relied on it. The technique is deeply embedded in clinical practice, and a certain amount of inertia has prevented clinicians from shifting to newer, possibly better, technologies. This resistance is perhaps analogous to the slow adoption of the superior logarithm of the minimum angle of resolution (logMAR) measure of visual acuity, even after the limitations of Snellen charts were well established.6 The relatively low cost of GAT also contributes to its ongoing appeal.

Yet GAT has significant limitations that make it a far from ideal reference standard. First, GAT results are influenced by corneal properties: it underestimates manometric IOP in thin corneas, overestimates it in thick corneas5 7 and varies unpredictably with difficult-to-measure properties such as corneal stiffness. Second, GAT requires topical anaesthesia, which, depending on local regulations, limits the personnel able to carry out the test. Third, GAT requires a slit lamp (Perkins tonometry can be performed without one, but it requires purchasing a separate device and can be difficult to perform). Fourth, GAT is subjective, and there is no quality metric to alert the physician to a poor measurement. Finally, even when measured by qualified staff on the same person at more or less the same time, GAT results vary to a degree that can be clinically significant: the 95% repeatability coefficient (the range within which 95 of 100 repeat readings will fall) is ±2 mmHg.8 Other important considerations include the need to train personnel to perform the examination and the ongoing need for calibration of the tonometer, which is often omitted.9 10
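To make the ±2 mmHg figure concrete, the short sketch below shows how a 95% repeatability coefficient is conventionally estimated from paired repeat readings (the Bland-Altman approach: 1.96 × √2 × within-subject standard deviation). The readings and variable names are hypothetical and are included only to illustrate the calculation, not to reproduce the cited study.

```python
import math

# Hypothetical paired GAT readings (mmHg) taken on the same eyes a few
# minutes apart; these numbers are illustrative only.
first_reading  = [16.0, 21.0, 14.0, 18.0, 24.0, 15.0, 19.0, 22.0]
second_reading = [17.0, 19.0, 15.0, 18.0, 22.0, 16.0, 21.0, 23.0]

# Within-subject variance from paired differences: with two readings
# per subject, s_w^2 = mean(d^2) / 2.
diffs = [a - b for a, b in zip(first_reading, second_reading)]
within_subject_var = sum(d * d for d in diffs) / (2 * len(diffs))
within_subject_sd = math.sqrt(within_subject_var)

# 95% repeatability coefficient (Bland-Altman): 1.96 * sqrt(2) * s_w.
# 95 of 100 repeat readings on the same eye are expected to differ by
# less than this amount.
repeatability_coefficient = 1.96 * math.sqrt(2) * within_subject_sd

print(f"Within-subject SD: {within_subject_sd:.2f} mmHg")
print(f"95% repeatability coefficient: +/-{repeatability_coefficient:.2f} mmHg")
```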

Should Goldmann applanation remain the reference standard in 2020, now that so many alternative approaches to IOP measurement exist? While many tonometers can measure IOP reproducibly,11 12 a body of evidence is accumulating that corneal-compensated IOP, as measured with the Ocular Response Analyzer (ORA, Reichert Technologies, Depew, NY, USA), is a better measure of IOP than GAT and should be used more widely.

The ORA is a non-contact device that measures the flattening of the cornea by a fixed force of air, both as the cornea flattens inwards and as it returns to its normal shape. The difference between these two measurements provides an estimate of the shock-absorbing properties of the cornea, termed corneal hysteresis,3 which numerous publications have shown to be an independent predictor of visual field progression in individuals with known glaucoma or ocular hypertension.13–17 The financial outlay of purchasing a new tonometry device may be an initial disadvantage in times of fiscal austerity. However, the ORA is simple to use and does not require topical anaesthesia. IOP assessments can therefore be performed by ancillary staff, making the technique applicable to novel models of delivering glaucoma care.18 There are no disposable parts, so once the device is purchased there are few marginal costs (electricity, occasional maintenance) and little cross-contamination risk. Micro-aerosol formation from non-contact tonometry19 might be considered a potential hazard in the current COVID-19 era.20 However, virus particles have been detected only in ocular secretions from patients with active conjunctivitis,21 so the use of this technique in quiet eyes would seem to confer minimal risk. It is worth noting that the ORA is validated for IOP levels between 7 mmHg and 60 mmHg; its accuracy at the extremes of low and high pressure encountered particularly in surgical practice is not yet known, and GAT will continue to have a role in validating measurements in this specialist setting.
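The arithmetic behind the hysteresis estimate is simple: the device records an applanation pressure as the cornea bends inwards (often labelled P1) and another as it returns outwards (P2), and the difference between the two is the hysteresis. The toy sketch below illustrates only that subtraction and a simple average of the two pressures; the values are hypothetical, and the ORA's reported corneal-compensated IOP (IOPcc) uses the manufacturer's empirically derived correction, which is not reproduced here.

```python
def corneal_hysteresis(p1_mmHg: float, p2_mmHg: float) -> float:
    """Hysteresis as the difference between the inward (P1) and
    outward (P2) applanation pressures recorded during the air pulse."""
    return p1_mmHg - p2_mmHg


def mean_applanation_pressure(p1_mmHg: float, p2_mmHg: float) -> float:
    """Simple average of the two applanation pressures, shown only to
    illustrate that a single pressure figure can be derived from the same
    two readings; the ORA's reported IOP values apply the manufacturer's
    own calibration, not this raw mean."""
    return (p1_mmHg + p2_mmHg) / 2.0


# Hypothetical readings for illustration only.
p1, p2 = 24.0, 14.0
print(f"Corneal hysteresis: {corneal_hysteresis(p1, p2):.1f} mmHg")
print(f"Mean applanation pressure: {mean_applanation_pressure(p1, p2):.1f} mmHg")
```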

There are myriad reasons to abandon GAT and shift to the ORA, but most importantly, we rely on IOP as a guide to caring for patients. Ultimately, the measure of IOP that best predicts who will get worse is the one most able to help us make the right decisions. A recent prospective observational study22 and a large randomised controlled clinical trial23 both showed that over two-thirds of the variance in rates of visual field progression remained unexplained by IOP alone, making IOP a poor predictor when used in isolation. However, both studies showed that the ORA-derived corneal-compensated IOP (IOPcc) was superior to GAT in predicting rates of glaucoma progression. These observations most likely reflect the fact that IOPcc measurements are more closely related to true IOP, but further studies are needed to confirm this.

Why do we persist in using GAT clinically? The test itself is relatively time-consuming; physicians often repeat the measurement because they cannot fully trust a technician's reading; it slows down the clinic by requiring technical staff to have slit lamps and to place drops in patients' eyes; and, worse, it may be giving us a false sense of security. The ORA is a clearly better alternative that provides more information about who is getting worse. Other tonometers comparable to the ORA may warrant further evaluation, but nevertheless, it is time for a change.

REFERENCES

Footnotes

  • Twitter Ana Roldan @anaroldanvasq.

  • Funding GG is employed by UCL and supported by grants from the National Institute for Health Research (HTA 09/104/40), Moorfields Eye Charity, British Council to Prevent Blindness, Fight for Sight and the International Glaucoma Association. HJ is supported by the Moorfields Eye Charity. GG and HJ are grateful for the support of the National Institute for Health Research Biomedical Research Centre for Ophthalmology at Moorfields Eye Hospital and the UCL Institute of Ophthalmology. DF receives funding from the National Eye Institute and the Harvard Catalyst. The views expressed in this paper are those of the authors and not necessarily those of any funding body or the UK Department of Health.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement No data are available.