
Ophthalmic statistics note 6: effect sizes matter
  1. Jonathan A Cook1,
  2. Catey Bunce2,
  3. Caroline J Doré3,
  4. Nick Freemantle4,
  5. on behalf of the Ophthalmic Statistics Group
  1. Correspondence to Dr Jonathan A Cook, Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology, and Musculoskeletal Sciences, Centre for Statistics in Medicine, Oxford OX3 7LD, UK; jonathan.cook{at}ndorms.ox.ac.uk


Why statistical significance alone is not a sufficient basis to interpret the findings of a statistical analysis

Medical statistics plays a key role in clinical research by helping to avoid errors of interpretation due to the play of chance. It is, however, critical to understand the limits of what statistical analysis provides and to interpret the findings accordingly. Statistical analysis can summarise the statistical evidence, but it cannot tell us whether a difference is important per se; clinical judgement is needed for that. Suppose we have a clinical trial which compares two drugs for reducing intraocular pressure (IOP) and there is evidence of a statistically significant difference in favour of drug A. Does this mean A is superior to B? Perhaps, if the average difference is 5 mm Hg, but what if it is only 0.1 mm Hg? We would be very unlikely to conclude clinical superiority under these circumstances, or at least it would be necessary to take account of differences in other key outcomes, such as adverse events, before reaching such a conclusion. It might be thought that the p value from a statistical hypothesis test can serve this purpose, but this is not the case. Solely considering whether a p value is ‘statistically significant’ is not a sufficient basis to interpret the findings of a statistical analysis. There are a number of reasons why this is the case.1,2
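The IOP example above can be illustrated with a short simulation. The sketch below (hypothetical data, not from any real trial: two arms with a true mean difference of only 0.1 mm Hg, an assumed SD of 3 mm Hg and a deliberately very large sample size) shows that a clinically negligible difference can still yield a small p value, using a simple two-sample z-test on simulated normal data:

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical trial: drugs A and B lower IOP almost identically.
# True mean difference is only 0.1 mm Hg (clinically negligible),
# but the sample size per arm is very large.
n, sd, true_diff = 100_000, 3.0, 0.1
drug_a = [random.gauss(15.0, sd) for _ in range(n)]            # mean IOP 15.0 mm Hg
drug_b = [random.gauss(15.0 + true_diff, sd) for _ in range(n)]  # mean IOP 15.1 mm Hg

# Observed mean difference and its standard error
diff = statistics.mean(drug_b) - statistics.mean(drug_a)
se = math.sqrt(statistics.variance(drug_a) / n + statistics.variance(drug_b) / n)

# Two-sided p value from the normal approximation to the two-sample test
z = diff / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"mean difference = {diff:.3f} mm Hg, p = {p:.2g}")
```

With such a large sample the test is almost certain to be ‘statistically significant’, yet a 0.1 mm Hg reduction would not change clinical practice: the p value alone cannot tell us that.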

  • First, when a statistical hypothesis test is performed we are usually implicitly testing for evidence of a difference of any magnitude. To equate this difference to clinical importance is to view all statistically significant differences as clinically important and of equal value (at least in a practical sense). Clearly, as in the IOP example above, this may not be the case.

  • Second, the p value is not a direct measure of the strength of evidence for the null hypothesis. Instead, it is the probability …
