Medical research is conducted to answer uncertainties and to identify effective treatments for patients. Different questions are best addressed by different types of study design, but the randomised, controlled clinical trial is typically viewed as the gold standard, providing a very high level of evidence when examining efficacy.1 While clinical trial methodology has advanced considerably, with clear guidance on how to avoid sources of bias, even the most robustly designed study can succumb to missing data.2, 3 In this statistics note, we discuss strategies for dealing with missing data, but we hope a very clear message emerges: there is no ideal solution to missing data, and prevention is the best strategy.
A senior colleague asks me to critique a publication of a randomised, controlled clinical trial comparing two drugs that aim to reduce intraocular pressure (IOP) in patients with primary open angle glaucoma. One eye per patient has been analysed, and results are provided for IOP at 6 months. The study presents data on 147 subjects treated with drug A and 145 subjects treated with drug B. The mean pressure in patients on drug A is lower than in those on drug B, with an estimated treatment difference of 3.1 mm Hg, 95% CI (2.5, 3.8). A p value of <0.001 is reported. It seems clear that drug A is more efficacious in reducing IOP at 6 months than drug B, but does this mean that I am correct in deducing that A is better than B and therefore that patients should be given drug A?
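The reported confidence interval and p value are internally consistent, and this can be checked from the summary figures alone. The sketch below, which is our illustration rather than anything from the paper, back-calculates the standard error from the half-width of the 95% CI (assuming a symmetric, normal-approximation interval; the slight asymmetry around 3.1 in the published interval is rounding) and recovers the test statistic and two-sided p value:

```python
from statistics import NormalDist

# Summary statistics as reported in the trial publication
diff = 3.1          # estimated treatment difference in IOP, mm Hg
ci_low, ci_high = 2.5, 3.8   # reported 95% CI

# Back-calculate the standard error from the CI half-width,
# assuming a normal approximation: half-width = 1.96 * SE
z_crit = NormalDist().inv_cdf(0.975)          # ~1.96
se = (ci_high - ci_low) / (2 * z_crit)        # ~0.33 mm Hg

# Test statistic and two-sided p value for H0: difference = 0
z = diff / se                                 # ~9.3
p = 2 * (1 - NormalDist().cdf(z))             # far below 0.001

print(f"SE = {se:.3f}, z = {z:.2f}, p = {p:.2e}")
```

A z statistic above 9 corresponds to a p value many orders of magnitude below 0.001, so the reported "<0.001" is entirely plausible given the interval; the question raised in this note is not whether the arithmetic is right, but what the quoted numbers conceal.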
Something about the numbers doesn't seem quite right: 147 versus 145 where I had expected equal numbers in the two groups. I learn (via the internet) that the researchers may have used simple randomisation in which case chance imbalances can and do …