Bias: adding to the uncertainty
  1. JOHN M SPARROW, Division of Ophthalmology, University of Bristol, Bristol Eye Hospital, Bristol BS1 2LX
  2. JOHN R THOMPSON, Department of Epidemiology and Public Health, Robert Kilpatrick Clinical Sciences Building, University of Leicester, Leicester LE2 7LX


    Through their exceptionally thorough follow up, Pennefather et al (this issue, p 643) have presented us with a fine example of the impact that bias, or systematic error, can have on the results of an epidemiological study. They found a higher rate of ocular abnormalities in children who were hard to locate or whose parents were reluctant for them to attend for follow up, suggesting that a less comprehensive survey would have underestimated the extent of disease.

    Epidemiological studies are subject to two types of error, systematic and random, and of the two, systematic error is by far the more problematic. Statistical theory offers an abundance of methods for quantifying and allowing for the impact of random error, using standard errors, confidence intervals, or p values. From this theory we know that as the sample size increases, the random or sampling error decreases, often in inverse proportion to the square root of the sample size. Bias, however, is much more difficult to handle because it is generally unmeasured and, being systematic, it does not decrease as the …
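    The contrast between the two types of error can be illustrated with a short simulation. The sketch below assumes a hypothetical true disease prevalence and a fixed systematic under-ascertainment (both figures invented for illustration, not drawn from the study discussed above): as the sample grows, the random scatter of the estimates shrinks roughly with the square root of the sample size, but the systematic shortfall remains untouched.

    ```python
    import random
    import statistics

    random.seed(1)

    TRUE_PREVALENCE = 0.10   # hypothetical true disease rate
    BIAS = 0.03              # hypothetical systematic under-ascertainment

    def survey(n):
        """Simulate one survey of n children in which affected children
        are under-detected by a fixed amount (the bias)."""
        observed_rate = TRUE_PREVALENCE - BIAS
        return sum(random.random() < observed_rate for _ in range(n)) / n

    # Repeat each survey many times to see the spread of the estimates.
    for n in (100, 10_000):
        estimates = [survey(n) for _ in range(2000)]
        mean_est = statistics.mean(estimates)
        se = statistics.stdev(estimates)
        print(f"n={n:6d}  mean estimate={mean_est:.3f}  "
              f"standard error={se:.4f}  bias={mean_est - TRUE_PREVALENCE:+.3f}")
    ```

    A hundredfold increase in sample size cuts the standard error roughly tenfold, yet the mean estimate stays about three percentage points below the true prevalence: no amount of extra sampling removes a systematic error.
    
    
    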
