The origins of population screening have been attributed to events at the beginning of the 20th century when, in the aftermath of the Boer War, steps were taken to improve the health of British children and, therefore, in due course, of recruits to the British army. The principles underlying screening, and the criteria outlined by Wilson and Jungner to be applied to proposed programmes,1 have stood the test of time and will be well known to most readers. In truth, these criteria are not fully met by most, if not all, medical screening programmes. Even if proponents are convinced that they can reliably identify an asymptomatic early stage and apply a treatment that is more effective when given early, the process of screening often brings to light ambiguous states of health with uncertain natural histories.
Despite these difficulties, screening remains a popular concept. Nobody wants to be ill, or for their children to be ill, and if a simple test and an acceptable preventative treatment can stop this happening, so much the better. No one could argue that detection of early, localised, asymptomatic breast carcinoma, which would otherwise have metastasised by the time symptoms occurred, is not a good, and politically popular, thing.
Things are, however, rarely so clear cut. Population screening programmes demand considerable commitment and resources, not only from healthcare providers but also from the population to be screened. The case where …