Hypothesis testing is a basic tool of epidemiology: outcomes in individuals are apparently random until we apply some form of averaging, and statistics provides us with the sharpest tools for studying populations without being misled by chance. Hypothesis testing is at the very base of understanding, along with modelling and parameter estimation. An article in this issue by Lenters et al1 uses simulation to address some questions which should be well understood in the epidemiology community, but sadly are not.
Any test is characterised by two parameters.
Sensitivity: the probability of a positive conclusion given a specified real effect. In the context of hypothesis testing, this is called power.
Specificity: the probability of a negative conclusion given that there is no real effect. In the context of hypothesis testing, (1−specificity) is called the type 1 error.
The interpretation of a positive or negative test result depends not only on those two numbers but also on the population prevalence: very high specificity is needed when testing for a rare disease. For a disease with a prevalence of 1 per 1000, a test with 95% specificity (and, for simplicity, near-perfect sensitivity) will result in roughly 49 of every 50 positive diagnoses being false (a false discovery probability (FDP) of 98%), a situation few clinicians would find acceptable. Yet, this is the same as conducting …
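The worked example above follows directly from Bayes' rule. A minimal sketch, assuming (as the text implicitly does) a sensitivity of 100%, since the commentary states only the prevalence and specificity:

```python
def false_discovery_probability(prevalence, sensitivity, specificity):
    """Probability that a positive result is a false positive (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return false_positives / (true_positives + false_positives)

# Figures from the text: prevalence 1 per 1000, specificity 95%;
# sensitivity of 100% is an assumption, not stated in the original.
fdp = false_discovery_probability(prevalence=0.001,
                                  sensitivity=1.0,
                                  specificity=0.95)
print(f"FDP = {fdp:.1%}")  # roughly 98%, i.e. about 49 of 50 diagnoses false
```

Lowering the assumed sensitivity only worsens the FDP, so the 98% figure is, if anything, optimistic.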
Contributors GB is the sole contributor to this commentary.
Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent Not required.
Provenance and peer review Commissioned; internally peer reviewed.