
On the importance of quantification
Dana Loomis
Correspondence to Professor Dana Loomis, Department of Epidemiology, University of Nebraska, Nebraska Medical Center, Omaha, NE 68198, USA; dana.loomis{at}unmc.edu


Papers published in Occupational and Environmental Medicine (OEM) often figure prominently in risk assessments carried out by governments and independent agencies. For example, two papers we published provided key evidence in the recent determination by the International Agency for Research on Cancer that radiofrequency electromagnetic fields are possibly carcinogenic.1,2 Risk assessments are increasingly required for setting occupational and environmental health standards, and their quality depends heavily on the availability of quantitative data on exposure and disease occurrence. Quantitative exposure–response data are valuable in the hazard-identification stage of risk assessment, in which they are used to assess the evidence for causality, and they are essential for quantitative assessments that seek to estimate the risk arising from a unit of exposure. For these and other reasons, occupational and environmental epidemiologists have emphasised obtaining high-quality exposure data since the 1980s. Nevertheless, we still receive many papers that do not include quantitative exposure data or do not present them in a form that is useful for risk assessment, even when suitable data appear to be available to the authors.

We encourage authors to report quantitative exposure–response analyses based on internal comparisons whenever possible. Papers reporting only contrasts of incidence or death rates in workers versus the general population, or comparisons of risk according to qualitative indicators such as job title or location are typically given low priority unless they report novel associations of unusual interest. Papers reporting surrogate exposure indicators like duration of employment or ordinal exposure categories are more likely to be reviewed, but are often questioned because these exposure indicators can be hard to interpret. Papers that include quantitative exposure data that are not used to their full potential can also struggle in review.

We often see papers that lead us to think the authors could have done more with the data. For example, some show exposure measurements that demonstrate a gradient of exposure but do not relate the exposures to risk, while others use exposure measurements only to create ordinal categories. Practices like these frequently lead to requests for major revision, if not rejection. Another common practice that reduces the usefulness of exposure data is to report exposure–response analyses based on categorised data with incomplete information about the categories. The cut-off values defining categories should always be given, and reporting category means or midpoints provides additional information that allows an exposure–response curve to be constructed from the categorical results.
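As an illustration, the following minimal sketch (in Python, with entirely hypothetical cut-offs and relative risks) shows how reported category boundaries and midpoints let a reader recover an approximate exposure–response trend from published categorical results. It is a simplified sketch: a real reconstruction would weight each category by the precision of its estimate.

```python
# Minimal sketch: reconstructing an exposure-response trend from reported
# category cut-offs, midpoints and relative risks. All numbers are hypothetical.
import numpy as np

# Hypothetical published categories: cut-offs (e.g. in mg/m^3-years) and the
# relative risk (RR) reported for each category.
cutoffs = [(0.0, 5.0), (5.0, 15.0), (15.0, 40.0)]
rr = np.array([1.0, 1.4, 2.1])                       # category-specific RRs
midpoints = np.array([(lo + hi) / 2 for lo, hi in cutoffs])

# Log-linear trend across category midpoints: the slope is the change in
# log RR per unit of exposure (unweighted least squares for illustration only).
slope, intercept = np.polyfit(midpoints, np.log(rr), 1)
print(f"Approximate RR per unit of exposure: {np.exp(slope):.3f}")
```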

Surprisingly few submitted papers include fully quantitative exposure–response analyses yielding coefficients with units of estimated risk per unit of exposure. It is not clear to us why so many authors shy away from treating exposure as a quantitative variable even when ample exposure data are available. Perhaps it reflects a tradition-bound preference for stratified data, or perhaps a lack of familiarity with how to handle issues such as nonlinearity in the effect of exposure. Regardless, we encourage authors to take advantage of the power of modern computing and the methodological advances described in recent papers, including a number in this journal,3,4 to add quantitative exposure–response analyses to the papers they submit to OEM. We also prefer that results be presented as epidemiological effect measures (eg, relative or absolute rates) with 95% CIs whenever possible. Observing these guidelines will help authors navigate the review process more efficiently and increase the impact of papers published in the journal.
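As a rough illustration of what such an analysis can look like, the sketch below (Python, using made-up grouped cohort data and assuming pandas and statsmodels are available) fits a Poisson regression of case counts on continuous cumulative exposure, with log person-years as the offset, and reports the estimated rate ratio per unit of exposure with a 95% CI. It is a generic sketch under these assumptions, not a prescription of any particular modelling strategy.

```python
# Minimal sketch: a quantitative exposure-response analysis yielding a
# coefficient with units of risk per unit of exposure. Data are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical grouped cohort data from an internal comparison: case counts,
# person-years at risk and mean cumulative exposure within each stratum.
df = pd.DataFrame({
    "cases":    [4, 9, 15, 22],
    "pyears":   [12000, 11000, 9500, 8000],
    "exposure": [0.5, 3.0, 8.0, 20.0],
})

# Poisson regression of cases on continuous exposure, offset by log person-years.
model = smf.glm(
    "cases ~ exposure",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["pyears"]),
).fit()

beta = model.params["exposure"]
lo, hi = model.conf_int().loc["exposure"]
print(f"Rate ratio per unit of exposure: {np.exp(beta):.3f} "
      f"(95% CI {np.exp(lo):.3f} to {np.exp(hi):.3f})")
```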

References