
Should we share ideas or measurement data?
  1. Hans Kromhout
  1. Correspondence to Dr H Kromhout, Division of Environmental Epidemiology, Institute for Risk Assessment Sciences, Utrecht University, Utrecht, PO Box 80178, Utrecht 3508 TD, The Netherlands; h.kromhout{at}uu.nl


The ultimate challenge of occupational exposure assessment will persist as long as we are interested in the health outcomes arising from exposure to a wide array of chemical, biological, physical and even psychosocial agents in the workplace. Regardless of whether or not we define the complex interaction between an individual human being and their environment as ‘the exposome’,1 this does not lift the heavy burden of exposure assessment from those engaged in occupational and environmental epidemiology.

In their paper on turning industry-specific job-exposure matrices into job-specific modules to improve exposure assessment in community-based studies, Behrens et al2 (see page 444) seem to suggest that by sharing exposure ideas and tools our problems will be solved. Of course, sharing tools and relevant questions derived from industry-based studies is a good idea, as is the creation of a web-based repository for these questions and tools.3 But what is gained when we ask questions in order to assess exposure? Is it possible to get the right answer from a question that is unlikely to be answered correctly? Some proponents of ‘the exposome’ are rather critical on this point and do not think so: ‘Also, information about environmental exposures in epidemiologic studies is generally derived from questionnaires that are ill suited for assessing thousands of potentially causative exposures.’4

The inherent problem when we ask these types of questions is the inability of the responder to accurately remember what happened in the past. For the most part, our brains perform rather poorly when asked to retrieve information from memory. It is well known that recall accuracy and confidence are good when we are able to directly retrieve the information from our brains, but poor when we have to reconstruct on the basis of cues.5 Unfortunately, within the field of retrospective exposure assessment based on worker interviews, not much research has been dedicated to assessing the reliability and accuracy of the answers we obtain from our study populations or their proxies. In a unique study among hairdressers by Kersemaekers et al,6 it was shown that after only 1–2 years, hairdressers retrospectively overestimated the number of customers and hairdressing tasks performed. The number of reported customers was almost twice as high as registered, and tasks resulting in solvent exposure (eg, hair dyeing, wave setting and use of hair sprays) were on average reported to be performed 2–3 times more often than actually occurred at the time of measurement. The presence of ventilation devices, the number of colleagues working alongside the respondents and the type of salon (male/female/both) were, however, accurately reported. A simple classification scheme for exposure to solvents based on the accurately recalled presence of ventilation devices appeared to be predictive of measured exposure to ethanol 1–2 years earlier.7

In principle, it is a good idea to use questionnaires and job-specific modules filled with questions on proven determinants of exposure rather than items identified by individual experts as being important to occupational exposure. Unfortunately, the answers to some of these questions will be biased, since by definition they cannot be accurately derived from memory. These questions, even though they concern determinants of exposure, may be better left unasked.

In many industry-wide exposure measurement studies, factors affecting exposure have been identified for an array of agents.8–11 However, these determinants are often only linked to exposure in an empirical way rather than in a more deterministic physical manner. For instance, in one of the substudies of the hairdressers' study, a clear inverse relationship was observed between ethanol concentration and indoor salon temperature. In a second study in another town and season, indoor temperature no longer proved predictive of solvent exposure. In fact, the empirical model derived in the first study performed rather poorly in the second study. Extrapolations of findings from empirical statistical exposure models are by definition questionable. This was further demonstrated in two exposure surveys in the rubber manufacturing industry in The Netherlands performed 9 years apart. The empirical model derived from the exposure data in the later study performed no better at predicting exposures from 9 years earlier than plant and department alone. In addition, exposure levels were considerably underestimated based on the determinants, most likely because exposure source strengths diminished over time.12

Instead of assigning exposure levels based on the study population's responses to our questions, we could directly assign quantitative levels of exposure (which is the ultimate goal of exposure assessment) by linking the results of exposure measurements to the jobs performed in certain industries and time periods. Then all we would need from each individual in our study population is a job history, something that anybody should be able to report with sufficient accuracy. We know this is possible within industries, but what about the community at large?
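The linkage described above can be illustrated with a minimal sketch of a quantitative job-exposure matrix lookup. All job codes, periods and exposure values below are hypothetical placeholders for illustration only, not data from any actual study:

```python
# Minimal sketch of a quantitative job-exposure matrix (JEM).
# The matrix maps (country, job, time period) to an estimated
# exposure level; a job history is then all that is needed from
# each study participant to derive a cumulative exposure estimate.
# All keys and values here are invented for illustration.

# JEM: (country, job, period) -> estimated exposure level (eg, mg/m^3)
jem = {
    ("NL", "miner", "1970-1979"): 0.10,
    ("NL", "miner", "1980-1989"): 0.05,
    ("NL", "baker", "1970-1979"): 0.00,
}

def cumulative_exposure(job_history, jem):
    """Sum level x duration over a job history given as a list of
    (country, job, period, years) tuples; unknown combinations
    contribute zero exposure."""
    total = 0.0
    for country, job, period, years in job_history:
        level = jem.get((country, job, period), 0.0)
        total += level * years
    return total

history = [
    ("NL", "miner", "1970-1979", 5),
    ("NL", "miner", "1980-1989", 10),
]
print(cumulative_exposure(history, jem))  # 5*0.10 + 10*0.05 = 1.0
```

The key design point is that the subjective burden on the participant is reduced to reporting a job history; the quantitative estimates come from the measurement database behind the matrix rather than from recalled exposure circumstances.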

In the multinational pooled analysis of community-based lung cancer case-control studies (SYNERGY), this is exactly what has been attempted and achieved. Peters et al13 collated an immense database with the results of almost 360 000 measurements of five major lung carcinogens across multiple countries and industries. Modelling of part of this database has thus far resulted in a quantitative country-, job- and time period-specific exposure matrix for exposure to crystalline silica.14 Application of this exposure matrix to the almost 38 000 cases and controls resulted in a quantitative, monotonically increasing exposure-response curve for exposure to respirable crystalline silica and lung cancer risk, which appeared to be comparable with what had been reported within industry-based cohort studies.15 Of course, measurement results will not necessarily be representative and will be biased in multiple ways as well, but with proper evaluations and systematic expert input that can be challenged, the inherent limitations in exposure-response modelling for community-based studies should be surmountable. The good news is that this can be accomplished without asking questions that individuals cannot reliably answer.

In conclusion, along with sharing ideas and tools, we should promote and pursue the sharing of occupational exposure measurement data in order to overcome our present-day limitations when assessing the health effects of occupational exposure in community-based epidemiological studies.

Acknowledgments

The author is grateful for editorial input from Amy Hall.

Footnotes

  • Linked article 100305.

  • Competing interests None.

  • Provenance and peer review Commissioned; internally peer reviewed.