The ghost of methods past: exposure assessment versus job–exposure matrix studies
Igor Burstyn

Correspondence to: Igor Burstyn, Department of Environmental and Occupational Health, School of Public Health, Drexel University, 1505 Race Street, Philadelphia, PA 19102, USA; igor.burstyn{at}drexel.edu

First introduced 25 years ago, expert assessment of occupational exposure was an important innovation for community-based case–referent studies in occupational epidemiology.1 Here is a simplified scheme of how expert assessment is now practiced: detailed interviews of subjects are reviewed by ‘exposure assessment experts’, who then pass on to epidemiologists their best guesses of exposure status, which are then frequently used as if they were error-free in epidemiological analyses. This method is applied when no relevant measurements of exposure are deemed suitable for mathematical/statistical exposure modelling. The article by Bhatti et al 2 presents this paradigm as state-of-the-art and attempts to support this a priori supposition in the context of a gene–environment interaction study. It is therefore legitimate to enquire how comfortable occupational health researchers should feel about such a conjecture and to gauge whether any alterations to the original concept are desirable.

Contrary to the claim by Bhatti et al,2 Teschke et al 3 never concluded that expert assessment was “the best method”, but were careful to point out—too tactfully—that none of the exposure assessment methods commonly practiced in community-based case–referent studies of occupational aetiology were particularly good. The sentiment that expert assessment was not a panacea was forcefully reiterated in a commentary by Professor Kromhout3 on the outstanding review by Teschke et al.3 Clearly, leading exposure assessors have grave doubts about the value of expert assessment methodology as it is currently practiced.

Next we must examine what new evidence Bhatti et al 2 bring to the debate on the merits of expert assessment of occupational exposure. Their setting is a community-based case–referent study of the risk of brain tumour due to occupational lead exposure. From detailed interviews of subjects, the authors derived exposure estimates by means of expert assessment and a job–exposure matrix (JEM). Measures such as agreement beyond chance between the two methods, presented in the paper,2 are not relevant to the main issue at hand, because the relationship of assessment by either method to true exposure is unknown. (Note that because the same key elements of an interview were used to derive exposure estimates from both expert assessment and the JEM, the two estimates would tend to agree with each other due to shared error even if both were independent of true exposure. This is an example of how measures of agreement between instruments at best produce overly optimistic estimates of reliability/misclassification error.) Next, Bhatti et al 2 (and others before them) argue that in the presence of overwhelming evidence that an exposure causes a disease, the ability to detect a known exposure–disease association with one method of exposure assessment, but not with its competitor, proves the first method's supremacy. Unfortunately for the comparison of exposure assessment methods, the authors are not able to claim that lead definitely causes brain tumours. The data themselves do not reveal any associations: among 48 ORs presented, only one (2%) has p<0.05, on the strength of five cases and three referents. The suggestion of an interaction of lead exposure with the genotype is apparently based exclusively on the result for meningioma from only one stratum of cumulative exposure as assessed by experts, but the 95% CIs of the two OR estimates for different genotypes overlap: 0.3 to 4.8 versus 2.4 to 72.9.
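The shared-error point lends itself to a toy simulation (a sketch with entirely hypothetical numbers, not the data of Bhatti et al): two ‘raters’ who independently threshold the same noisy interview-derived signal agree well beyond chance, here measured by Cohen's kappa, even though neither carries any information about true exposure.

```python
import random

random.seed(1)
n = 2000

# True exposure: independent of everything the 'raters' get to see.
truth = [random.random() < 0.3 for _ in range(n)]

# Both raters work from the same interview-derived signal (e.g. a job
# description), which in this sketch carries no information about truth.
signal = [random.random() for _ in range(n)]
expert = [s + random.gauss(0, 0.2) > 0.5 for s in signal]
jem    = [s + random.gauss(0, 0.2) > 0.5 for s in signal]

def kappa(a, b):
    """Cohen's kappa for two binary ratings."""
    m = len(a)
    po = sum(x == y for x, y in zip(a, b)) / m
    pa, pb = sum(a) / m, sum(b) / m
    pe = pa * pb + (1 - pa) * (1 - pb)
    return (po - pe) / (1 - pe)

print(kappa(expert, jem))    # well above 0: 'reliable' through shared error
print(kappa(expert, truth))  # near 0: uninformative about true exposure
```

Agreement statistics of this kind can therefore certify reliability without certifying validity.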
In light of this, and being mindful of the warning that imprecise estimates signal results that are not likely to be credible,4 it is reasonable to disagree with the conclusion that evidence of effect modification was found in these data. The 95% CIs for the ORs for the highest cumulative exposure stratum estimated with expert assessment and the JEM for ALAD2 carriers largely overlap: 2.4 to 72.9 versus 0.1 to 12, which makes the difference in their point estimates immaterial. The risk of glioma was not associated with any of the exposure metrics. Therefore, if we accept the authors' argument that expert assessment is superior to a JEM in this study,2 then we are also asked to believe that the risk of glioma is not (or is much more weakly) associated with lead exposure. Thus, I favour a more modest interpretation of these results, namely that neither exposure assessment method yields a result that adds support to either (a) an overall association of adult brain cancers with lead exposure, or (b) modification of the association by ALAD genotypes. This may well be due to errors in exposure assessment, but any such argument remains to be substantiated. Consequently, there appears to be no reason to alter the earlier damning verdict on expert assessment3 in light of the report by Bhatti et al 2 (or any other paper). Of course, we all agree that in general “high quality exposure data are likely to improve the ability to detect genetic effect modification”.2 What is desperately required is the realisation that expert assessment does not produce “high quality exposure data”.2
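To see why the overlap makes the difference immaterial, one can back out approximate standard errors for the two log ORs from the reported 95% CIs (assuming the intervals are symmetric on the log scale) and compare the estimates. Treating the two estimates as independent is itself an approximation, since both come from the same subjects, but the rough Wald-type sketch suffices to show that the difference is unremarkable.

```python
import math

def log_or_from_ci(lo, hi):
    """Recover the log OR and its SE from a 95% CI, assuming the interval
    is symmetric about the point estimate on the log scale."""
    b = (math.log(lo) + math.log(hi)) / 2
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    return b, se

# Highest cumulative-exposure stratum, ALAD2 carriers (CIs as reported):
b1, se1 = log_or_from_ci(2.4, 72.9)  # expert assessment
b2, se2 = log_or_from_ci(0.1, 12.0)  # job-exposure matrix

# Wald test of the difference, treating the two estimates as independent
z = (b1 - b2) / math.sqrt(se1**2 + se2**2)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p:.2f}")  # z ≈ 1.66, p ≈ 0.10
```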

The ghost of a method whose time has passed still haunts occupational epidemiology. It has already consumed a lot of time and resources and given back little more than the ability to produce negligible-to-weak, instead of strongly informative, evidence. And what recourse do we have to improve the situation? Occupational hygienists (exposure assessors) seem to have done their utmost with the nearly impossible task that was delegated to them by epidemiologists: to be omniscient about working conditions in the past even when evidence is scant. Certainly more can be done to document how experts assess exposures and what aspects of occupational histories are actually influential in determining subjects' exposure status, as well as to calibrate expert assessors' ratings.3 Incorporating biological monitoring into exposure assessment, as suggested by Bhatti et al,2 is also desirable. However, the most profound change has to come from epidemiologists, who must abandon behaviour that is consistent with a naive worldview in which measurement error is best treated like an elephant in the room. There is no evidence that measurement error correction induces bias, which is the main reason voiced for avoiding such methods.5 It is also very unlikely that misclassification error that arises by the complex mechanism of implicit and explicit dichotomisation is non-differential,6 as is commonly claimed.7 Therefore, let us stop ignoring complexities in data analysis that arise due to measurement errors8—they will always be a formidable presence in observational studies—and embrace developments in statistics that allow more appropriate appraisal of uncertainty and bias.
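The dichotomisation point can also be sketched with hypothetical numbers: even when a continuous exposure is measured with purely non-differential (outcome-independent) error, splitting both truth and measurement at a cut-off yields a binary misclassification rate that differs between cases and referents, that is, differential misclassification.

```python
import math
import random

random.seed(2)

rows = []
for _ in range(100_000):
    x = random.gauss(0, 1)           # true continuous exposure
    w = x + random.gauss(0, 1)       # measured with classical error,
                                     # generated independently of outcome
    risk = 1 / (1 + math.exp(-(x - 1)))  # disease risk rises with x
    y = random.random() < risk
    rows.append((x > 0, w > 0, y))   # dichotomise truth and measurement

def misclass_rate(case):
    """Share of truly exposed subjects misclassified as unexposed,
    among cases (case=True) or referents (case=False)."""
    sel = [(xb, wb) for xb, wb, y in rows if xb and y == case]
    return sum(xb != wb for xb, wb in sel) / len(sel)

# Cases are misclassified less often than referents: the binary error
# depends on outcome even though the continuous error did not.
print(misclass_rate(True), misclass_rate(False))
```

The mechanism is simple: within the truly exposed, cases tend to have higher true exposure, and so are less likely to be pushed below the cut-off by measurement error.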

It may well be that, when there are no measurements of exposure, expert-formulated models are a practical and sensible way to systematically and transparently document exposure assessment.9 ,10 (If exposure measurements exist, the prospects of obtaining a defensible exposure estimate are much brighter.11) However, this must be coupled with exposure validation or reliability studies, followed by quantitative acknowledgement of the imperfections of exposure estimates in epidemiological analysis. From this perspective, the work of Liu et al 12 in the context of a community-based individually matched case–referent study of occupational exposures is promising. Other work on exposure misclassification in matched case–control studies may also prove relevant.13–15 The time is right to banish the ghost of methods past and, while thanking it for lessons learnt, embrace the promise of methods yet to come.

Acknowledgments

The author thanks Dr Jennifer A Taylor for the thoughtful comments and edits, but accepts sole responsibility for the content of the article.

Footnotes

  • Linked articles 048132.

  • Competing interests None.

  • Provenance and peer review Commissioned; externally peer reviewed.
