
Advancing global public health with occupational epidemiology
David Kriebel
Public Health, University of Massachusetts, Lowell, Massachusetts, USA
Correspondence to Dr David Kriebel, Public Health, University of Massachusetts, Lowell, MA 01854, USA; david_kriebel{at}


The 2023 International Epidemiology in Occupational Health (EPICOH) meeting in Mumbai, India, focused on the theme: ‘Occupational Health Epidemiology: Need for Concerted Action’. I had the honour of being invited to deliver the Lifetime Achievement Oration. I prepared for this daunting task by considering why occupational epidemiology is important for global public health and what the field’s distinctive contributions are. Here, I summarise my responses to these questions, drawing particularly on the COVID-19 pandemic. My remarks were addressed to occupational epidemiologists, but I believe they are relevant to those in overlapping fields, including environmental epidemiology, occupational hygiene and occupational medicine.

All of public health, including epidemiology, is at an inflection point as the world emerges from the COVID-19 pandemic. We, in occupational epidemiology, have an opportunity to use the increased visibility and credibility of our pandemic work to advocate for our field and for our students as they look towards their future career paths.

I believe there are two reasons why occupational epidemiology must continue as a fundamental and identifiable component of global public health. First and foremost, occupational epidemiology contributes to public health by identifying hazards to working people. This primary function of our field continues to be needed as new hazards arise and old ones continue to impose heavy burdens on human health. Examples include heat illness, the effects of precarious work, carcinogenic exposures of many sorts (for example, among firefighters), the endocrine-disrupting effects of a wide variety of modern synthetic chemicals and the cardiovascular effects of psychosocial stress. Many of these hazards exist outside of workplaces, too, but workers often experience higher concentrations and longer durations of exposure than the general population.

The pandemic has highlighted a second critical contribution of occupational epidemiology, one which extends beyond the work environment. This contribution consists of the flexible, interdisciplinary perspective that we bring to the search for causality in the link between exposures and diseases. The occupational epidemiology perspective on prevention comes from interdisciplinary training and practical experience in workplaces. That experience teaches us that the logistical, financial and political challenges of designing and conducting epidemiological studies in the work environment are often very different from those faced by our colleagues in other branches of epidemiology. The work environment is often a contested terrain in which conflicts of interest can be severe and commonplace. The hazardous exposures we study are generated by economic productivity, and our study subjects are often unwitting victims and bystanders with little to no control over those exposures. We learn to navigate this difficult terrain and to do the highest quality science possible to inform prevention, through close collaborations with other disciplines and a wide range of different types of evidence.

We saw during the pandemic that the training and practical work environment experience of occupational epidemiologists, as well as of our close colleagues in occupational hygiene, occupational medicine, engineering and environmental epidemiology, was invaluable for formulating rapid responses to a highly uncertain and deadly new hazard. The pandemic created many opportunities to see how science and scientific uncertainty almost immediately shaped public debates and government policies.

We all remember vividly those first weeks, in January 2020, when it was not clear whether there was human-to-human transmission of the new virus emerging in Wuhan. Once this was established, we needed to know, but did not know, how the virus was spreading. Was it spreading only through surface contact and droplets, or was there airborne transmission? Were asymptomatic infected people shedding virus? It took time and many pieces of evidence, all with considerable uncertainty, to answer these critical questions. Social distancing was also debated, as was the role of ventilation in reducing risk indoors, and then there were many questions about the necessity and effectiveness of masks and respirators. For each of these questions, the initial answers were very uncertain, and then gradually over time the science provided clarifying evidence. In each case, occupational epidemiology was a critical contributor to the body of evidence.

There were, and still are, sharp disagreements on many of these questions. Scepticism, of course, is a healthy and normal part of science, and it is usually resolved by additional evidence. In a pandemic, however, information was needed quickly, leaving little time for the normal back and forth of scientific debate to settle the questions. We saw something else, though, that exacerbated the debates, beyond normal scientific scepticism and beyond the unfortunate but perhaps inevitable politicisation of the scientific arguments. There were often systematic differences between scientific disciplines in their interpretations of the evidence and, in particular, in the ways that evidence was used in causal inference. This is a key point that I want to discuss further; occupational epidemiology brings an important perspective on causal inference to larger public health questions like those that arose during the pandemic.

To illustrate this point, here is one case study with which we are all very familiar. How effective are masks and respirators in preventing infection by the SARS-CoV-2 virus? This debate is still ongoing, and in February 2023, weeks before the Mumbai EPICOH meeting, the controversy flared again when a Cochrane Library review was published.1 The review generated a great deal of publicity and press discussion in the USA. It revived sharp and angry political arguments on both sides of the sadly polarised world in which we find ourselves.

An opinion column in the New York Times on 21 February, responding to this Cochrane review, announced: ‘The mask mandates did nothing. Will any lessons be learnt?’2 The article began: ‘The most rigorous and comprehensive analysis of scientific studies conducted on the efficacy of masks for reducing the spread of respiratory illnesses—including COVID-19—was published last month.’ The Cochrane review considered only the randomised controlled trials on medical/surgical masks, high-efficiency respirators (called N95 in the USA, FFP2 in Europe) or hand hygiene, and their effectiveness in reducing the spread of respiratory viruses, including influenza and others as well as SARS-CoV-2.

The Cochrane review concluded: ‘We are uncertain whether wearing masks or N95 respirators helps to slow the spread of respiratory viruses based on the studies we assessed’, and, at another point in the summary: ‘Compared with wearing no mask in the community studies only, wearing a mask may make little to no difference in how many people caught a flu-like or COVID-like illness…’ These statements are carefully worded, so perhaps the bold claim in the New York Times that mask mandates ‘did nothing’ should be blamed on journalists doing their job, being provocative.

But in this case, the lead author of the Cochrane review was clearly responsible for throwing out the careful wording in favour of inflammatory language.1 The lead author, Tom Jefferson, when asked to comment on the meaning of the review, said: ‘There is just no evidence that masks make any difference.’3 And then, when asked about N95 masks specifically, as opposed to lower quality surgical masks, he said: ‘…makes no difference. None of it’.3 Finally, when Jefferson was asked about the studies that had initially persuaded policymakers to impose mask mandates, he said: ‘They were nonrandomized studies, flawed observational studies’.3

There is, in my view and in the view of many other occupational health scientists, a large body of evidence that masks, and more precisely respirators, are in fact effective at reducing infection by SARS-CoV-2.4–6 There is observational epidemiology, including a number of quite good case–control studies and, by now, a wide variety of prospective cohort studies in different settings using different methodologies. There is a large body of laboratory studies of the performance of masks in filtering virus-laden aerosols. There is also a great deal of very sophisticated experimental aerosol physics and fluid dynamics done to understand how masks interfere with the spread of fine particles. There are even two randomised controlled trials, one of them very large, which did show a beneficial effect of mask wearing.7 I will not review this literature here, but for now assert that there is a body of evidence that masks and respirators are effective in reducing the risk of COVID-19.4–7

The key point for this editorial is that this body of evidence is quite diverse, consisting of studies that use widely divergent methods drawn from very different scientific disciplines. Contrast this with the approach to causal inference used in the recent Cochrane review, in which only the randomised controlled trials were considered. As the lead author stated: ‘everything else is flawed observational studies’.3 Thus, if you follow the Cochrane approach, using only the randomised controlled trials and a rigid systematic review and bias-evaluation process, you end up with the conclusion that masks are not effective.

In occupational/environmental epidemiology, we consider a much wider range of types of evidence, including, for example, laboratory studies,5 and then synthesise these in a more comprehensive way using approaches such as triangulation, a method for comparing the strengths and weaknesses of different kinds of evidence.8 9

The reason to present this case study is what it tells us about occupational epidemiologists and what we bring, not just to this debate, but to the larger field of global public health. So, let us look at these two approaches and, in particular, at who chooses which approach. Why do some scientists pick one or the other? One of the key differences lies in our scientific disciplines: the training and experience we gain through practice, and the tools that different disciplines bring to their work.

Where does the Cochrane view come from? Many of the researchers who hold this view and use this approach to evidence synthesis were originally trained in Evidence Based Medicine. They are often clinicians or very close to clinical practice. They may now be using a formal systematic review framework like the one used by the WHO, known as Grading of Recommendations Assessment, Development and Evaluation (GRADE).10 GRADE has a built-in preference for randomised controlled trials and uses standardised methods for quantifying the risk of bias in each of the studies being synthesised, as well as methods for assigning a quality-of-evidence score to each piece of evidence.

Evidence frameworks like GRADE make sense in the clinical context, where the Hippocratic injunction to ‘first do no harm’ is a guiding principle. What that means here is that we should require a high degree of proof that an intervention or treatment is effective before we approve it. That is, we should be very confident that it will make the patient better and, failing this high burden of proof, we should wait.

Now, on the other side, where does the occupational epidemiologists’ perspective come from? In order to protect workers’ health effectively, we receive training in a wide range of applied science disciplines. In addition to epidemiology, we study occupational hygiene, which involves the control of workplace hazards, so that we understand the solutions to the problems we identify. We study quantitative exposure assessment and toxicology so that we can understand the biological basis of the diseases we are studying. In our practice of occupational health research, we learn about the logistical, financial and political challenges that must be faced when designing and conducting epidemiological studies in the work environment. And because we learn to confront these challenges, we are comfortable using a wide range of types of evidence in making causal inferences.

It is helpful to notice that we do our science at the ‘point of production’; the place where economic productivity occurs.11 Production generates profit, but it also generates many of the hazardous exposures that make workers sick and which we study. Workers are often the most highly exposed because of their proximity to the production process. Production does not mean only manufacturing—services and healthcare, for example, are also productive activities with many potentially adverse impacts on workers’ health. There are other important consequences for occupational epidemiologists, working at the point of production. In our modern market economies, the point of production is very often private property; there is an owner who controls access, who may also limit the rights of workers to protect themselves and may also limit our ability as researchers to gather data. Because it is a private place, the point of production is often also a point of political conflict. For all of these reasons, it is hard to do good science there. As occupational epidemiologists, we have committed ourselves to that work, we have trained ourselves to do it, and as a consequence, we have a more expansive view of evidence than our clinical colleagues.

Occupational epidemiologists cannot place much emphasis on randomised controlled trials because it is usually not ethical to randomly assign people to exposures at work. We expect the evidence we must use to be limited, to have biases and other weaknesses. Despite this, we learn how to extract useful information from those pieces of evidence. We do conduct systematic reviews, but with much more heterogeneous evidence and with findings that are rarely reducible to a single meta-analytic point estimate and CI. Triangulation is an important approach to synthesis when evidence is diverse, and is a well-established research method in the social sciences for comparing the strengths and weaknesses of different kinds of evidence.8 9 The idea of triangulation is that we can obtain more reliable and comprehensive evidence syntheses by integrating results from different types of studies, each with different sources of bias. If the results of different approaches all point in the same direction, this strengthens our confidence in the conclusion. On the other hand, there can also be valuable information in the inconsistencies among different types of evidence.

A useful model of evidence synthesis using diverse types of evidence is the framework used in the International Agency for Research on Cancer (IARC) monographs.12 IARC has an explicit algorithm for combining evidence from three different streams: cancer in humans (essentially epidemiology); cancer in animals, from experimental laboratory studies; and mechanistic evidence, from a wide range of different types of experiments, usually performed in vitro.

Returning now to the contrasting approaches to causal inference, my view as an occupational epidemiologist is that it is simply unethical to consider only the randomised controlled trial evidence and to reject all the other studies as flawed. If we accepted this view, we would be unable to protect workers from literally hundreds, if not thousands, of occupational hazards. The commitment we have made to protect health obliges us to consider all the potentially relevant evidence; excluding all but the highest quality evidence is a ‘luxury we cannot afford’. As an occupational epidemiologist, I believe our job is to try to answer the question ‘do we know enough to act as if X causes Y?’ and not simply ‘does X cause Y?’13

The ‘act as if it’s a cause’ framing makes it clear that we cannot evaluate the weight of evidence without knowing why we want to know and what the options are for action, including the costs and barriers of different strategies. Less evidence will be needed to justify easy, low-cost actions, while costly or difficult interventions will require stronger evidence. Thus, in order to evaluate the weight of evidence, we need to know about prevention strategies: in the work environment, these are things like ventilation systems, mechanical lifting devices for nurses, replacing a toxic chemical with a safer one, or insisting on the use of respirators.

I think this view of causal inference is consistent with the Precautionary Principle.14 The Precautionary Principle enjoyed some popularity about 20 years ago and was incorporated into various environmental health practices and laws. It was brought back into the light by the COVID-19 pandemic. Briefly, precaution has three key components that are relevant to this discussion of causal inference: first, taking action in the face of uncertainty; second, considering the full range of types and qualities of data in evaluating a risk; and third, shifting the burden of proof.

When do we know enough to act? That is the key question, and the intense debates over masks for COVID-19 demonstrate clearly that the uncertainty we need to consider is more than just statistical significance, the narrow statistical idea of error that biostatisticians train us to understand. The scientific uncertainty that has blocked consensus on masks is much broader and more difficult. The literature on the Precautionary Principle identifies two additional dimensions of the uncertainty we face in deciding when to act: ambiguity and ignorance. Ambiguity gets in the way when, for example, there are disagreements over how to define a research question, even before we acknowledge uncertainties in the answers. Here again, I think it is important to notice the different perspectives and prejudices of different scientific disciplines. If you bring researchers from multiple scientific disciplines into a room to discuss a scientific question, they may not even agree on how to frame the question, let alone on the weight of evidence available for answers. And then there is ignorance: the things that we do not know that we do not know.

Precaution also means making sure that you are considering the full range of types and qualities of data. Our scientific training and experience inevitably affect how comfortable we are with different kinds of data, and precaution means accepting types of evidence generated in very different scientific fields.

The third component of precaution is the burden of proof. In Evidence Based Medicine, we tend to require a greater weight of evidence to recommend an intervention than to do nothing. That makes sense in the clinical context, but in public health, the question of ‘how much evidence is enough?’ will depend on social and logistical factors as well as scientific ones, as the following example from the pandemic illustrates.

In the first weeks of 2020, WHO waited before declaring that a pandemic was underway. An independent review panel, appointed by the Director General of the WHO to review the pandemic response, criticised the organisation for that delay.15 The panel said that in the first weeks of the pandemic, WHO should have ‘…assumed that in any outbreak caused by a new pathogen, sustained human-to-human transmission will occur unless the evidence specifically indicates otherwise’. Here one can see the shifting of the burden of proof. The review panel said: assume a novel virus is transmissible until you know otherwise, rather than what WHO did, which was to wait until it had ‘sufficient’ evidence to show the virus was transmissible.

In conclusion, I have argued that occupational epidemiologists are critical to global public health and that we need more of them. This is not only about protecting workers’ health but also about the contributions we make more broadly to the application of uncertain science for the protection of the public’s health. Occupational/environmental epidemiology needs to be taught in integration with toxicology, exposure assessment, biostatistics, qualitative methods, occupational hygiene, the sociology of the work environment, the political aspects of the work environment and, most importantly, practical experience in real workplaces, talking to real workers.

Occupational epidemiologists need knowledge of prevention and control, in addition to problem identification, because the answer to the question ‘how much evidence is enough?’ depends on what the potential solutions are.13 We also need sensitivity to, and respect for, the complex social, political and economic context in which occupational health problems must be studied and solved.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.


Acknowledgements

I thank Dr Margaret Quinn, Professor Emerita of occupational hygiene, the other faculty in the Department of Work Environment, University of Massachusetts Lowell, the many hundreds of students with whom I had the privilege to work over 30 years, and two anonymous reviewers.



  • Twitter @LowellCenterMA

  • Contributors DK is the sole author.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.