Investigating the effectiveness of occupational health interventions in the workplace
David Rees1,2, James Ian Phillips3,4

1 Division of Occupational Medicine, National Institute for Occupational Health, National Health Laboratory Service, Johannesburg, Gauteng, South Africa
2 School of Public Health, Medical School, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
3 Department of Pathology, NIOH, Johannesburg, South Africa
4 Faculty of Health Sciences, Department of Biomedical Technology, University of Johannesburg, Johannesburg, South Africa

Correspondence to Professor David Rees, National Institute for Occupational Health, PO Box 4788 Johannesburg, South Africa, 2000; david.rees{at}


Studies which investigate the effectiveness of occupational health interventions to reduce exposure or adverse health outcomes or both are important. They may show us what works and, just as importantly, what does not work, or at least give us pause for thought. Despite their value, these studies, particularly those using a randomised or cross-over design, are relatively rare in occupational medicine. Their scarcity is explained in part by quite substantial impediments to undertaking the research,1–3 ethical considerations not least among them. Withholding an intervention from a control group, especially one selected because it is thought to work and to prevent a serious disease, clearly has ethical implications. Such studies can also be expensive, for example when equipment has to be bought and introduced into participating workplaces. Additionally, the impact may take years to become evident, requiring long-term resource allocation and the prospect of years of work before publishable answers on the effectiveness of the intervention are available.

This commentary uses three South African studies to demonstrate the value of well-designed intervention effectiveness research. These examples provide support for the greater use of these studies, despite their difficulties, even in a middle-income country with resource constraints.

The first is a paper by Baatjies et al4 published in this journal which shows that practicable interventions substantially reduced exposures in supermarket bakeries. Among its strengths are the group-randomised study design; the well-formulated interventions developed in consultation with professionals, bakery managers and employees; observation of compliance with interventions; the measurement of exposure to flour dust and allergens preintervention and postintervention; and the identification of the contribution to exposure reduction by individual interventions.

Of interest is the finding that the control group of bakeries (which continued as usual with its bakery activities) also showed a reduction in exposure, notably to flour dust: the geometric mean concentration fell from 0.74 to 0.44 mg/m3. The authors attribute the improvement to the transfer of managers from the intervention group of bakeries to control bakeries, who introduced some of the strategies they had found to work. Transfer of knowledge between intervention and control groups may be particularly difficult to prevent in studies within a well-connected group of workplaces, and has been suggested as one explanation for the lack of significant difference in wood dust reduction between intervention and control small woodworking businesses in the Minnesota Wood Dust Study.2 Although problematic methodologically, the ready uptake of strategies that are seen to work is encouraging from a hazard control perspective.
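As an aside, the geometric mean quoted above is the standard summary measure for occupational exposure data, which are typically log-normally distributed. A minimal sketch of the calculation, using invented flour-dust measurements rather than the study's data (which are not reproduced here):

```python
import math

def geometric_mean(values):
    """Geometric mean: the exponential of the mean of the log-transformed
    values. Preferred over the arithmetic mean for log-normally
    distributed exposure measurements, as it is less dominated by
    occasional high readings."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Invented personal flour-dust samples (mg/m3), for illustration only
pre_intervention = [0.5, 1.1, 0.9, 0.6, 0.8]
post_intervention = [0.3, 0.5, 0.6, 0.4, 0.5]

gm_pre = geometric_mean(pre_intervention)
gm_post = geometric_mean(post_intervention)
relative_reduction = 1 - gm_post / gm_pre  # fraction by which the GM fell
```

On the study's reported figures, a fall from 0.74 to 0.44 mg/m3 corresponds to a relative reduction of roughly 40% in the geometric mean concentration.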

The article by Baatjies et al4 demonstrates some of the considerations in interpreting similar intervention effectiveness research. One is the impact on health outcomes. It is unclear whether the incidence of sensitisation to allergens will decrease to acceptable levels even with the substantial exposure reduction: the American Conference of Governmental Industrial Hygienists’ time-weighted average (TWA) Threshold Limit Value for flour dust5 is 0.5 mg/m3, and many of the bakery workers would be exposed above this concentration given the average postintervention exposures shown in tables 1 and 3 of Baatjies et al.4 Peak exposures are also of concern as they may lead to sensitisation even if TWA concentrations are below effective TWA occupational exposure limits. Peak exposures were not investigated by Baatjies et al4 but are to be expected in bakeries. Although the effect of peak exposures is unclear in this setting,6 the impact of the ‘substantial’ reduction achieved will only become clearer once further studies to investigate disease burden in these bakers have been completed. Sustainability of hazard control over time is another consideration. Unless interventions are adopted by the study workplaces without dependence on research resources and motivation, effectiveness is likely to decline on completion of the project.

The second study, the Thibela TB Study,7 is probably the largest workplace-based intervention study conducted in South Africa. Gold miners have very high tuberculosis rates, of the order of 3000/100 000 miners in the late 2000s,7 and the study aimed to reduce tuberculosis transmission by identifying and treating active disease (the entire workforce was screened for active tuberculosis), and by providing at least 6 months of community-wide isoniazid preventive therapy (IPT) for latent tuberculosis. Community-wide in this context meant offering IPT to all the gold miners in the intervention group, provided they did not have active tuberculosis, without prior determination of latent tuberculosis status. The design was a cluster-randomised trial, and 78 744 miners within 15 clusters constituted the study population, including eight intervention clusters with 40 981 miners. Cluster-level incidence of tuberculosis was measured during the 12 months after the intervention ended. Unfortunately, the intervention did not reduce the incidence of tuberculosis (rate ratio=1; 95% CI 0.75 to 1.34). The authors concluded that “Mass screening and treatment of latent tuberculosis had no significant effect on tuberculosis control in South African gold mines…” The Thibela study findings are a salutary demonstration of the importance of evaluating an occupational intervention that was expected to work, but did not.
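For readers less familiar with trial summary measures: a rate ratio near 1 with a confidence interval spanning 1 indicates no detectable effect of the intervention. A minimal sketch of how an incidence rate ratio and an approximate log-scale interval are computed, using invented counts rather than the Thibela data (note that a genuine cluster-randomised analysis must additionally account for within-cluster correlation, which widens the interval):

```python
import math

def rate_ratio_ci(cases_int, py_int, cases_ctl, py_ctl, z=1.96):
    """Incidence rate ratio (intervention vs control) with an
    approximate 95% CI on the log scale (Wald-type). Person-time
    denominators (py_*) are in person-years. This individual-level
    formula ignores clustering and so understates uncertainty in a
    cluster-randomised trial."""
    rr = (cases_int / py_int) / (cases_ctl / py_ctl)
    se_log_rr = math.sqrt(1 / cases_int + 1 / cases_ctl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Invented numbers for illustration only
rr, lower, upper = rate_ratio_ci(cases_int=120, py_int=4000,
                                 cases_ctl=118, py_ctl=3950)
```

When, as in Thibela, the interval (0.75 to 1.34) comfortably straddles 1, the data are consistent with anything from a modest benefit to a modest harm, and no effect can be claimed.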

The third example concerned the evaluation of an HIV/AIDS peer education programme in a South African company employing about 18 761 people.8 Peer-education, that is education by designated and trained co-workers, is a widespread practice in workplaces in the region to provide education on HIV/AIDS and to support risk-reducing behavioural change in groups of workers. Formal evaluation of the effectiveness of these programmes is scant. The researchers, Sloan and Myers, evaluated the impact of peer educators on factors such as HIV/AIDS knowledge, attitudes towards those infected with HIV and condom use. After controlling for potential confounders and effect-modifying variables, the authors found that very little of the variance in any of the outcome measures was explained by peer education. Sloan and Myers concluded that “The HIV peer-education programme was found to be ineffective and contributed little to improved knowledge…and may have involved opportunity costs…” The study setting and operational aspects of the programme are factors to consider in generalising the findings to other workplaces: other studies have found more positive outcomes,9 but the Sloan and Myers study certainly gives pause for thought.

In conclusion, much is done in workplaces to control hazards and to improve health, but far too little of it is measured for effectiveness. We should strive to gain a better understanding of what works and what does not if resources are to be used to their best advantage.



  • Competing interests The contribution of JIP is based on research supported by the National Research Foundation.

  • Provenance and peer review Commissioned; internally peer reviewed.
