IARC-NCI workshop on an epidemiological toolkit to assess biases in human cancer studies for hazard identification: beyond the algorithm
  1. Mary K Schubauer-Berigan1,
  2. David B Richardson2,
  3. Matthew P Fox3,
  4. Lin Fritschi4,
  5. Irina Guseva Canu5,
  6. Neil Pearce6,
  7. Leslie Stayner1,
  8. Amy Berrington de Gonzalez7,8
  1. 1Evidence Synthesis and Classification Branch, International Agency for Research on Cancer, Lyon, France
  2. 2Department of Environmental and Occupational Health, Program in Public Health, Susan and Henry Samueli College of Health Sciences, University of California Irvine, Irvine, California, USA
  3. 3Department of Epidemiology and Department of Global Health, Boston University, Boston, Massachusetts, USA
  4. 4School of Public Health, Curtin University, Perth, Western Australia, Australia
  5. 5Unisanté, University of Lausanne, Lausanne, Switzerland
  6. 6Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
  7. 7Division of Cancer Genetics and Epidemiology, The Institute of Cancer Research, London, UK
  8. 8Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, Maryland, USA
  1. Correspondence to Dr Mary K Schubauer-Berigan, Evidence Synthesis and Classification Branch, International Agency for Research on Cancer, Lyon, France; BeriganM{at}iarc.fr

The Monographs programme of the International Agency for Research on Cancer (IARC) has, for more than 50 years, convened expert Working Groups to evaluate evidence regarding preventable causes of human cancer. Working Groups have evaluated more than a thousand agents, including chemicals, physical and biological agents, pharmaceuticals and nutritional factors, individual behaviours, complex mixtures and occupational exposure circumstances. Each agent was selected for evaluation on the basis of some evidence of human exposure and a suspicion of carcinogenicity.1 Evidence considered in Monographs evaluations comprises studies of cancer in humans (usually observational cohort and case–control studies), experimental cancer bioassays and mechanistic studies. Findings from occupational cancer studies have crucially informed Monographs evaluations since the first volume.2

Monographs Working Groups have always recognised that cancer epidemiology studies are subject to potential biases that must be carefully considered before interpreting associations as causal. The Preamble to the IARC Monographs guides the Working Group in conducting its carcinogenicity reviews.1 Since 1983,3 the Preamble has used the phrase ‘chance, bias, and confounding’ to encapsulate challenges in interpreting human cancer evidence. Working Groups have weighed these factors when rigorously evaluating whether there is “sufficient” or “limited” evidence regarding an agent’s ability to cause cancer in humans. For “sufficient” evidence, a causal interpretation is reached, in that a positive association is seen in the body of evidence and chance, bias, and confounding can be ruled out, with reasonable confidence, as an explanation for the findings. For “limited” evidence, a causal interpretation is credible, but chance, bias, and/or confounding cannot be ruled out with reasonable confidence. A conclusion of “inadequate” evidence or of “evidence suggesting lack of carcinogenicity” may also be reached. Expert Working Groups explicitly describe whether, and how, they were able to rule out these sources of error with reasonable confidence, as seen in recent evaluations of night shift work,4 opium consumption5 and aniline.6

Many approaches to evaluating the direction and magnitude of biases in observational epidemiology have been described in the scientific literature, some new,7 and some in use for decades.8 To make these methodological developments more easily accessible to those conducting cancer hazard identification, the Monographs programme and the National Cancer Institute’s Division of Cancer Epidemiology and Genetics convened a scientific workshop of global experts in cancer epidemiology and in statistical and epidemiological methodology. The primary purpose of the workshop was to discuss and summarise statistical and epidemiological developments relevant to the assessment of bias, including its direction and magnitude, in observational epidemiology studies. Our goal for a forthcoming scientific publication based on the output of the workshop is to provide a toolkit of bias assessment methods, presented in such a way that they can be used during a review process by epidemiologists without extensive statistical training and by statisticians without extensive epidemiological training, as well as by primary investigators in their own work. We will also illustrate the application of these methods to cancer hazard identification, in which the main goal is to assess the strength of evidence for or against a causal interpretation, as distinct from a full risk assessment, in which the main interest is to estimate a specific numerical causal effect per unit of exposure.
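
As a concrete, deliberately hedged illustration of the kind of tool such a toolkit could contain (not taken from the workshop or the forthcoming volume), the minimal Python sketch below applies two widely used approaches to unmeasured confounding: external adjustment with an assumed bias factor, and the E-value. All parameter values are hypothetical.

```python
import math

def confounding_bias_factor(rr_cd: float, p_exposed: float, p_unexposed: float) -> float:
    """Bias factor for an unmeasured binary confounder (external adjustment).

    rr_cd       -- assumed confounder-disease risk ratio
    p_exposed   -- assumed prevalence of the confounder among the exposed
    p_unexposed -- assumed prevalence of the confounder among the unexposed
    """
    return (p_exposed * (rr_cd - 1) + 1) / (p_unexposed * (rr_cd - 1) + 1)

def e_value(rr_observed: float) -> float:
    """Minimum strength of association (on the risk-ratio scale) that an unmeasured
    confounder would need with both exposure and outcome to fully explain the
    observed risk ratio."""
    rr = max(rr_observed, 1 / rr_observed)  # handles protective estimates
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical example: observed RR = 1.5; suppose an unmeasured confounder with
# RR_cd = 2.5 is 40% prevalent among the exposed and 25% among the unexposed.
rr_obs = 1.5
bf = confounding_bias_factor(rr_cd=2.5, p_exposed=0.40, p_unexposed=0.25)
print(f"bias factor = {bf:.2f}; externally adjusted RR = {rr_obs / bf:.2f}")
print(f"E-value for RR = {rr_obs}: {e_value(rr_obs):.2f}")
```

Under these assumed values, the adjusted risk ratio remains above 1, and the E-value indicates how strong an unmeasured confounder would need to be to fully explain the observed association.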

In October 2022, 37 scientists from 12 countries met in Lyon, France, to discuss bias assessment methods and their potential application in evidence synthesis for cancer hazard identification. Using as examples four agents recently evaluated by the IARC Monographs programme (radiofrequency electromagnetic field radiation, night shift work, red meat consumption and opium consumption), workshop attendees demonstrated how these methods can be applied to support cancer hazard identification. Workshop participants focused on methods that can be employed using the information available in published reports typical of those available during a Monographs review process. Topics were organised around themes, starting with graphical approaches (eg, directed acyclic graphs) to set a framework for the major bias considerations for a given exposure–outcome pair. This led to advice on approaches and methods to evaluate the impacts of unmeasured or residual confounding, information bias (ie, biases due to mismeasurement or misclassification of exposure or outcome) and selection bias in each individual study. Workshop attendees discussed possible approaches to incorporating assessments of the direction and magnitude of these biases into overall evidence synthesis. Finally, attendees outlined approaches that researchers could adopt to assess the impacts of each source of bias in their own studies.
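
To make the information bias theme concrete, the sketch below illustrates one standard simple bias analysis: back-correcting a case–control 2×2 table for nondifferential exposure misclassification under assumed sensitivity and specificity. The counts and classification parameters are hypothetical, and the example is illustrative only, not an analysis carried out at the workshop.

```python
def corrected_exposed_count(observed_exposed: float, total: float,
                            sensitivity: float, specificity: float) -> float:
    """Back-calculate the expected true number exposed in a group, given the
    observed exposed count and assumed sensitivity/specificity of classification."""
    return (observed_exposed - total * (1 - specificity)) / (sensitivity + specificity - 1)

def misclassification_adjusted_or(a: float, b: float, c: float, d: float,
                                  se: float, sp: float) -> float:
    """Simple bias analysis for nondifferential exposure misclassification in a
    case-control 2x2 table (a, b = exposed/unexposed cases; c, d = exposed/unexposed
    controls); the same se/sp are applied to cases and controls."""
    a_true = corrected_exposed_count(a, a + b, se, sp)
    c_true = corrected_exposed_count(c, c + d, se, sp)
    b_true, d_true = (a + b) - a_true, (c + d) - c_true
    return (a_true * d_true) / (b_true * c_true)

# Hypothetical table: observed OR = (120*580)/(380*80), about 2.29
adjusted = misclassification_adjusted_or(a=120, b=380, c=80, d=580, se=0.85, sp=0.95)
print(f"misclassification-adjusted OR = {adjusted:.2f}")  # larger than the observed OR
```

As expected for nondifferential misclassification of a binary exposure, the bias-adjusted odds ratio (about 3.2 here) lies further from the null than the observed value, illustrating how such calculations convey both the direction and the approximate magnitude of a bias.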

The workshop materials will result in the publication of a new volume in the IARC Scientific Publications series Statistical Methods in Cancer Research, which includes the landmark publications on cohort and case–control studies authored by Breslow and Day.9 10 The new Scientific Publication, expected in early 2024, will summarise methods for bias assessment to support cancer hazard identification, illustrate these methods with examples and discuss how these methods could also be incorporated into future published studies to better inform cancer hazard and risk assessments.

We think the approaches described above for explicitly and quantitatively considering the impacts of bias in the cohort and case–control studies that largely make up the evidence base for cancer hazard identification are superior to algorithmic approaches11 12 that emphasise ‘risk’ or potential for bias rather than the direction and magnitude of biases. Such algorithmic approaches, focused on ‘scoring’ of individual studies, often result in the exclusion of many informative studies from the final assessment. There is growing awareness of the useful information contributed to evidence syntheses by studies with different biases, particularly where studies are likely to have biases in different directions, and by studies in which potential sources of bias have limited impact on the results. This information should be leveraged via triangulation, sensitivity analyses, stratified meta-analyses and other methods that consider and contrast evidence across studies rather than simply excluding individual studies that do not score well using preassigned algorithms.12 Providing tools that experts can use to evaluate the likely direction and magnitude of biases in the published literature should enhance the transparency and robustness of causal conclusions. Adoption of such tools by researchers in discussing their own research findings should also improve the literature on carcinogenic effects of preventable exposures. Furthermore, the approaches outlined in the forthcoming volume should have wide applicability to outcomes beyond cancer, in particular to other chronic diseases and to biomarker studies of precancerous endpoints.
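
As one simple illustration of contrasting rather than excluding studies, the sketch below pools hypothetical study results within strata defined by the expected direction of each study's dominant bias, using a basic DerSimonian-Laird random-effects model. The groupings, estimates and standard errors are invented for illustration; this is not an analysis from the workshop.

```python
import math

def random_effects_pool(log_rrs, ses):
    """DerSimonian-Laird random-effects pooling of log risk ratios."""
    w = [1 / s ** 2 for s in ses]
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rrs) - 1)) / c)  # between-study variance
    w_star = [1 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se_pooled),
                              math.exp(pooled + 1.96 * se_pooled))

# Hypothetical studies grouped by the expected direction of their dominant bias
strata = {
    "likely biased toward the null":    ([math.log(x) for x in (1.2, 1.3, 1.1)], [0.10, 0.15, 0.20]),
    "likely biased away from the null": ([math.log(x) for x in (1.6, 1.8)],      [0.25, 0.30]),
}
for label, (log_rrs, ses) in strata.items():
    rr, (lo, hi) = random_effects_pool(log_rrs, ses)
    print(f"{label}: pooled RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Comparing the stratum-specific pooled estimates in this way preserves the information carried by studies with different likely biases, which a scoring-and-exclusion algorithm would discard.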

Acknowledgments

We thank the following participants in the IARC-NCI Workshop: Onyebuchi Arah (unable to attend), Laura Beane Freeman, Terry Boyle, Sadie Costello, Nathan DeBono, Veronika Deffner, Pietro Ferrari, Laurence S Freedman (attended remotely), Sander Greenland (unable to attend), Jay Kaufman, Alex Keil, Kaitlin Kelly-Reif, Manolis Kogevinas, Hans Kromhout, Deborah Lawlor, Sarah Lewis, Ruth Lunn, Brigid Lynch, Richard MacLehose, Ellen Aagaard Nøhr, Marie-Elise Parent, Lorenzo Richiardi, Rodolfo Saracci, David Savitz, Pamela Shaw, Kyle Steenland, Sonja Swanson, Eric Tchetgen Tchetgen, Vivian Viallon, Roland Wedekind, Scott Weichenthal.

References

Footnotes

  • Twitter @MaryBerigan

  • Contributors MKS-B, DBR and ABdG conceived of, planned and co-chaired the workshop and led the development of themes related to use of epidemiological data in cancer hazard identification (MKS-B), confounding (DBR) and incorporation of bias appraisal in evidence synthesis (ABdG). They also drafted the manuscript and are responsible for the overall content as guarantors. MPF, LF, IGC, NP and LS contributed to the planning of the workshop, led the development of themes related to graphical approaches to bias assessment (MPF), selection bias (NP), information bias (LS and NP), and approaches for researchers to incorporate bias appraisal into their primary research (LF and IGC), and contributed to the writing of the manuscript. All authors have read and approved of the final manuscript.

  • Funding This work was supported by the International Agency for Research on Cancer (IARC) and by the National Cancer Institute’s Intramural Research Program, Division of Cancer Epidemiology and Genetics.

  • Disclaimer Where authors are identified as personnel of the International Agency for Research on Cancer / WHO, the authors alone are responsible for the views expressed in this article and they do not necessarily represent the decisions, policy or views of the International Agency for Research on Cancer / WHO.

  • Competing interests LS has provided expert opinion on behalf of plaintiffs in relation to litigation in cases involving exposure to ethylene oxide and asbestos.

  • Provenance and peer review Commissioned; internally peer reviewed.