Dear Editor
Dr Loomis draws attention to the potential dangers of the rigid use
of checklists and guidelines to judge occupational and environmental
research. I agree with these sentiments, in particular the concerns about
the increasing number of papers that use compliance with these guidelines
as a justification for conclusions regarding causality. There is, however,
one rapidly expanding area of research that would benefit from the
development of minimum standards for presentation of results. This is the
field of epidemiological meta-analysis, in which data are generally
abstracted from published papers. Difficulties can arise in deriving a
common set of definitions for variables. For example, in a meta-analysis
of oral contraceptive use and breast cancer risk,[1] 42 different
categorisations of duration of oral contraceptive use were published in
the 24 papers analysed for this variable. Debate within the scientific
community is needed to decide which categorisations are most useful.
Editors could then encourage authors either to use these in their papers
or at least be prepared to make them available on request.
Reference
1. Rushton L, Jones DR. Oral contraceptive use and breast cancer
risk: a meta-analysis of variations with age at diagnosis, parity and
total duration of oral contraceptive use. Br J Obstet Gynaecol 1992;99:239-246.
Editor,
Rushton's recent article on the reporting of occupational and environmental research raises a number of useful points that all researchers would do well to remember when writing up epidemiological findings for publication. Without expressly intending to do so, however, the article also emphasizes the hazards of establishing formal criteria or
checklists for the evaluation of scientific work. Good epidemiological practices certainly exist, but one of the pitfalls inherent in attempts to codify them is that, by their nature, lists of the features of "good"
research tend to impose a "one size fits all" standard, which, like clothing of the same description, fits nothing particularly well.
The prospect of developing formal guidelines for reporting analyses based on multivariable models illustrates the difficulties. Science involves many kinds of activities, but the significant advances come about through the creative application of human intellect, rather than by rote
repetition of the familiar. Like other aspects of science, epidemiological data analysis blends attention to factual detail with creativity, intuition, judgement and even aesthetics. From the initial choice of model
form to the final specification of covariates and interaction terms, there may be many reasonable ways to model a given data set. Researchers should
be at liberty to analyze their data according to their individual scientific insights. In subsequent evaluations of methods and results, reviewers likewise should be encouraged to apply their scientific judgement, rather than following a recipe.
The opportunity cost involved in demonstrating compliance with guidelines for good practice may also be considerable, as Rushton suggests. Between the growing fear of litigation and mounting demands for accountability, especially in the United States, epidemiologists may
soon spend more time documenting adherence to protocol than doing science.
My particular fear, however, is that guidelines will be used to assail sound research on the grounds that it fails to comply with supposed standards of good science. The misuse of Bradford Hill's ideas about causality illustrates the danger. Hill intended his suggestions as an aid
to researchers, not as evaluative standards for critics; he wrote: "I do not believe ... that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my nine viewpoints ... can be required as a sine qua non. What they can do, with greater or less strength, is help us to make up our minds on the fundamental question."[1] Yet Hill's ideas are frequently presented as criteria that must be fulfilled for a study's evidence to be accepted.[2]
The involvement of such obviously self-interested groups as the Chemical Manufacturers Association in promoting "good epidemiological practices" makes the potential misuse of guidelines to suppress good research seem all too likely.
I do not mean to suggest that all epidemiological research should be published or accepted at face value; far from it. There will always be a need for review to ensure the quality of published work and to protect the public from policies based on unsound science. I am convinced, however,
that peer review, coupled with the opportunity for criticism and debate in the open literature, provides the best pathway to this goal. In contrast with standardized criteria, these processes allow multiple independent readers' perspectives on both the methodological quality and the substantive importance of research to be heard. As a result, they reduce
the chances that unconventional but valuable views will be suppressed or that an interested group could gain control over the process for its own purposes.
References
1. Hill AB. The environment and disease: association or causation? Proc R Soc Med 1965;58:295-300.
2. Gamble JF. PM2.5 and mortality in long-term prospective cohort studies: cause-effect or statistical association? Environ Health Perspect 1998;106:535-549.