Rushton's recent article1 on the reporting of occupational and environmental research raises several useful points that all researchers would do well to remember when writing up epidemiological findings for publication. Without expressly intending to do so, however, the article also emphasises the hazards of establishing formal criteria or checklists for the evaluation of scientific work. Good epidemiological practices certainly exist, but one of the pitfalls inherent in attempts to codify them is that, by their nature, lists of the features of “good” research tend to impose a “one size fits all” standard, which—like clothing of the same description—fits nothing particularly well.
The prospect of developing formal guidelines for reporting analyses based on multivariable models illustrates the difficulties. Science involves many kinds of activities, but the significant advances come about through the creative application of human intellect, rather than by rote repetition of the familiar. Like other aspects of science, epidemiological data analysis blends attention to factual detail with creativity, intuition, judgement, and even aesthetics. From the initial choice of model form to the final specification of covariates and interaction terms, there may be many reasonable ways to model a given data set. Researchers should be at liberty to analyse their data according to their individual scientific insights. In subsequent evaluations of methods and results, reviewers likewise should be encouraged to apply their scientific judgement, rather than following a recipe.
The opportunity cost involved in complying with guidelines for good practice may also be considerable, as Rushton suggests.1 Between the growing fear of litigation and mounting demands for accountability, especially in the United States, epidemiologists may soon spend more time documenting adherence to protocol than doing science.
My particular fear, however, is that guidelines will be used to assail sound research on the grounds that it fails to comply with supposed standards of good science. The misuse of Hill's ideas about causality illustrates the danger. Hill intended his suggestions as an aid to researchers, not as evaluative standards for critics; he wrote: "I do not believe ... that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my nine viewpoints ... can be required as a sine qua non. What they can do, with greater or less strength, is help us to make up our minds on the fundamental question."2 Yet Hill's ideas are often presented as criteria that must be fulfilled for a study's evidence to be accepted.3 The involvement of such obviously self-interested groups as the Chemical Manufacturers Association in promoting "good epidemiological practices" makes the potential misuse of guidelines to suppress good research seem all too likely.
I do not mean to suggest that all epidemiological research should be published or accepted at face value; far from it. There will always be a need for review to ensure the quality of published work and to protect the public from policies based on unsound science. I am convinced, however, that peer review, coupled with the opportunity for criticism and debate in open publications, provides the best pathway to this goal. By contrast with standardised criteria, these processes allow multiple independent readers' perspectives on the methodological quality and substantive importance of research to be heard. As a result, they reduce the chances that unconventional but valuable views will be suppressed or that an interested group could gain control over the process for its own purposes.
L Rushton replies
Loomis draws attention to the potential dangers of the rigid use of checklists and guidelines to judge occupational and environmental research. I agree with these sentiments, in particular the concerns about the increasing number of papers that use compliance with these guidelines as a justification for conclusions on causality. There is, however, one rapidly expanding area of research which would benefit from the development of minimum standards for presentation of results. This is the field of epidemiological meta-analysis, in which data are generally abstracted from published papers. Difficulties can arise in deriving a common set of definitions for variables. For example, in a meta-analysis of use of oral contraceptives and risk of breast cancer,1-1 42 different categories of duration of oral contraceptive use were reported in the 24 papers analysed for this variable. Debate within the scientific community is needed to decide which categorisations would be most useful. Editors could then encourage authors either to use these in their papers or at least to be prepared to make them available on request.