
Neurobehavioral performance among agricultural workers and pesticide applicators: a meta-analytic study
  1. A A Ismail1,2,
  2. T E Bodner3,
  3. D S Rohlman4
  1. 1Department of Family and Community Medicine, Faculty of Medicine, Jazan University, Gizan, Saudi Arabia
  2. 2Public Health and Community Medicine Department, Faculty of Medicine, Menoufia University, Shebin Elkom, Egypt
  3. 3Department of Psychology, Cramer Hall, Portland State University, Portland, Oregon, USA
  4. 4Center for Research on Occupational and Environmental Toxicology, Oregon Health and Sciences University, Portland, Oregon, USA
  1. Correspondence to Dr Ahmed Ismail, Family and Community Medicine Department, Faculty of Medicine, Jazan University, Gizan 114, Saudi Arabia; aa-ismail{at}


Chronic low-level exposure of agricultural workers and applicators to pesticides has been found to be associated with different degrees of decrement in cognitive and psychomotor functions. The goal of this study was to use meta-analysis to (1) identify and quantify neurobehavioral deficits among agricultural workers and pesticide applicators, and (2) analyse the potential confounders or moderators of these neurobehavioral deficits. Seventeen studies, reporting on 21 independent cohort groups, were included in the meta-analysis. These studies involved 16 neuropsychological tests providing 23 different performance measures that constitute the neurobehavioral constructs. All tests and measures of the neurobehavioral functions of attention, visuomotor integration, verbal abstraction and perception constructs showed significant decrements for exposed participants. One out of three tests of memory, two of five tests of sustained attention, and four of eight tests of motor speed constructs also showed significant decrements. Nine out of these 15 effect size distributions demonstrated significant heterogeneity across cohorts. A search for cohort-level variables (eg, agricultural workers vs applicators, duration of exposure, age and percentage of male participants) to explain this heterogeneity was largely unsuccessful. However, for one test, Block Design, the duration of exposure was positively associated with performance decrements. Furthermore, it was also found that performance decrements on this test were smaller for older participants. Additional studies, using more consistent methodologies in field settings, are needed.

  • Neurobehavioral
  • pesticides
  • meta-analysis
  • neurobehavioural effects
  • epidemiology
  • occupational health practice
  • public health
  • exposure monitoring


What this paper adds

  • Studies examining chronic low-level pesticide exposure have found deficits associated with cognitive and psychomotor performance; however, these findings are inconsistent across studies.

  • This meta-analysis shows that all tests and measures of attention, visuomotor integration, verbal abstraction and perception showed significant decrements in neurobehavioral performance in the pesticide exposed group compared to the control group.

  • As the duration of exposure increases, the negative effects of pesticide exposure on Block Design scores increase, with decrements being higher among younger participants.

  • Given such significant neurobehavioral deficits among farmworkers and pesticide applicators, effective methods of minimising pesticide exposure among these workers should be implemented.


Neurobehavioral effects from exposure to organophosphate pesticides (OPs) in agricultural workers and pesticide applicators have been studied for several decades, and impaired health or deficits in neurobehavioral performance have been demonstrated.1 Studies examining high-dose acute poisoning2 3 and studies examining chronic exposure to lower levels have both reported deficits associated with cognitive and psychomotor performance.4–14 However, these findings are inconsistent across studies.10 15 Although similar measures and testing instruments were used in different studies, comparable results were not always found (table 1). Several studies have reported poor performance on measures that evaluate both cognitive and psychomotor functions,2 8 9 16 whereas others have demonstrated performance decrements only on measures that evaluate cognitive but not psychomotor functions,5 while still others have reported decrements in measures of psychomotor but not cognitive functions.6 20 Reports of no neurobehavioral deficits associated with pesticide exposure are also available.21

Table 1

Summary of findings from four neurobehavioral tests: Symbol Digit, Digit Symbol, simple Reaction Time and Finger Tapping

Several factors may explain the inconsistencies in neurobehavioral outcomes reported in these studies. First, variation in the measures and test instruments across the studies is an important factor. Methods include traditional paper-and-pencil and non-computerised tests,7 8 16 18 19 in addition to technology-driven computerised test batteries such as the Neurobehavioral Evaluation System (NES)7 21 22 and the Behavioral Assessment and Research System (BARS).9 11 12 17 20 Several studies also applied a combination of both assessment methods, for example, the WHO Neurobehavioral Core Test Battery (NCTB).4 5 The design of the study is a second factor that may contribute to the inconsistencies across studies. A cross-sectional design was used in the majority of studies,1 4 5 7–13 16–18 21 22 whereas only one study reported the use of a prospective design.6 The methodologies of the studies also differ in terms of sensitivity (precision) and accuracy (risk of confounding).23 Finally, the exposures among the cohorts varied across studies. Several studies have examined agricultural workers exposed as a result of working in areas where pesticides are applied,1 5 7–9 11–14 17 other studies have evaluated pesticide applicators who are exposed while mixing or applying pesticides,16 18 21 22 and one study examined engineers and mechanics exposed as a result of supervision during the application of pesticides or maintenance of the application equipment.7

The goals of this review are to examine and quantify the effect of chronic, low-level pesticide exposure in agricultural workers on specific functions of neurobehavioral performance (eg, memory, attention, motor speed) through meta-analysis. In addition, the impact of potential confounders or modifiers of these neurobehavioral effects (eg, assessment methods, demographics, job category) will be examined.


Literature search

Studies examining neurobehavioral health effects resulting from occupational pesticide exposure among agricultural workers, pesticide applicators and other related jobs were identified through a comprehensive literature search. A Medline/PubMed search (1966–December 2010) was conducted to obtain relevant journal articles using the following keywords: neurobehavioral, neuropsychological, memory, visual memory, recall and recognition memory, attention, sustained attention, divided attention, concentration, vigilance, visuomemory, visuospatial, cognitive, verbal, psychomotor, problem solving, response speed, coordination, hand-eye coordination, coding, complex functioning, motivation, learning, dexterity, perception, expressive language, WAIS-R, WISC-III, adolescent(s), child, children, adult, applicators, sprayers, pest control, workers, farmworkers, farmers, applying, agriculture, working, pesticides, organophosphates (OPs), insecticide, cotton fields, AChE inhibiting insecticide, outcomes, evaluation, effects, impact, assessment. The reference lists of the articles were also reviewed for additional relevant articles.

Inclusion and exclusion criteria

The literature search yielded a total of 23 studies (1977–2008).1 4–9 11–14 16–19 21 22 24–29 Final selection of the studies to be included in the meta-analysis was based on three criteria. First, there had to be opportunity for exposure to OPs as a result of working in agriculture or applying pesticides. We specifically focused on OP exposure because this is the most common exposure reported in studies examining neurobehavioral performance among agricultural workers and pesticide applicators and we aimed to narrow our focus to a class of pesticides with similar mechanisms of action in the nervous system. Second, the study needed to address chronic low-level exposure without previous acute symptoms or poisoning. Finally, the study needed to include common neurobehavioral tests or measures of cognitive or motor functions that had been used in at least three studies. We also considered the availability of sufficient quantitative information reported in the article (eg, means, raw or adjusted, as well as standard deviations) so that effect sizes could be calculated.

Six studies were excluded from the meta-analysis for several reasons. In one study neuropsychological results were reported as a collective score of whole cognitive dysfunction and not as separate individual test results.24 Two studies utilised a within-subjects design that is different from our approach in this paper (comparison of between-subject cohorts).26 28 Three other studies were excluded as they presented insufficient information to calculate the effect sizes.25 27 29


A total of 17 studies remained that met the inclusion criteria. One of the authors coded the studies on the dimensions discussed below; the coding was subsequently reviewed by the other two authors.

Several studies reported data from more than one cohort. Within a study, data from different cohorts could be presented separately due to the use of different test batteries or methods, different demographic characteristics of the cohorts or different job categories. For example, the study of Abdel Rasoul et al16 presented data from two cohorts, one with children below 15 years of age and the other with children 15 years of age and above. The cohorts were presented separately because age appropriate versions of the test battery were used for the different cohorts. Daniel et al6 also formed two cohorts based on the languages used in the test battery (Spanish or English). Job category (applicators and agricultural workers) and age (adolescents and adults) were also used to separate cohorts in the studies by Cole et al5 and Rohlman et al,12 respectively. Although these studies may have aggregated the cohorts within their analyses, we used the raw data to separate them into distinct cohorts for the meta-analysis.

In the meta-analysis the cohorts were coded on the basis of the age of the participants (ie, adults over 18 years of age and adolescents 18 years of age or younger). Coding also took job category into account (ie, applicators included pesticide mixers and formulators and agricultural workers included both farmworkers and greenhouse workers). Cohorts were also coded for the methods used to assess neurobehavioral performance. The studies used either traditional non-computerised neuropsychological tests or batteries (eg, the NCTB, Halstead-Reitan Battery, Wechsler Adult Intelligence Scale and Wechsler Intelligence Scale for Children) or computerised test batteries (eg, the BARS and NES).

Tests of neurobehavioral performance were categorised into seven constructs according to standard neuropsychological classification30: memory, attention, sustained attention, motor speed, visuomotor integration, verbal ability and perception. If a specific test was used by three or more cohorts, it was included in our current analysis. The majority of the tests have only one primary outcome measure (eg, Match to Sample, Serial Digit Learning, Benton Visual Retention Test (BVRT), Pursuit Aiming II, Symbol Digit latency, Santa Ana Pegboard preferred hand, Reaction Time latency, Digit Symbol, Similarities, and Block Design). However, several tests have more than one outcome measure (eg, Digit Span: forward and backward scores; Continuous Performance: number of trials and hit latency; Selective Attention: number of trials, inter-stimulus interval and latency; Trail Making: A and B). See online supplementary appendix A for the definitions of these tests. Effect sizes from each of these outcome measures were included in the analysis.

Meta-analytic procedures

Effect sizes were defined and calculated as the standardised mean difference, which is obtained by dividing the difference between means (ie, on a given test across the exposed and non-exposed groups) by the within-group standard deviation of scores for that test. Since higher scores on a measure could either reflect better or worse performance (eg, higher latency scores indicate lower/worse performance), we adopted a coding convention where negative effect sizes for the various performance measures indicate poorer performance in the exposed group compared to the controls. Effect sizes for every measure were calculated for the cohorts within studies providing data for that neurobehavioral measure.
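The calculation described above can be sketched in code. The following Python implementation is illustrative only (the function name and the pooled-SD formula are our assumptions, not the authors' code); it computes a standardised mean difference using the paper's sign convention:

```python
import math

def standardized_mean_difference(mean_exposed, mean_control, sd_exposed, sd_control,
                                 n_exposed, n_control, higher_is_worse=False):
    """Standardised mean difference: (exposed mean - control mean) divided by
    the pooled within-group SD.  With the sign convention adopted in the paper,
    negative values indicate poorer performance in the exposed group."""
    pooled_sd = math.sqrt(((n_exposed - 1) * sd_exposed ** 2 +
                           (n_control - 1) * sd_control ** 2) /
                          (n_exposed + n_control - 2))
    d = (mean_exposed - mean_control) / pooled_sd
    # Flip the sign for latency-type measures where higher scores mean worse performance
    return -d if higher_is_worse else d
```

For a measure where higher scores indicate worse performance (eg, a reaction-time latency), setting `higher_is_worse=True` flips the sign so that a negative value still denotes poorer performance in the exposed group.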

The meta-analytic procedure followed model-based meta-analytic techniques for testing effect size centrality, homogeneity and moderation.31 Fixed effects model analyses were conducted. For significantly heterogeneous effect size distributions, a qualitative search for confounders or moderators was conducted in order to explain the divergence. In this search, all available confounders reported in the included articles were examined for homogeneity inside each category of that confounder. Tests of effect size centrality were conducted using Z tests to evaluate the null hypothesis that the population mean effect size was equal to zero.32 Tests of effect size homogeneity were conducted using the Q test to evaluate the null hypothesis that the variance in population effect sizes was equal to zero (ie, the homogeneity assumption).32 The meta-analytic analogue to an analysis of variance test, estimated using the method of maximum likelihood, was used to examine the categorical moderator variables (eg, job category, tool used for neurobehavioral performance evaluation), evaluating the null hypothesis that the population mean effect sizes were equal across all levels of the categorical moderator variables. The meta-analytic analogue to regression analysis was used to examine the continuous moderator variables (eg, duration of exposure, percentage of males in the exposed group), evaluating the null hypothesis that the population mean effect sizes were the same despite change in the level of the predictors.32
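The centrality (Z) and homogeneity (Q) tests described above can be sketched as standard inverse-variance-weighted computations. This Python implementation is illustrative, not the software used in the study:

```python
import math

def fixed_effect_meta(effect_sizes, variances):
    """Fixed-effect pooled mean, Z statistic and Q homogeneity statistic.
    Each cohort is weighted by the inverse of its effect-size variance."""
    weights = [1.0 / v for v in variances]
    w_sum = sum(weights)
    mean_es = sum(w * es for w, es in zip(weights, effect_sizes)) / w_sum
    se = math.sqrt(1.0 / w_sum)
    z = mean_es / se                      # tests H0: population mean effect = 0
    q = sum(w * (es - mean_es) ** 2       # tests H0: effect-size variance = 0
            for w, es in zip(weights, effect_sizes))
    return mean_es, z, q
```

Under the null hypothesis of homogeneity, Q follows a chi-square distribution with k−1 degrees of freedom, where k is the number of cohorts contributing to the measure.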

Publication bias was examined by calculating the fail-safe N, defined as the number of studies with non-significant results (p>0.05) that would bring a significant pooled analysis to non-significant levels. The calculations were based on Rosenthal's formula.33 Consistent with modern meta-analytic conventions, the bias in the sample estimator of a population effect size was minimised using the transformation suggested by Hedges and Olkin.31 The quality of the included studies was rated using a modified version of the Newcastle-Ottawa scale for assessing the quality of observational studies, as adopted by Jones et al34 (see online supplementary appendix B). On this scale, a study is judged on three broad perspectives: the selection of the study groups, the comparability of the groups, and the ascertainment of either the exposure or outcome of interest for case–control or cohort studies, respectively.
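The two computations referenced above, Rosenthal's fail-safe N and the Hedges and Olkin small-sample correction, can be sketched as follows. This is an illustrative Python version; the 1.645 default is the one-tailed z criterion for p=0.05:

```python
def fail_safe_n(z_values, alpha_z=1.645):
    """Rosenthal's fail-safe N: number of unpublished null studies needed to
    drop the combined result below one-tailed p = 0.05 (z = 1.645)."""
    k = len(z_values)
    z_sum = sum(z_values)
    return (z_sum / alpha_z) ** 2 - k

def hedges_correction(d, n1, n2):
    """Hedges and Olkin small-sample bias correction for the standardised
    mean difference (converting d to the less biased g)."""
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))
```

The correction factor approaches 1 as the combined sample size grows, so it matters most for the smaller cohorts in the analysis.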


Twenty-one independent cohorts (table 2, second column) were identified from the 17 studies that met the inclusion criteria as four studies reported two separate cohorts (Abdel Rasoul et al,16 Cole et al,5 Daniel et al6 and Rohlman et al12). Table 2 also describes the sample size, age of both exposed and control groups, duration of exposure, the methods used to assess neurobehavioral performance, and finally the category of the exposure separated into two main groups: applicators and farmworkers. Data from 16 neuropsychological tests were included, providing 23 measures of the different aspects of the neurobehavioral constructs (table 3).

Table 2

Summary of the 17 studies included in the meta-analysis

Table 3

Fixed effect meta-analysis results of the neurobehavioral performance measures

The results of the meta-analysis are presented in table 3 and figure 1. As it was planned a priori to examine the effect of different potential moderators on the effect sizes, the fixed effect size analysis model was used to obtain the overall mean effect sizes for each neurobehavioral performance measure. The table shows that 21 of the 23 measures indicated performance decrements in the exposed group. Significant decrement of neurobehavioral performance was found for 15 measures. The K0 column shows the number of unpublished studies reporting null results needed to reduce the cumulative effect across studies to the point of non-significance. K0 ranges from 6 to 46.

Figure 1

Fixed effect sizes of the different neurobehavioral measures.

All measures with a significant mean effect size demonstrated decrements in performance in the exposed participants compared to the control participants. Among the three tests that assessed memory function, only one test, the BVRT, showed a significant mean effect size (p<0.001). Both measures of the Digit Span Test (DST) (forward and backward), which assessed attention, demonstrated significant mean effect sizes (p<0.001). Two of the five measures assessing sustained attention showed significant overall mean effect sizes, the number of trials and inter-stimulus interval from the Selective Attention Test (p<0.05). Significant overall mean effect sizes were also found for four of the eight measures of motor speed and coordination (Santa Ana preferred hand, Pursuit Aiming correct score, Finger Tapping non-preferred hand, and Reaction Time latency) (p<0.05). All measures of visual motor processing had significant effect sizes (Symbol Digit Test latency, the non-computerised Digit Symbol score, and both Trail Making A and B) (p<0.001). Similarities, a measure of verbal abilities, and Block Design, a measure of perception, also had significant overall mean effect sizes (p<0.001).

For the nine significant neurobehavioral performance measures where there was a heterogeneous distribution of effect sizes (homogeneity p<0.05; table 3, last column), a qualitative search for confounders was conducted in order to explain the divergence. Exposure category (agricultural workers or applicators) was first examined to determine its impact on the effect size distribution using the maximum likelihood method, an analogue to an analysis of variance test. Results show significantly larger mean deficits for agricultural workers than applicators on the Digit Symbol and Trail Making A measures (p<0.05). Beyond these tasks, no other significant differences between agricultural workers and applicators were found, suggesting some generality of the magnitude of the neurobehavioral decrements due to pesticide exposure across these two groups. After controlling for being an agricultural worker or a pesticide applicator, most of the effect size distributions remain heterogeneous (homogeneity p<0.05; table 4).

Table 4

Effect sizes according to the exposure category of the exposed group in measures having significant heterogeneous effect sizes

Type of assessment method (computerised vs non-computerised) was also included as a categorical variable to differentiate the cohorts for explaining the effect size heterogeneity. The assessment methods were categorised into two categories, computerised batteries that included the NES6 21 22 or the BARS,8 11–13 17 and non-computerised test batteries that included the NCTB,4 5 14 the Halstead-Reitan Battery,1 the Wechsler Adult Intelligence Scale7 and the Wechsler Intelligence Scale for Children16 or other individual tests.8 19 26 Except for the DST, which can be administered either as part of a computerised battery or as a non-computerised test, all of the studies fell into only one category. The mean effect size of computerised Digit Span forward was −0.48 (Z=−6.40, p<0.001, Q homogeneity test p=0.03) and the mean effect size of the non-computerised version was −0.36 (Z=−4.28, p<0.001, Q homogeneity test p=0.01). However, the difference between these two mean effect sizes was not statistically significant (QBetween-groups=1.10, p=0.3).
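The QBetween-groups statistic used in this subgroup comparison can be sketched as the standard meta-analytic ANOVA analogue: pool each subgroup separately, then measure the weighted dispersion of subgroup means around the grand mean. This Python sketch is illustrative, not the authors' code:

```python
def subgroup_q_between(groups):
    """Meta-analytic ANOVA analogue: Q-between for subgroup mean effect sizes.
    `groups` maps a label (eg, 'computerised') to a (effect_sizes, variances)
    pair for the cohorts in that subgroup.  Under H0 of equal subgroup
    population means, Q-between ~ chi-square with (number of groups - 1) df."""
    group_stats = []
    for effect_sizes, variances in groups.values():
        w = [1.0 / v for v in variances]
        mean_g = sum(wi * e for wi, e in zip(w, effect_sizes)) / sum(w)
        group_stats.append((mean_g, sum(w)))  # subgroup mean and total weight
    grand = sum(m * w for m, w in group_stats) / sum(w for _, w in group_stats)
    return sum(w * (m - grand) ** 2 for m, w in group_stats)
```

A small Q-between relative to its chi-square criterion, as with the 1.10 reported above, indicates that the subgroup means do not differ significantly.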

Weighted regression analysis was applied to examine the impact of various continuous independent variables (exposure duration, age and percentage of males) on the conditional mean effect sizes of the significantly affected neurobehavioral performance outcomes. Using the maximum likelihood method, it was found that Block Design and Finger Tapping non-preferred hand measures were the only tests that were significantly associated with the above-mentioned continuous variables. As the duration of exposure increased, the expected effect size of the Block Design test decreased (B=−0.27, p=0.004), indicating greater neurobehavioral decrements with increased exposure. The results also demonstrated that the performance decrements due to exposure were smaller in older than younger participants (B=0.14, p=0.005), indicating that pesticide exposure may be more detrimental to younger than older people. The percentage of males in the exposed group also moderated the effect of pesticide exposure on Block Design (B=0.01, p=0.01) and Finger Tapping non-preferred hand (B=0.02, p=0.01). This result shows that pesticide exposure may be less detrimental to male agricultural workers than female workers. However, it is important to note that the majority of participants in these studies were male.
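The weighted regression described above can be sketched as an inverse-variance-weighted least-squares fit of cohort effect sizes on a single continuous moderator. This Python implementation is illustrative, and the function and variable names are our own:

```python
def weighted_meta_regression(effect_sizes, variances, moderator):
    """Inverse-variance-weighted simple regression of cohort effect sizes on
    one continuous moderator (eg, mean exposure duration per cohort).
    Returns the intercept and the slope B."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, moderator)) / sw
    my = sum(wi * yi for wi, yi in zip(w, effect_sizes)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, moderator))
    sxy = sum(wi * (xi - mx) * (yi - my)
              for wi, xi, yi in zip(w, moderator, effect_sizes))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope
```

A negative slope, as reported for Block Design against exposure duration, means the pooled effect size becomes more negative (ie, decrements grow) as the moderator increases.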

To explore the sensitivity of these results to the inclusion of each specific cohort included in the analyses, we reanalysed the available data by excluding one cohort at a time. The presence of four specific cohorts changed the significance of four effect sizes.9 12 20 22 Exclusion of Rohlman et al,12 cohort 17, transformed the effect size of Match to Sample from non-significant (table 3) to significant (−0.44, p=0.009). When both the studies of Fiedler et al,20 cohort 10, and Maizlish et al,22 cohort 14, were excluded one at a time, the non-significant effect size of Continuous Performance hit latency became significant (−0.2, p=0.03). Also, the non-significant mean effect size of Finger Tapping preferred hand changed to significant (−0.2, p=0.008) when the study of Rohlman et al,11 cohort 16, was excluded. In contrast, exclusion of Kamel et al,9 cohort 12, changed the significant effect size of Finger Tapping non-preferred hand to non-significant (−0.04, p=0.59). Thus in most cases, the exclusion of a particular cohort led to a change in the mean effect size from non-significant to significant.
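The leave-one-out sensitivity analysis described above can be sketched as follows. The Python code is illustrative, and the |Z| > 1.96 criterion corresponds to two-tailed p < 0.05:

```python
import math

def leave_one_out(effect_sizes, variances):
    """Recompute the fixed-effect pooled mean and its significance after
    dropping each cohort in turn, to see which cohorts drive the result."""
    results = []
    for i in range(len(effect_sizes)):
        es = effect_sizes[:i] + effect_sizes[i + 1:]
        vs = variances[:i] + variances[i + 1:]
        w = [1.0 / v for v in vs]
        mean_es = sum(wi * e for wi, e in zip(w, es)) / sum(w)
        z = mean_es * math.sqrt(sum(w))   # Z = mean / SE, SE = 1/sqrt(sum of weights)
        results.append((i, round(mean_es, 3), abs(z) > 1.96))
    return results
```

A single discrepant cohort, such as one with a positive effect size among otherwise negative ones, can flip the pooled result across the significance threshold when it is excluded.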

Assessment of the quality of these 17 studies was conducted using a modified version of the Newcastle-Ottawa scale.34 On this scale, lower quality studies receive a higher score. Ratings of study quality were entered into the meta-regression model as a covariate. Study quality was found to be a significant predictor only for BVRT, Reaction Time latency and Block Design. In each of these three cases, study quality was a positive predictor (regression coefficients of 0.12, 0.16 and 0.12 with p values of 0.02, 0.03 and 0.03, respectively), indicating that studies of poorer quality were associated with finding weaker effects of pesticide exposure on these three tests.


Our study is the first to use meta-analysis to quantify the neurobehavioral effects associated with low-level pesticide exposure among agriculture workers and pesticide applicators. Our results strengthen existing findings reported by previously published narrative reviews examining this exposure–outcome relationship.10 15 Additional information about the magnitude of the impact of chronic low-level exposure to OPs is presented. Only two studies were found that examined neurobehavioral performance among workers exposed to pesticides other than OPs: fungicides35 and DDT.36 For practical reasons, these were excluded from the current analysis.

All tests and measures of attention, visuomotor integration, verbal abstraction and perception showed significant decrements in neurobehavioral performance in the exposed group as compared to the control group. Furthermore, one out of three tests of memory, two out of five tests of sustained attention, and four out of eight tests of motor speed also showed significant decrements in performance. These results demonstrate the sensitivity and validity of these measures in identifying the neurobehavioral effects of exposure to pesticides. The significant effect sizes are small or moderately large according to Cohen's classification,37 ranging from −0.16 to −0.71.

Publication bias was assessed using the fail-safe N formula33 that calculates the number of non-significant unpublished studies that would change a significant effect size for a measure to a non-significant one. K0 ranged from 6 to 46, with the majority of measures having a K0 of 17 or higher. It is therefore unlikely that these significant effect sizes were found because journal editors were more inclined to publish studies that had reported neurobehavioral performance deficits in organophosphate exposed workers. The positive significant effect of study quality on the effect sizes for three of the tests (ie, BVRT, Reaction Time latency and Block Design) also confirmed that studies of poorer quality were associated with finding weaker effects of pesticide exposure on these tests.

Our findings are consistent with those of other studies that examined the association of clinical neurological outcomes with chronic organophosphate exposure. Some of these studies demonstrated central nervous system manifestations such as headache, fatigue, tension, irritability, insomnia, dizziness, depression, nausea, absentmindedness, difficulty concentrating, loss of appetite and poor balance,16 38 39 whereas others demonstrated peripheral nervous system manifestations such as abnormalities in the knee and ankle reflexes, coordination abnormalities, numbness, twitches in arms or legs, tremors in hands, blurred vision, and change in smell or taste.40–42

Exposure categories (agricultural workers or pesticide applicators) and test administration methods (computerised or non-computerised) were further examined to determine whether variations in these factors may have caused significant heterogeneous effect size distribution. Despite the presence of non-significant mean effect size differences between agricultural workers and applicator groups (p>0.05; table 4, last column), significant effect size heterogeneity was still observed within both farmworkers and applicators for most of the tests (homogeneity by group p<0.05; table 4). The only exceptions were for agricultural workers on Trail Making A and B and Block Design tests, and for applicators on Finger Tapping non-preferred hand, and for both groups on the Digit Symbol test (p>0.05). However, the small number of cohorts in the farmworker and applicator groups may have exacerbated or masked this heterogeneity. It is important to note that the number of cohorts did not exceed four for most of the outcome measures.

Other quantitative moderators, such as duration of exposure, age of the exposed groups and percentage of males in the exposed groups, were also examined. Only the Block Design test was negatively affected by the duration of exposure and positively affected by the age of the participants. Although many other studies report a negative relationship between duration of exposure and performance on various neurobehavioral tests,7 12 16 17 20 we did not replicate these findings with other outcome measures. This may be due to the inherent character of the meta-analysis, where the mean duration of exposure for each cohort was reported without reflecting the variation of exposure duration within each cohort. The positive prediction of Block Design by age indicates that as age increases, neurobehavioral performance decrements decrease, which is consistent with other studies examining the impact of pesticide exposure across age categories. For example, Eckerman et al demonstrated that the younger age category of 11–12 years showed greater impairment in neurobehavioral performance compared to older adolescents.17

We note that a small number of cohorts were often used to calculate effect sizes for a number of outcome measures. To test the sensitivity of our results to specific cohorts, we reran the analyses excluding one cohort at a time. For the Match to Sample test, only three cohorts were available. The effect size from the adolescent participants in Rohlman et al,12 cohort 17, was 0.22. This positive effect size indicated that exposed participants did better than control participants. After excluding this cohort from the analysis, a significant negative effect size (−0.44, p=0.009) was obtained. In the case of the Finger Tapping preferred hand outcome measure, the overall mean effect size was −0.11, with p>0.05. Even though 10 cohorts were included in the calculation of this effect size, after exclusion of Rohlman et al11 (which had an effect size of 0.45), the overall mean effect size became −0.2, with p=0.008. A contrasting situation was observed with the Finger Tapping non-preferred hand measure, which had a significant overall mean effect size of −0.16. Excluding Kamel et al9 changed this to a non-significant effect size of −0.04. Furthermore, there was significant heterogeneity in the distribution of the effect sizes for that outcome measure (table 4, last column), where they ranged from negative (−0.61; Kamel et al9), to close to zero (0.007; Rohlman et al12) to positive effect sizes (0.37; Rohlman et al11). Due to this heterogeneity in the distribution, excluding the relatively large effect size of Kamel et al9 changed the significance of the overall mean effect size. Thus, some of the effect size heterogeneity appears due to somewhat discrepant effect sizes from various cohorts. In most cases, the mean effect sizes more strongly indicate the negative effects of pesticide exposure when these effect sizes are included in the analysis.

Use of either qualitative or quantitative moderating variables for identifying the factors associated with the heterogeneity of effect sizes of the neurobehavioral outcome measures was largely unsuccessful. Our failure to find a set of moderator variables that account for all of the effect size differences across the cohorts may be explained by the variations in methods, population age groups and exposure levels across studies. For example, some studies examined adolescents,14 16 17 while others examined adults.7 9 22 The types of occupation varied remarkably from study to study. Various job tasks included students working in fields as applicators on a part time basis,17 seasonal workers who work in pesticide application during the summer,16 22 farmers working full time,1 9 sheep farmers,19 greenhouse workers,14 technicians and mechanics,7 and applicators.18 Furthermore, duration of exposure varied from a few years to decades across the cohorts.

In conclusion, chronic low-level exposure to pesticides among agricultural workers and pesticide applicators has a significant impact on neurobehavioral performance across all the neurobehavioral constructs examined. Dividing the cohorts into agricultural workers and applicators, or including the duration of exposure, age of the exposed groups and percentage of males in the regression model, largely did not help explain the differences in results across cohorts. Only for Block Design was it found that as duration of exposure increased, performance decreased. However, it was also found that the performance decrements were more pronounced among younger cohorts.

Limitations and suggestions for future research

The primary limitation of our meta-analysis is the small number of studies and cohorts. Furthermore, not all neurobehavioral measures are used in all studies and cohorts. As a result, for many measures few cohorts were available to calculate the mean effect sizes. Greater precision would be achieved by the inclusion of additional studies. The second important limitation is the inconsistencies in exposure classification among the cohorts, ranging from purely categorical classification (agricultural workers/applicators vs controls)5 8 11 18 19 to more quantitative indices of exposure level including measurement of exposure across time. This is important as studies have demonstrated that years of cumulative exposure are associated with deficits.7 9 12 14 16 20 Few studies included measures of biomarkers of exposure and/or effect, for example, urinary metabolites or blood cholinesterase activity,1 13 17 32 although these have not been reliably associated with neurobehavioral effects in studies of human occupational or environmental exposures or in chronic low-dose animal studies.43–45 The small number of studies reporting biomarker data did not allow us to include such data in the analysis.

In addition to limited information about exposure, only a few studies reported the use of personal protective equipment (PPE), which is an important determinant of exposure level. The literature suggests that PPE use is associated with reduced exposure to pesticides,35 although our analysis did not address this issue. Only three studies included in our analysis reported information on PPE use, with the percentage of PPE use varying from 5% to 13% of participants using PPE during pesticide application.5 7 12 Examining and reporting the impact of PPE on farmworkers and pesticide applicators would help determine its role in preventing or reducing health effects associated with pesticide exposure.



  • Funding This work was supported by the National Institute of Environmental Health Sciences (NIEHS, R21 ES017223, Rohlman). The content is solely the authors' responsibility and does not necessarily represent the official views of the NIEHS.

  • Competing interests The Oregon Health and Sciences University (OHSU) and DS Rohlman have a significant financial interest in Northwest Education Training and Assessment, LLC, a company that may have a commercial interest in the results of this research and technology. This potential individual and institutional conflict of interest has been reviewed and managed by OHSU (3-2010).

  • Provenance and peer review Not commissioned; externally peer reviewed.
