RT Journal Article
SR Electronic
T1 Inter-rater agreement in the assessment of exposure to carcinogens in the offshore petroleum industry
JF Occupational and Environmental Medicine
JO Occup Environ Med
FD BMJ Publishing Group Ltd
SP 582
OP 588
DO 10.1136/oem.2006.030528
VO 64
IS 9
A1 Steinsvåg, Kjersti
A1 Bråtveit, Magne
A1 Moen, Bente E
A1 Kromhout, Hans
YR 2007
UL http://oem.bmj.com/content/64/9/582.abstract
AB Objectives: To evaluate the reliability of an expert team assessing exposure to carcinogens in the offshore petroleum industry and to study how the information provided influenced agreement among raters. Methods: Eight experts individually assessed the likelihood of exposure for combinations of 17 carcinogens, 27 job categories and four time periods (1970–1979, 1980–1989, 1990–1999 and 2000–2005). Each rater assessed 1836 combinations based on summary documents on carcinogenic agents, which included descriptions of sources of exposure and products, descriptions of work processes carried out within the different job categories, and monitoring data. Inter-rater agreement was calculated using Cohen's kappa index and single and average score intraclass correlation coefficients (ICC(2,1) and ICC(2,8), respectively). Differences in inter-rater agreement by time period, rater, International Agency for Research on Cancer group and the amount of information provided were subsequently studied. Results: Overall, 18% of the combinations were denoted as possible exposure, and 14% scored probable exposure. Stratified by the 17 carcinogenic agents, the probable exposure prevalence ranged from 3.8% for refractory ceramic fibres to 30% for crude oil. Overall mean kappa was 0.42 (ICC(2,1) = 0.62 and ICC(2,8) = 0.93). Combinations for which limited quantitative measurement data were provided showed less agreement than equally well described carcinogens without sampling data.
Conclusion: The overall κ and single-score ICC indicate that the raters agreed on exposure estimates well above the chance level. The levels of inter-rater agreement were higher than in other comparable studies. The average score ICC indicates reliable mean estimates and implies that a sufficient number of raters was involved. The raters seemed to have enough documentation on which to base their estimates, but provision of limited monitoring data led to more incongruence among raters. Having real exposure data, with the inherent variability of such data, apparently makes estimating exposure in a rigid semiquantitative manner more difficult.
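The κ statistic reported in the abstract measures pairwise agreement beyond chance: (observed agreement − chance agreement) / (1 − chance agreement), with chance agreement derived from each rater's marginal category frequencies. A minimal sketch of this calculation is below; the two rating vectors are hypothetical, not data from the study, and real use across eight raters would average κ over all rater pairs.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the two raters gave the same rating.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of each rater's marginal frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - chance) / (1 - chance)

# Hypothetical ratings on a three-level scale
# (0 = unlikely, 1 = possible, 2 = probable exposure).
a = [0, 0, 1, 2, 2, 1, 0, 2, 1, 0]
b = [0, 1, 1, 2, 2, 0, 0, 2, 1, 0]
print(round(cohens_kappa(a, b), 3))  # prints 0.697
```

Note that κ treats all disagreements equally; the single and average score ICCs used alongside it in the study additionally account for the ordered, multi-rater structure of the ratings.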