Improvement of the quality of care is regarded as a challenge for the medical profession,1 and the development of guidelines is seen as one way of approaching this. Guidelines aim to change the performance of doctors, yet they are clearly also important for education, as they provide a clear framework for how to act in practice. Performance indicators can be used to test whether medical performance complies with the guidelines, and thus provide a good evaluation of the effects of an educational programme.2 There is a need for evaluation of teaching programmes for occupational physicians, which should lead to more evidence based education.3 4 For this reason, we evaluated a postgraduate educational programme for occupational physicians on guidelines for the rehabilitation of patients with low back pain. The research questions were whether the educational programme increased knowledge of the guidelines and whether it improved the physicians' performance in practice.
The educational programme was evaluated by testing knowledge and performance indicators at the four levels of Miller's pyramid of clinical assessment.5 The content of the educational programme was based on Dutch guidelines for occupational physicians and general practitioners.6 7 The educational programme took a day and a half within a 6 day module on work rehabilitation. An experimental group of 25 physicians participated in the educational programme. A reference group of 20 physicians did not, but took part in the programme 6 months later. The groups consisted of physicians from two consecutive year groups and were similar with respect to possible confounders. We assessed the knowledge and performance of both groups by testing before and after the education.
Subjects took the paper and pencil knowledge test in the classroom. The test consisted of 45 true or false questions. Cronbach's α for the reliability of the test was 0.65 before and 0.60 after the educational programme.
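For readers unfamiliar with the statistic, the Cronbach's α reported above can be computed from the matrix of item scores as follows. This is a minimal sketch, not the authors' actual analysis code; the function name and the toy data are our own illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy example: two items answered identically by every subject
# (perfectly consistent items give alpha = 1).
scores = [[1, 1], [0, 0], [1, 1], [0, 0]]
print(cronbach_alpha(scores))  # → 1.0
```

A value of 0.60–0.65, as reported here, is commonly regarded as acceptable for group-level evaluation of a short knowledge test.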
We used 12 performance indicators for the assessment of compliance with the guidelines on work rehabilitation.2 We asked the physicians to randomly select, from their own practice, five patients with low back pain seen before the educational programme and five seen after it. The physicians reported their performance for each case on specially designed forms. From these forms one researcher scored the extent to which the physicians had followed the guidelines, based on criteria that distinguish good from bad performance.8 For every physician we calculated a mean percentage performance score before and after the programme.
Differences in scores before and after (gain scores) were tested with a paired samples t test or an independent samples t test, as appropriate. To evaluate the possible influence of differences in performance scores at baseline we used analysis of covariance. We calculated adjusted gain scores by subtracting the gain score of the reference group from the gain score of the experimental group.
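The two adjustments described above can be illustrated with a short sketch: the adjusted gain score is a simple difference of mean gains, and the analysis of covariance amounts to regressing the post-test score on the baseline score plus a group indicator. This is an illustrative reconstruction with invented numbers, not the study's data or code.

```python
import numpy as np

def adjusted_gain(pre_exp, post_exp, pre_ref, post_ref):
    """Mean gain of the experimental group minus mean gain of the reference group."""
    gain_exp = np.mean(np.asarray(post_exp) - np.asarray(pre_exp))
    gain_ref = np.mean(np.asarray(post_ref) - np.asarray(pre_ref))
    return gain_exp - gain_ref

def ancova_group_effect(pre, post, group):
    """Group effect adjusted for baseline: coefficient of the group
    indicator in the regression  post ~ intercept + pre + group."""
    pre, post, group = map(np.asarray, (pre, post, group))
    X = np.column_stack([np.ones_like(pre, dtype=float), pre, group])
    beta, *_ = np.linalg.lstsq(X, post.astype(float), rcond=None)
    return beta[2]

# Hypothetical noiseless data: reference group gains 5 points,
# experimental group gains 20 points, so the adjusted gain is 15.
pre_ref, post_ref = [40.0, 50.0, 60.0], [45.0, 55.0, 65.0]
pre_exp, post_exp = [45.0, 55.0, 65.0], [65.0, 75.0, 85.0]

print(adjusted_gain(pre_exp, post_exp, pre_ref, post_ref))  # → 15.0
pre = pre_ref + pre_exp
post = post_ref + post_exp
group = [0, 0, 0, 1, 1, 1]
print(ancova_group_effect(pre, post, group))  # → 15.0
```

With noiseless data both approaches recover the same effect; with real data the ANCOVA estimate additionally corrects for any baseline imbalance between the groups, which is why it was reported alongside the adjusted gain scores.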
The knowledge test scores at time 1 were low (table). With true or false questions, a score of about 80% is needed to differ significantly from what can be attained by chance alone. The gain scores were positive for both the experimental and the reference group. The adjusted gain score was 9%, which is significantly different from zero.
For the performance indicators the gain score was positive for the experimental group only. The adjusted gain score was 15% and significantly different from zero. Analysis of covariance showed a significant effect for the experimental group (F=25.3, p<0.0001) when the baseline performance scores were entered as a covariate.
In this study there was an increase in knowledge and an improvement of performance in practice after an educational programme on low back pain. These results could be biased in several ways. Firstly, we did not use a randomised study design, and the groups might have differed in aspects other than the educational programme, such as learning style or the location and time of testing. However, such differences would be more likely to bias the results towards not finding a difference between the two groups. The increase in score on the knowledge test in the reference group is probably due to taking the test twice; the adjusted gain scores take this effect into account. We therefore conclude that the increases in knowledge and performance can be ascribed to the educational programme.
The reliability of the knowledge test was sufficient for evaluation purposes, with an internal consistency of 0.65 and 0.60 (Cronbach's α). The performance indicators were based on the physicians' self reports. However, the participants did not know the criteria we used to assess deviation from the guidelines, and performing well as measured by the same indicators has been shown to predict better outcomes and greater patient satisfaction.9 We therefore conclude that the performance scores do reflect performance in practice.
We conclude that this study provides evidence for an increase in quality of care after occupational physicians participated in our postgraduate programme.