METHODS: The anonymized data received from the Canadian Chiropractic Examining Board for its June 2005 Clinical Skills Examination were analyzed with generalizability theory. Variance components were estimated with SPSS 11.5 (SPSS Inc, Chicago, Ill), treating the data as partially nested. The data included 182 candidates, 43 raters, 40 standardized patient actors, and 18 individual cases.
RESULTS: Internal consistency estimates (Cronbach alpha) were .86 for day 1 and .91 for day 2. The alpha estimates for stations averaged .68 for day 1 and .74 for day 2. The generalizability coefficient was .65 for the day 1 examination and .42 for day 2. G-coefficients for stations averaged .63 for day 1 and .74 for day 2. On day 1, the raters contributed 7% of the variance, and on day 2, the raters contributed 8%.
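For readers unfamiliar with the two reliability indices reported above, a minimal sketch of how each is computed may help. The code below is illustrative only and uses made-up inputs, not the examination data: Cronbach's alpha is computed from a candidates-by-stations score matrix, and the generalizability coefficient from variance components (person variance over person variance plus averaged error variance), assuming a simple persons-by-raters design rather than the study's partially nested one.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a candidates x items (stations) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def g_coefficient(var_person, var_error, n_raters):
    """G-coefficient for a simple persons x raters design:
    person (true-score) variance divided by itself plus the
    error variance averaged over the number of raters."""
    return var_person / (var_person + var_error / n_raters)

# Hypothetical example: 4 candidates scored on 3 stations.
scores = [[7, 6, 8],
          [5, 5, 6],
          [9, 8, 9],
          [4, 5, 5]]
print(round(cronbach_alpha(scores), 2))

# Hypothetical variance components: doubling the raters raises the
# coefficient because error variance is averaged over more raters.
print(g_coefficient(2.0, 1.0, 2))   # -> 0.8
```

In this framing, increasing the number of raters (or stations) shrinks the averaged error term, which is why generalizability theory can point to where adding raters or improving checklists would most improve a station's reliability.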
CONCLUSIONS: Generalizability theory can contribute to the understanding of sources of variance and provide direction for the improvement of individual stations. The size of the rater variance in a station may also indicate the need for increased training in that station or the need to make the scoring checklist clearer and more definitive. Generalizability theory, however, must be cautiously applied, and it requires careful selection of the floating raters and rigorous training of the raters in each station.
This abstract is reproduced with the permission of the publisher.