METHODS: A test-retest single-group design was used to investigate the intrarater and interrater reliability of 22 lower quarter evaluation measures. Two raters conducted each measure twice on a total of 18 unimpaired subjects with an average age of 23.7 years. The study was conducted in the Human Performance Research Laboratory in a university setting. Intraclass correlation coefficients were used to assess the reliability of continuous variables, and weighted kappa was used to assess nominal or ordinal results.
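The two statistics named above can be sketched in code. The following is an illustrative Python implementation (not the study's own analysis): ICC(2,1) — a common two-way random-effects, absolute-agreement, single-rater form — for continuous measures, and quadratically weighted kappa for ordinal ratings. The function names and toy data are hypothetical; the study does not specify which ICC model was used.

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is a list of rows (subjects), each a list of rater scores.
    """
    n = len(scores)          # number of subjects
    k = len(scores[0])       # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_err = sum(
        (scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    )
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )


def weighted_kappa(a, b, categories):
    """Quadratically weighted kappa for two raters' ordinal ratings."""
    m = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    obs = [[0.0] * m for _ in range(m)]      # observed contingency table
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1
    n = len(a)
    pa = [sum(obs[i]) / n for i in range(m)]                      # rater A marginals
    pb = [sum(obs[i][j] for i in range(m)) / n for j in range(m)]  # rater B marginals
    # Quadratic disagreement weights (i - j)**2: observed vs. chance-expected
    num = sum((i - j) ** 2 * obs[i][j] / n for i in range(m) for j in range(m))
    den = sum((i - j) ** 2 * pa[i] * pb[j] for i in range(m) for j in range(m))
    return 1.0 - num / den
```

Both statistics equal 1.0 under perfect agreement and fall toward (or below) 0 as agreement approaches chance, which is why values like the 0.06–0.99 range reported below span "unacceptable" to "excellent" reliability.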
RESULTS: Side differences were not found (P > .05); thus, data for right and left legs were pooled (n = 36) where applicable. Intraclass correlation coefficient and weighted kappa values ranged from 0.06 to 0.99. Intrarater reliability was generally higher than interrater reliability.
CONCLUSION: Many of the clinical measures demonstrated good overall reliability. For tests in which acceptable intrarater and interrater reliability could not be demonstrated, additional rater training, modification of the technique, or elimination of the technique should be considered.
This abstract is reproduced with the permission of the publisher; full text is available by subscription.