Inter-rater reliability of Objective Structured Long Examination Record
Mehreen Baig; Syed Inamullah Shah; Sajida Shah; Eitezaz Ahmad Bashir; Hajira Sarwar; Jamil Ahmad Shah
Abstract:
Background: The Objective Structured Long Examination Record (OSLER) scale was introduced in 1997 by Gleeson to improve the long case examination. There is no psychometric evidence to support the reliability of the OSLER. This study was conducted to analyse the inter-rater reliability of the OSLER. Methods: Two groups of examiners assessed 105 students in the long case examination of their final professional examination, using the OSLER scale. Group 1 comprised the actual examiners, while Group 2 consisted of mock examiners. The kappa statistic and the intraclass correlation coefficient (ICC) were computed in SPSS 23 to estimate reliability. Results: The mean score awarded by the actual examiners was 55.36 (SD = 11.2), whereas the mean score awarded by the mock examiners was 57.74 (SD = 14.1). Cronbach's alpha was 0.586, kappa was 0.019, and the inter-rater reliability estimated by the ICC was 0.413. Conclusion: Although the OSLER is a practical modification of the long case examination with good validity, the scale needs to be more structured to improve its reliability.
Keywords: Long case; OSLER; Reliability
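The following is a minimal illustrative sketch of how the three reliability statistics reported above (Cohen's kappa, the ICC, and Cronbach's alpha) could be computed from paired examiner scores. The authors used SPSS 23; this Python version, the column names, the grade bands used for kappa, and the simulated scores are all assumptions introduced here purely to show the data layout and the calculations, not the study's actual analysis.

```python
# Hypothetical sketch: reliability statistics for two examiner groups rating
# the same 105 students. Real analysis was done in SPSS 23; scores below are
# simulated only to make the example runnable.
import numpy as np
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_students = 105
actual = rng.normal(55.36, 11.2, n_students).round()  # Group 1: actual examiners
mock = rng.normal(57.74, 14.1, n_students).round()    # Group 2: mock examiners

# Cohen's kappa requires categorical ratings, so the continuous scores are
# binned into grade bands (the bands are an assumption; the paper does not
# state how scores were categorised).
bands = [0, 50, 60, 70, 101]
kappa = cohen_kappa_score(np.digitize(actual, bands), np.digitize(mock, bands))

# Intraclass correlation on a long-format table (one row per rating).
ratings = pd.DataFrame({
    "student": np.tile(np.arange(n_students), 2),
    "rater": ["actual"] * n_students + ["mock"] * n_students,
    "score": np.concatenate([actual, mock]),
})
icc = pg.intraclass_corr(data=ratings, targets="student",
                         raters="rater", ratings="score")

# Cronbach's alpha, treating the two examiner groups as the "items".
alpha, _ = pg.cronbach_alpha(data=pd.DataFrame({"actual": actual, "mock": mock}))

print(f"kappa = {kappa:.3f}")
print(icc[["Type", "ICC"]])
print(f"Cronbach's alpha = {alpha:.3f}")
```

Because kappa ignores how far apart two ratings are once they fall in different bands, it can be very low even when the underlying scores track each other moderately well, which is consistent with the pattern of a near-zero kappa alongside a moderate ICC reported in the results.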