Sources of variation in performance on a shared OSCE station across four UK medical schools

Alistair Chesser, Helen Cameron, Phillip Evans, Jennifer Cleland, Kathy Boursicot, Gary Mires

    Research output: Contribution to journal › Article

    21 Citations (Scopus)

    Abstract

    High-stakes undergraduate clinical assessments should be based on transparent standards comparable between different medical schools. However, simply sharing questions and pass marks may not ensure comparable standards and judgements. We hypothesised that in multicentre examinations, teaching institutions contribute to systematic variations in students' marks between different medical schools through the behaviour of their markers, standard-setters and simulated patients.

    We embedded a common objective structured clinical examination (OSCE) station in four UK medical schools. All students were examined by a locally trained examiner as well as by a centrally provided examiner. Central and local examiners did not confer. Pass scores were calculated using the borderline groups method. Mean scores awarded by each examiner group were also compared. Systematic variations in scoring between schools and between local and central examiners were analysed.
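
    For illustration: in the borderline groups method, the pass score for a station is taken as the mean checklist score of the candidates whom examiners rate as "borderline" on a separate global judgement. The short Python sketch below shows only that calculation, using hypothetical data; the candidate scores, the four-point rating scale and the function name are assumptions for the example and are not drawn from the study.

        from statistics import mean

        # Hypothetical (candidate_id, checklist_score, global_rating) records
        # for one OSCE station; the four-point global scale is assumed.
        observations = [
            ("c01", 14, "pass"),
            ("c02",  9, "borderline"),
            ("c03", 11, "borderline"),
            ("c04", 17, "good"),
            ("c05",  8, "fail"),
            ("c06", 10, "borderline"),
        ]

        def borderline_groups_pass_score(records):
            # Pass score = mean checklist score of the "borderline" group.
            borderline = [score for _, score, rating in records if rating == "borderline"]
            if not borderline:
                raise ValueError("no borderline candidates; pass score is undefined")
            return mean(borderline)

        print(borderline_groups_pass_score(observations))  # prints 10

    Because the study compared pass scores between schools and between local and central examiners, a calculation of this kind would be applied separately to each examiner group's ratings.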

    Pass scores varied slightly but significantly between schools and between local and central examiners. The variation between local and central examiners was usually systematic (one group scoring consistently lower or higher than the other). In some cases the scores given by one examiner pair differed significantly from those awarded by other pairs in the same school, implying that other factors (possibly simulated patient behaviour) make a significant difference to student scoring.

    Shared undergraduate clinical assessments should not rely on scoring systems and standard-setting procedures that fail to take other differences between schools into account. Examiner behaviour, examiner training and other local factors are important contributors to variations in scores between schools. The OSCE scores of students from different medical schools should not be compared directly without taking such systematic variations into consideration.

    Original language: English
    Pages (from-to): 526-532
    Number of pages: 7
    Journal: Medical Education
    Volume: 43
    Issue number: 6
    DOIs: 10.1111/j.1365-2923.2009.03370.x
    Publication status: Published - Jun 2009

    Keywords

    • PASSING SCORES
    • GRADUATION

    Cite this

    Chesser, Alistair; Cameron, Helen; Evans, Phillip; Cleland, Jennifer; Boursicot, Kathy; Mires, Gary. Sources of variation in performance on a shared OSCE station across four UK medical schools. In: Medical Education. 2009; Vol. 43, No. 6, pp. 526-532.