Improving student selection using multiple mini-interviews with multifaceted Rasch modeling

Hettie Till, Carol Myford, Jonathan Dowell

    Research output: Contribution to journal › Article

    18 Citations (Scopus)

    Abstract

    PURPOSE: The authors report multiple mini-interview (MMI) selection process data from the University of Dundee Medical School (staff, students, and simulated patients were examiners) and investigated how effective this process was in separating candidates for entry into medical school according to the attributes measured, whether the different groups of examiners exhibited systematic differences in their rating patterns, and what effect such differences might have on candidates' scores. METHOD: The 452 candidates assessed in 2009 rotated through the same 10-station MMI that measured six noncognitive attributes. Each candidate was rated by one examiner in each station. Scores were analyzed using Facets software, with candidates, examiners, and stations as facets. The computer program calculated fair average scores that adjusted for examiner severity/leniency and station difficulty. RESULTS: The MMI reliably (0.89) separated the candidates into four statistically distinct levels of noncognitive ability. The Rasch measures accounted for 31.69% of the total variance in the ratings (candidates 16.01%, examiners 11.32%, and stations 4.36%). Students rated more severely than staff and also had more unexpected ratings. Adjusting scores for examiner severity/leniency and station difficulty would have changed the selection outcomes for 9.6% of the candidates. CONCLUSIONS: The analyses highlighted the fact that quality control monitoring is essential to ensure fairness when ranking candidates according to scores obtained in the MMI. The results can be used to identify examiners needing further training, or who should not be included again, as well as stations needing review. "Fair average" scores should be used for ranking the candidates.
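    The many-facet Rasch model that Facets fits can be sketched as follows. This is the standard rating-scale formulation of the model, not an equation reproduced from the article itself; the symbols (candidate ability, examiner severity, station difficulty, category step) correspond to the three facets plus rating scale described in the abstract:

```latex
% Many-facet Rasch model, rating-scale form (standard formulation):
%   P_{nijk} = probability that candidate n is awarded category k
%              by examiner j on station i
%   B_n = ability of candidate n
%   C_j = severity of examiner j
%   D_i = difficulty of station i
%   F_k = difficulty of the step from category k-1 to category k
\log\!\left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = B_n - C_j - D_i - F_k
```

    Because examiner severity ($C_j$) and station difficulty ($D_i$) enter the model as separate terms, the estimated candidate measures $B_n$ (and the "fair average" scores derived from them) are adjusted for which examiners and stations a candidate happened to encounter, which is why the article recommends ranking on fair averages rather than raw scores.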
    Original language: English
    Pages (from-to): 216-223
    Number of pages: 8
    Journal: Academic Medicine: Journal of the Association of American Medical Colleges
    Volume: 88
    Issue number: 2
    DOI: 10.1097/ACM.0b013e31827c0c5d
    Publication status: Published - 2013

    Cite this

    @article{9814511de01445b6a462006c36cd3c2b,
    title = "Improving student selection using multiple mini-interviews with multifaceted Rasch modeling",
    abstract = "PURPOSE: The authors report multiple mini-interview (MMI) selection process data at the University of Dundee Medical School; staff, students, and simulated patients were examiners and investigated how effective this process was in separating candidates for entry into medical school according to the attributes measured, whether the different groups of examiners exhibited systematic differences in their rating patterns, and what effect such differences might have on candidates' scores. METHOD: The 452 candidates assessed in 2009 rotated through the same 10-station MMI that measured six noncognitive attributes. Each candidate was rated by one examiner in each station. Scores were analyzed using Facets software, with candidates, examiners, and stations as facets. The computer program calculated fair average scores that adjusted for examiner severity/leniency and station difficulty. RESULTS: The MMI reliably (0.89) separated the candidates into four statistically distinct levels of noncognitive ability. The Rasch measures accounted for 31.69{\%} of the total variance in the ratings (candidates 16.01{\%}, examiners 11.32{\%}, and stations 4.36{\%}). Students rated more severely than staff and also had more unexpected ratings. Adjusting scores for examiner severity/leniency and station difficulty would have changed the selection outcomes for 9.6{\%} of the candidates. CONCLUSIONS: The analyses highlighted the fact that quality control monitoring is essential to ensure fairness when ranking candidates according to scores obtained in the MMI. The results can be used to identify examiners needing further training, or who should not be included again, as well as stations needing review. {"}Fair average{"} scores should be used for ranking the candidates.",
    author = "Hettie Till and Carol Myford and Jonathan Dowell",
    note = "Copyright 2013 Elsevier B.V., All rights reserved.",
    year = "2013",
    doi = "10.1097/ACM.0b013e31827c0c5d",
    language = "English",
    volume = "88",
    pages = "216--223",
    journal = "Academic Medicine: Journal of the Association of American Medical Colleges",
    issn = "1040-2446",
    publisher = "Association of American Medical Colleges",
    number = "2",

    }

    Improving student selection using multiple mini-interviews with multifaceted Rasch modeling. / Till, Hettie; Myford, Carol; Dowell, Jonathan.

    In: Academic Medicine: Journal of the Association of American Medical Colleges, Vol. 88, No. 2, 2013, p. 216-223.


    TY - JOUR

    T1 - Improving student selection using multiple mini-interviews with multifaceted Rasch modeling

    AU - Till, Hettie

    AU - Myford, Carol

    AU - Dowell, Jonathan

    N1 - Copyright 2013 Elsevier B.V., All rights reserved.

    PY - 2013

    Y1 - 2013

    N2 - PURPOSE: The authors report multiple mini-interview (MMI) selection process data at the University of Dundee Medical School; staff, students, and simulated patients were examiners and investigated how effective this process was in separating candidates for entry into medical school according to the attributes measured, whether the different groups of examiners exhibited systematic differences in their rating patterns, and what effect such differences might have on candidates' scores. METHOD: The 452 candidates assessed in 2009 rotated through the same 10-station MMI that measured six noncognitive attributes. Each candidate was rated by one examiner in each station. Scores were analyzed using Facets software, with candidates, examiners, and stations as facets. The computer program calculated fair average scores that adjusted for examiner severity/leniency and station difficulty. RESULTS: The MMI reliably (0.89) separated the candidates into four statistically distinct levels of noncognitive ability. The Rasch measures accounted for 31.69% of the total variance in the ratings (candidates 16.01%, examiners 11.32%, and stations 4.36%). Students rated more severely than staff and also had more unexpected ratings. Adjusting scores for examiner severity/leniency and station difficulty would have changed the selection outcomes for 9.6% of the candidates. CONCLUSIONS: The analyses highlighted the fact that quality control monitoring is essential to ensure fairness when ranking candidates according to scores obtained in the MMI. The results can be used to identify examiners needing further training, or who should not be included again, as well as stations needing review. "Fair average" scores should be used for ranking the candidates.

    AB - PURPOSE: The authors report multiple mini-interview (MMI) selection process data at the University of Dundee Medical School; staff, students, and simulated patients were examiners and investigated how effective this process was in separating candidates for entry into medical school according to the attributes measured, whether the different groups of examiners exhibited systematic differences in their rating patterns, and what effect such differences might have on candidates' scores. METHOD: The 452 candidates assessed in 2009 rotated through the same 10-station MMI that measured six noncognitive attributes. Each candidate was rated by one examiner in each station. Scores were analyzed using Facets software, with candidates, examiners, and stations as facets. The computer program calculated fair average scores that adjusted for examiner severity/leniency and station difficulty. RESULTS: The MMI reliably (0.89) separated the candidates into four statistically distinct levels of noncognitive ability. The Rasch measures accounted for 31.69% of the total variance in the ratings (candidates 16.01%, examiners 11.32%, and stations 4.36%). Students rated more severely than staff and also had more unexpected ratings. Adjusting scores for examiner severity/leniency and station difficulty would have changed the selection outcomes for 9.6% of the candidates. CONCLUSIONS: The analyses highlighted the fact that quality control monitoring is essential to ensure fairness when ranking candidates according to scores obtained in the MMI. The results can be used to identify examiners needing further training, or who should not be included again, as well as stations needing review. "Fair average" scores should be used for ranking the candidates.

    UR - http://www.scopus.com/inward/record.url?scp=84873411567&partnerID=8YFLogxK

    U2 - 10.1097/ACM.0b013e31827c0c5d

    DO - 10.1097/ACM.0b013e31827c0c5d

    M3 - Article

    VL - 88

    SP - 216

    EP - 223

    JO - Academic Medicine: Journal of the Association of American Medical Colleges

    JF - Academic Medicine: Journal of the Association of American Medical Colleges

    SN - 1040-2446

    IS - 2

    ER -