Measurement and structural invariance of cognitive ability tests after computer-based training

Michael Hermes (Lead / Corresponding author), Frank Albers, Jan Böhnke, Gerrit Huelmann, Julia Maier, Dirk Stelling

    Research output: Contribution to journal › Article

    Abstract

    Ability tests are core elements in performance research as well as in applied contexts and are increasingly carried out in computer-based versions. In the last few decades, a whole training and coaching industry has developed to prepare individuals for computer-based assessments. Evidence suggests that such commercial training programs can produce score gains in ability tests, thereby creating an advantage for those who can afford them and challenging the fairness of ability assessment. As a consequence, several authors have recommended offering training software to all participants free of charge to increase measurement fairness. However, it remains an open question whether the unsupervised use of training software affects the measurement properties of ability tests. The present study fills this gap by examining subjects’ ability scores for measurement and structural invariance across different amounts of computer-based training. Structural equation modeling was employed in a sample of 15,752 applicants who participated in high-stakes assessments with computer-based ability tests. Across different training amounts, our analyses supported measurement and structural invariance of ability scores. In conclusion, free training software provides fair preparation opportunities without changing the measurement properties of the tests.
    Original language: English
    Pages (from-to): 370-378
    Number of pages: 9
    Journal: Computers in Human Behavior
    Volume: 93
    Early online date: 21 Nov 2018
    DOI: 10.1016/j.chb.2018.11.040
    Publication status: Published - Apr 2019

    Keywords

    • Cognitive ability
    • Computer-based testing
    • Computer-based training
    • Measurement invariance
    • Test fairness

    Cite this

    Hermes, Michael; Albers, Frank; Böhnke, Jan; Huelmann, Gerrit; Maier, Julia; Stelling, Dirk. Measurement and structural invariance of cognitive ability tests after computer-based training. In: Computers in Human Behavior. 2019; Vol. 93. pp. 370-378.
    @article{fc57e5c70e3e49aea81c6b82aa994301,
    title = "Measurement and structural invariance of cognitive ability tests after computer-based training",
    keywords = "Cognitive ability, Computer-based testing, Computer-based training, Measurement invariance, Test fairness",
    author = "Michael Hermes and Frank Albers and Jan B{\"o}hnke and Gerrit Huelmann and Julia Maier and Dirk Stelling",
    year = "2019",
    month = "4",
    doi = "10.1016/j.chb.2018.11.040",
    language = "English",
    volume = "93",
    pages = "370--378",
    journal = "Computers in Human Behavior",
    issn = "0747-5632",
    publisher = "Elsevier",

    }

    Scopus: http://www.scopus.com/inward/record.url?scp=85060006114&partnerID=8YFLogxK