Abstract
Ability tests are core elements in performance research as well as in applied contexts and are increasingly administered in computer-based versions. In the last few decades, a whole training and coaching industry has developed to prepare individuals for computer-based assessments. Evidence suggests that such commercial training programs can result in score gains on ability tests, thereby creating an advantage for those who can afford them and challenging the fairness of ability assessment. As a consequence, several authors have recommended freely offering training software to all participants to increase measurement fairness. However, it remains an open question whether unsupervised use of training software affects the measurement properties of ability tests. The goal of the present study is to fill this gap by examining ability scores for measurement and structural invariance across different amounts of computer-based training. Structural equation modeling was employed in a sample of 15,752 applicants who took part in high-stakes assessments with computer-based ability tests. Across different training amounts, our analyses supported measurement and structural invariance of the ability scores. In conclusion, free training software provides fair preparation opportunities without changing the measurement properties of the tests.
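The abstract does not spell out the model specification. As a hedged illustration only, the standard multi-group confirmatory factor analysis formulation on which measurement-invariance tests of this kind are typically built is sketched below; the group index $g$ (here, the training-amount group) and the symbol names are our notation, not the authors'.

```latex
% Illustrative multi-group CFA model for observed subtest scores x in group g
% (a minimal sketch of the usual invariance framework, not the paper's exact model):
\[
  x^{(g)} = \tau^{(g)} + \Lambda^{(g)} \eta^{(g)} + \varepsilon^{(g)},
  \qquad \operatorname{Cov}\bigl(\varepsilon^{(g)}\bigr) = \Theta^{(g)}
\]
% Nested measurement-invariance hypotheses, each adding constraints across groups g:
%   configural: same pattern of free/fixed loadings in \Lambda^{(g)} for all g
%   metric:     \Lambda^{(g)} = \Lambda          (equal loadings)
%   scalar:     additionally \tau^{(g)} = \tau   (equal intercepts)
%   strict:     additionally \Theta^{(g)} = \Theta (equal residual variances)
\[
  \Lambda^{(g)} = \Lambda, \qquad
  \tau^{(g)} = \tau, \qquad
  \Theta^{(g)} = \Theta
\]
% Structural invariance additionally constrains the latent moments:
\[
  \operatorname{E}\bigl[\eta^{(g)}\bigr] = \kappa, \qquad
  \operatorname{Cov}\bigl[\eta^{(g)}\bigr] = \Phi \quad \text{for all } g
\]
```

Under this framing, support for measurement and structural invariance means that the increasingly constrained models fit the data essentially as well as the unconstrained configural model across training-amount groups.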
| Original language | English |
| --- | --- |
| Pages (from-to) | 370-378 |
| Number of pages | 9 |
| Journal | Computers in Human Behavior |
| Volume | 93 |
| Early online date | 21 Nov 2018 |
| DOIs | |
| Publication status | Published - Apr 2019 |
Keywords
- Cognitive ability
- Computer-based testing
- Computer-based training
- Measurement invariance
- Test fairness
ASJC Scopus subject areas
- Arts and Humanities (miscellaneous)
- Human-Computer Interaction
- General Psychology