Abstract
Purpose:
To assess whether interdisciplinary research evaluation scores vary between fields.
Design/methodology/approach:
The authors investigate whether published refereed journal articles were scored differently by expert assessors (two per output, agreeing a score and norm referencing) from multiple subject-based Units of Assessment (UoAs) in the REF2021 UK national research assessment exercise. The primary raw data were 8,015 journal articles published 2014–2020 and evaluated by multiple UoAs; their agreement rates were compared with estimated agreement rates for articles multiply evaluated within a single UoA.
Findings:
The authors estimated a 53% agreement rate on a four-point quality scale between UoAs for the same article, against a within-UoA agreement rate of 70%. This suggests that quality scores vary more between fields than within fields for interdisciplinary research. There were also some hierarchies between fields, in the sense that some UoAs tended to give higher scores than others for the same articles.
Research limitations/implications:
The results apply to one country and one type of research evaluation. Both agreement rate estimates rest on untested assumptions about the extent of cross-checking of scores for the same articles in the REF, so inferences about the agreement rates are tenuous.
Practical implications:
The results underline the importance of choosing relevant fields for any type of research evaluation.
Originality/value:
This is the first evaluation of the extent to which a careful peer-review exercise generates different scores for the same articles between disciplines.
| Original language | English |
| --- | --- |
| Pages (from-to) | 1514-1531 |
| Number of pages | 18 |
| Journal | Journal of Documentation |
| Volume | 79 |
| Issue number | 6 |
| Early online date | 27 Apr 2023 |
| DOIs | |
| Publication status | Published - 24 Oct 2023 |
Keywords
- Research quality
- Interdisciplinary research
- Research evaluation
- REF2021