TY - JOUR
T1 - Development and testing of an assessment instrument for the formative peer review of significant event analyses
AU - McKay, J.
AU - Murphy, D. J.
AU - Bowie, P.
AU - Schmuck, M.-L.
AU - Lough, M.
AU - Eva, K.W.
N1 - MEDLINE® is the source for the MeSH terms of this document.
PY - 2007
Y1 - 2007
N2 - Aim: To establish the content validity and specific aspects of reliability for an assessment instrument designed to provide formative feedback to general practitioners (GPs) on the quality of their written analysis of a significant event. Methods: Content validity was quantified by application of a content validity index. Reliability testing involved a nested design, with 5 cells, each containing 4 assessors, rating 20 unique significant event analysis (SEA) reports (10 each from experienced GPs and GPs in training) using the assessment instrument. The variance attributable to each identified variable in the study was established by analysis of variance. Generalisability theory was then used to investigate the instrument's ability to discriminate among SEA reports. Results: Content validity was demonstrated with at least 8 of 10 experts endorsing all 10 items of the assessment instrument. The overall G coefficient for the instrument was moderate to good (G>0.70), indicating that the instrument can provide consistent information on the standard achieved by the SEA report. There was moderate inter-rater reliability (G>0.60) when four raters were used to judge the quality of the SEA. Conclusions: This study provides the first steps towards validating an instrument that can provide educational feedback to GPs on their analysis of significant events. The key area identified to improve instrument reliability is variation among peer assessors in their assessment of SEA reports. Further validity and reliability testing should be carried out to provide GPs, their appraisers and contractual bodies with a validated feedback instrument on this aspect of the general practice quality agenda.
UR - http://www.scopus.com/inward/record.url?scp=34247163575&partnerID=8YFLogxK
U2 - 10.1136/qshc.2006.020750
DO - 10.1136/qshc.2006.020750
M3 - Article
C2 - 17403765
AN - SCOPUS:34247163575
SN - 1475-3898
VL - 16
SP - 150
EP - 153
JO - Quality & Safety in Health Care
JF - Quality & Safety in Health Care
IS - 2
ER -