Direct observation of clinical skills feedback scale: development and validity evidence

Samantha Halman (Lead / Corresponding author), Nancy Dudek, Timothy Wood, Debra Pugh, Claire Touchie, Sean McAleer, Susan Humphrey-Murto

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)
450 Downloads (Pure)


Construct: This article describes the development and validity evidence behind a new rating scale to assess feedback quality in the clinical workplace.

Background: Competency-based medical education has mandated a shift to learner-centeredness, authentic observation, and frequent formative assessments with a focus on the delivery of effective feedback. Because feedback has been shown to be of variable quality and effectiveness, an assessment of feedback quality in the workplace is important to ensure we are providing trainees with optimal learning opportunities. The purposes of this project were to develop a rating scale for the quality of verbal feedback in the workplace (the Direct Observation of Clinical Skills Feedback Scale [DOCS-FBS]) and to gather validity evidence for its use.

Approach: Two panels of experts (local and national) took part in a nominal group technique to identify features of high-quality feedback. Through multiple iterations and review, 9 features were developed into the DOCS-FBS. Four rater types (residents n = 21, medical students n = 8, faculty n = 12, and educators n = 12) used the DOCS-FBS to rate videotaped feedback encounters of variable quality. The psychometric properties of the scale were determined using a generalizability analysis. Participants also completed a survey to gather data on a 5-point Likert scale to inform the ease of use, clarity, knowledge acquisition, and acceptability of the scale.

Results: Mean video ratings ranged from 1.38 to 2.96 out of 3 and followed the intended pattern, suggesting that the tool allowed raters to distinguish between examples of higher- and lower-quality feedback. There were no significant differences among rater types (range = 2.36-2.49), suggesting that all groups of raters used the tool in the same way. The generalizability coefficients for the scale ranged from 0.97 to 0.99. Item-total correlations were all above 0.80, suggesting some redundancy in items. Participants found the scale easy to use (M = 4.31/5) and clear (M = 4.23/5), and most would recommend its use (M = 4.15/5). Use of the DOCS-FBS was acceptable to both trainees (M = 4.34/5) and supervisors (M = 4.22/5).
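The item-total correlations reported above are commonly computed as corrected correlations, relating each item to the total of the remaining items so the item is not correlated with itself. A minimal sketch of that calculation, using invented ratings for a 9-item, 3-point scale like the DOCS-FBS (the data are hypothetical, not from the study):

```python
# Hedged sketch: corrected item-total correlations for a 9-item scale.
# The rating matrix below is simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
# 20 hypothetical raters x 9 items, each scored 1-3
scores = rng.integers(1, 4, size=(20, 9)).astype(float)

def item_total_correlations(data):
    """Correlate each item with the sum of the *other* items."""
    n_items = data.shape[1]
    totals = data.sum(axis=1)
    corrs = []
    for i in range(n_items):
        rest = totals - data[:, i]  # exclude the item from its own total
        r = np.corrcoef(data[:, i], rest)[0, 1]
        corrs.append(r)
    return np.array(corrs)

print(item_total_correlations(scores).round(2))
```

With real ratings of common feedback encounters, values uniformly above 0.80, as reported here, indicate strong internal consistency but also possible item redundancy.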

Conclusions: The DOCS-FBS can reliably differentiate between feedback encounters of higher and lower quality. The scale has been shown to have excellent internal consistency. We foresee the DOCS-FBS being used as a means to provide objective evidence that faculty development efforts aimed at improving feedback skills can yield results through formal assessment of feedback quality.

Original language: English
Pages (from-to): 385-394
Number of pages: 10
Journal: Teaching and Learning in Medicine
Issue number: 4
Early online date: 10 Jun 2016
Publication status: Published - 2016


Keywords:
  • feedback
  • assessment
  • scale development


