Measuring Assessment Quality With an Assessment Utility Rubric for Medical Education

Introduction: Prior research has identified seven elements of a good assessment, but the elements have not been operationalized in the form of a rubric to rate assessment utility. It would be valuable for medical educators to have a systematic way to evaluate the utility of an assessment in order to determine whether the assessment used is optimal for the setting.

Methods: We developed and refined an assessment utility rubric using a modified Delphi process. Twenty-nine graduate students pilot-tested the rubric in 2016 with hypothetical data from three examinations, and interrater reliability of rubric scores was measured with intraclass correlation coefficients (ICCs).

Results: Consensus for all rubric items was reached after three rounds. The resulting assessment utility rubric includes four elements (equivalence, educational effect, catalytic effect, acceptability) with three items each, one element (validity evidence) with five items, and space to provide four feasibility items relating to time and cost. Rater scores had ICC values greater than .75.

Discussion: The rubric shows promise in allowing educators to evaluate the utility of an assessment specific to their setting. The medical education field needs to give more consideration to how an assessment drives learning forward, how it motivates trainees, and whether it produces acceptable ranges of scores for all stakeholders.
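Note on the reported reliability statistic: the abstract states that interrater reliability was measured with intraclass correlation coefficients and that values exceeded .75. As an illustration only, and not code from the article, the sketch below shows one common form of this statistic, Shrout and Fleiss's ICC(2,1) (two-way random effects, absolute agreement, single rater), computed for a targets-by-raters score matrix. The sample data, function name, and choice of ICC form are all assumptions made for demonstration.

    # Illustrative sketch: ICC(2,1) for a hypothetical targets-by-raters matrix.
    # The ratings below are invented and are NOT the pilot data from the study.
    import numpy as np

    def icc_2_1(scores: np.ndarray) -> float:
        """ICC(2,1) for an n_targets x k_raters matrix of ratings."""
        n, k = scores.shape
        grand_mean = scores.mean()

        # Two-way ANOVA sums of squares (no replication).
        ss_rows = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()   # targets
        ss_cols = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()   # raters
        ss_total = ((scores - grand_mean) ** 2).sum()
        ss_error = ss_total - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))

        # Shrout & Fleiss ICC(2,1)
        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
        )

    # Hypothetical rubric scores: 6 assessments rated by 3 raters.
    ratings = np.array([
        [4, 4, 5],
        [2, 3, 2],
        [5, 5, 5],
        [3, 3, 4],
        [1, 2, 1],
        [4, 5, 4],
    ])
    print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")  # values above .75 indicate strong agreement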

Bibliographic Details
Main Authors: Jorie M. Colbert-Getz (Author), Michael Ryan (Author), Erin Hennessey (Author), Brenessa Lindeman (Author), Brian Pitts (Author), Kim A. Rutherford (Author), Deborah Schwengel (Author), Stephen M. Sozio (Author), Jessica George (Author), Julianna Jung (Author)
Format: Article
Published: Association of American Medical Colleges, 2017-05-01.
Subjects: Editor's Choice; Validity; Assessment; Medicine (General); Education
Online Access: Connect to this object online.

MARC

LEADER 00000 am a22000003u 4500
001 doaj_f2398a1e723d4927b404ad7c60ad097c
042 |a dc 
100 1 0 |a Jorie M. Colbert-Getz  |e author 
700 1 0 |a Michael Ryan  |e author 
700 1 0 |a Erin Hennessey  |e author 
700 1 0 |a Brenessa Lindeman  |e author 
700 1 0 |a Brian Pitts  |e author 
700 1 0 |a Kim A. Rutherford  |e author 
700 1 0 |a Deborah Schwengel  |e author 
700 1 0 |a Stephen M. Sozio  |e author 
700 1 0 |a Jessica George  |e author 
700 1 0 |a Julianna Jung  |e author 
245 0 0 |a Measuring Assessment Quality With an Assessment Utility Rubric for Medical Education 
260 |b Association of American Medical Colleges,   |c 2017-05-01T00:00:00Z. 
500 |a 10.15766/mep_2374-8265.10588 
500 |a 2374-8265 
520 |a Introduction Prior research has identified seven elements of a good assessment, but the elements have not been operationalized in the form of a rubric to rate assessment utility. It would be valuable for medical educators to have a systematic way to evaluate the utility of an assessment in order to determine if the assessment used is optimal for the setting. Methods We developed and refined an assessment utility rubric using a modified Delphi process. Twenty-nine graduate students pilot-tested the rubric in 2016 with hypothetical data from three examinations, and interrater reliability of rubric scores was measured with intraclass correlation coefficients (ICCs). Results Consensus for all rubric items was reached after three rounds. The resulting assessment utility rubric includes four elements (equivalence, educational effect, catalytic effect, acceptability) with three items each, one element (validity evidence) with five items, and space to provide four feasibility items relating to time and cost. Rater scores had ICC values greater than .75. Discussion The rubric shows promise in allowing educators to evaluate the utility of an assessment specific to their setting. The medical education field needs to give more consideration to how an assessment drives learning forward, how it motivates trainees, and whether it produces acceptable ranges of scores for all stakeholders. 
546 |a EN 
690 |a Editor's Choice 
690 |a Validity 
690 |a Assessment 
690 |a Medicine (General) 
690 |a R5-920 
690 |a Education 
690 |a L 
655 7 |a article  |2 local 
786 0 |n MedEdPORTAL, Vol 13 (2017) 
787 0 |n http://www.mededportal.org/doi/10.15766/mep_2374-8265.10588 
787 0 |n https://doaj.org/toc/2374-8265 
856 4 1 |u https://doaj.org/article/f2398a1e723d4927b404ad7c60ad097c  |z Connect to this object online.