Simulated Patient Role-Plays with Consumers with Lived Experience of Mental Illness Post-Mental Health First Aid Training: Interrater and Test Re-Test Reliability of an Observed Behavioral Assessment Rubric
Main Authors:
Format: Book
Published: MDPI AG, 2021
Online Access: Connect to this object online.
Summary: Mental Health First Aid (MHFA) training teaches participants how to assist people experiencing mental health problems and crises. Observed behavioral assessments, post-training, are lacking, and the literature largely focuses on self-reported measurement of behaviors and confidence. This study explores the reliability of an observed behavioral assessment rubric used to assess pharmacy students during simulated patient (SP) role-play assessments with mental health consumers. Post-MHFA training, pharmacy students (<i>n</i> = 528) participated in SP role-play assessments (<i>n</i> = 96) of six mental health cases enacted by consumers with lived experience of mental illness. Each assessment was marked by the tutor, the participating student, and the consumer (three raters). Non-parametric tests were used to compare raters' mean scores and pass/fail categories. Interrater reliability analyses were conducted for overall scores and for pass/fail categories using the intra-class correlation coefficient (ICC) and Fleiss' Kappa, respectively. Test re-test reliability analyses were conducted using Pearson's correlation. For interrater reliability analyses, the intra-class correlation coefficient varied from poor-to-good to moderate-to-excellent for individual cases but was moderate-to-excellent for combined cases (0.70; CI 0.58-0.80). Fleiss' Kappa varied across cases but was fair-to-good for combined cases (0.57, <i>p</i> < 0.001). For test re-test reliability analyses, Pearson's correlation was strong for individual and combined cases (0.87; <i>p</i> < 0.001). Recommended modifications to the rubric, including the addition of barrier items, scoring guides, and specific examples, as well as the creation of new case-specific rubric versions, may improve reliability. The rubric can be used to facilitate the measurement of actual, observed behaviors post-MHFA training in pharmacy and other health care curricula.
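The two simpler reliability statistics named in the summary can be computed from first principles. The sketch below is illustrative only: the ratings and scores are made up for demonstration and are not the study's data, and the ICC analysis (which usually relies on a statistical package) is omitted. It shows Fleiss' Kappa for three raters' pass/fail categorizations and Pearson's correlation for test re-test scores.

```python
import math

def fleiss_kappa(counts):
    """Fleiss' Kappa from a subjects-by-categories count matrix, where
    counts[i][j] is the number of raters who placed subject i in
    category j. Every subject must be rated by the same number of raters."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_total = n_subjects * n_raters
    # Observed agreement: mean per-subject agreement P_i
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement P_e from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / n_total
           for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

def pearson_r(x, y):
    """Pearson's correlation coefficient for paired scores."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (math.sqrt(sum((a - mx) ** 2 for a in x))
           * math.sqrt(sum((b - my) ** 2 for b in y)))
    return num / den

# Hypothetical pass/fail (2-category) counts by 3 raters for 4 role-plays
counts = [[3, 0], [2, 1], [0, 3], [1, 2]]
print(round(fleiss_kappa(counts), 3))

# Hypothetical rubric totals for the same students marked twice
test = [22, 18, 25, 20, 15]
retest = [21, 19, 24, 21, 16]
print(round(pearson_r(test, retest), 3))
```

With perfect agreement (all three raters assigning the same category to every role-play, with both categories used), the function returns a Kappa of exactly 1, which is a quick sanity check on the implementation.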
Item Description: DOI 10.3390/pharmacy9010028; ISSN 2226-4787