Inter-rater Reliability

When using evaluation instruments, such as the Evaluating Healthcare Simulation tools, that require observers to rate something (a learner’s performance, communication between learner and patient, or team performance), it is essential that the raters interpret the instrument in the same way. Each person rating a learner should agree on how the instrument is used and what each item on the tool means. Raters must also agree on what constitutes an adequate performance: what allows a rater to mark an item as completed rather than not completed? If performances are assigned to competency levels, what differentiates a novice performance from a competent one?

To answer questions such as these, raters must practice using the instrument and then compare their markings with one another. This process should be repeated until a pre-determined level of agreement is reached. Without this step, assessments can produce unfair outcomes for learners.
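
The article does not prescribe a particular agreement statistic, but one widely used option for comparing two raters is Cohen’s kappa, which corrects raw percent agreement for the agreement expected by chance alone. The sketch below is a minimal illustration, not part of the original resource; the checklist markings are hypothetical.

```python
# Minimal sketch (hypothetical data): Cohen's kappa for two raters
# scoring the same set of learner performances.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters who rated the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement, estimated from each rater's marginal frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical checklist markings for ten learners
# (1 = completed, 0 = not completed).
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.52
```

A kappa near 1 indicates strong agreement; a commonly cited benchmark (Landis and Koch) treats values above 0.8 as almost perfect agreement, and a program might require raters to reach a threshold like this in practice sessions before scoring learners for real.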

To assist with this process, recorded simulation scenarios are often used. The videos below, provided by Robert Morris University, can help raters establish inter-rater reliability when using an instrument to assess learner performance in a patient care scenario.


Thank you, Robert Morris University, for supplying the video links!

Return to the Evaluating Healthcare Simulation tools webpage.

