January 19, 2024 | By Teresa Gore

Leighton et al.'s Evaluating Healthcare Simulation Tools Now Available Exclusively on HealthySimulation.com

The Evaluating Healthcare Simulation website was organized by Kim Leighton, PhD, RN, CHSOS, CHSE-A, FSSH, ANEF, FAAN to provide healthcare simulation educators and researchers with freely available instruments that she and her colleagues created. While originally housed on a Google site, Dr. Leighton and HealthySimulation.com Founder/CEO Lance Baily are excited to announce that all of the tools have been migrated to their new home on HealthySimulation.com! The development teams have granted HealthySimulation.com permission to host these open-access tools exclusively through the links below. The instruments were developed to evaluate different aspects of simulation-based education (SBE), and all have undergone psychometric testing as valid and reliable evaluation methods for healthcare simulation. This HealthySimulation.com article provides an overview of the Evaluating Healthcare Simulation instruments.

HealthySimulation.com is proud to host this page as an extremely valuable resource for healthcare simulationists. Founder/CEO Lance Baily shared that “these tools by Leighton et al. have been shown to move clinical simulation programs forward around the world – we are thrilled to support the development teams as the new exclusive host!”

The researchers behind these healthcare simulation evaluation tools believe that evaluation of SBE as a pedagogy must go well beyond learner satisfaction and confidence. Each instrument has undergone psychometric testing to establish the reliability and validity of the data it produces for evaluating healthcare simulation outcomes. The instruments are freely available through the links on each page. From the inception of the initial website in 2018 through 2022, there were 8,420 unique downloads of the tools from 89 countries, including all 50 U.S. states and 10 Canadian provinces.
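For readers less familiar with psychometric testing, internal consistency is one commonly reported reliability statistic. The short Python sketch below computes Cronbach's alpha on made-up Likert-scale responses; it is illustrative only, and the actual analyses behind these instruments are reported in each tool's cited publications.

```python
import numpy as np

def cronbachs_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 learners x 4 Likert items rated 1-5 (made-up data)
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha = {cronbachs_alpha(responses):.2f}")
```

By convention, values above roughly 0.7 are usually read as acceptable internal consistency, though reliability is only one part of the psychometric evidence these development teams report.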


All Tools Below are Provided with Permission to Use FREELY: General use is already permitted by the creators when users agree to the following statement: “I understand that I have been granted permission by the creators of the requested evaluation instrument to use it for academic and/or research purposes. I agree that I will use the evaluation instrument only for its intended use, and will not alter it in any way. I am allowed to place the evaluation instrument into electronic format for data collection.” If an official ‘Permission to Use’ letter is required, please contact the primary author of the evaluation instrument, including the purpose of the official request (e.g., research, grant), the intended use of the tool, and the population with which it will be used.

Actions, Communication, & Teaching in Simulation Tool (ACTS) was developed to provide an objective way to evaluate confederates' contributions to simulation encounters. Recognizing the need to measure all aspects of simulation in order to make improvements, and noting that confederate errors can undermine educational opportunities, Sanko and colleagues (2016) set out to design and develop a tool that could measure confederates' performance for quality improvement purposes. The ACTS tool is a single-factor, five-item measure using a seven-point, behaviorally anchored scoring scale designed to objectively measure the performance and portrayal accuracy of confederates playing support roles in simulation scenarios.
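To make the scoring schema concrete, here is a minimal, hypothetical Python sketch of how a confederate's five item ratings on the seven-point scale might be summarized. The item labels are placeholders, and summarizing as a mean is an assumption made here for illustration; consult Sanko et al. (2016) for the actual items and scoring instructions.

```python
# Hypothetical ACTS-style scoring sketch. The real ACTS items and behavioral
# anchors are defined in Sanko et al. (2016); these labels are placeholders.
ACTS_ITEMS = ["item_1", "item_2", "item_3", "item_4", "item_5"]

def score_acts(ratings: dict[str, int]) -> float:
    """Average five item ratings, each on the 1-7 anchored scale (assumed)."""
    for item in ACTS_ITEMS:
        if not 1 <= ratings[item] <= 7:
            raise ValueError(f"{item} must be rated 1-7, got {ratings[item]}")
    return sum(ratings[item] for item in ACTS_ITEMS) / len(ACTS_ITEMS)

ratings = {"item_1": 6, "item_2": 5, "item_3": 7, "item_4": 6, "item_5": 5}
print(f"Mean ACTS rating: {score_acts(ratings):.1f} / 7")
```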

Clinical Learning Environment Comparison Survey (CLECS) was developed by Leighton (2015) to evaluate how well learning needs are met in the traditional and simulated undergraduate clinical environments. The CLECS was used in the landmark NCSBN National Simulation Study (Hayden et al., 2014) and is available in Chinese and Norwegian versions. Leighton et al. (2021) modified the CLECS into the CLECS 2.0 in response to the changes in simulation delivery during the COVID-19 pandemic; the revised survey captures students' perceptions of how well their learning needs are met in three environments: the traditional clinical environment, the face-to-face simulated clinical environment, and the screen-based simulation environment. The CLECS 2.0 has now replaced the original CLECS and can be used to compare any two or more clinical learning environments.
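As an illustration of the kind of comparison the CLECS 2.0 supports, the sketch below runs a paired t-test on hypothetical subscale means from the same students rating two environments. This is one plausible analysis under assumed data, not necessarily the analysis the instrument's authors prescribe.

```python
import numpy as np
from scipy import stats

# Hypothetical CLECS 2.0-style data: each position is one student's mean
# subscale score for the same subscale rated in two learning environments.
traditional = np.array([3.2, 3.5, 2.9, 3.8, 3.1, 3.4, 3.0, 3.6])
screen_based = np.array([2.8, 3.1, 2.7, 3.5, 2.9, 3.0, 2.6, 3.2])

# Paired t-test is appropriate because the same students rate both settings.
t_stat, p_value = stats.ttest_rel(traditional, screen_based)
print(f"mean difference = {(traditional - screen_based).mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```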

Facilitator Competency Rubric (FCR) was developed by Leighton, Mudra, and Gilbert (2018) based on the Healthcare Simulation Standards of Best Practice and Patricia Benner's (1984) Novice to Expert Theory. The goal of this instrument is to differentiate the varying levels of competency of the healthcare simulation facilitator across five constructs: preparation, prebriefing, facilitation, debriefing, and evaluation. The FCR is also available in a German version.






ISBAR Interprofessional Communication Rubric (IICR) was developed by Foronda and Bauman (2015) to measure students' communication with physicians using the ISBAR (Identify, Situation, Background, Assessment, Recommendation) format. The researchers noted that students exhibited difficulty in phone communications with physicians during SBE. Because students may not be permitted to communicate with physicians and other healthcare providers in the traditional clinical setting, SBE may be their only opportunity to learn these required skills. This tool was developed for educators to measure the level of communication performed, for the purposes of feedback and instruction.

Quint Leveled Clinical Competency Tool (QLCCT) began when Quint observed weaknesses in the Lasater Clinical Judgment Rubric (LCJR). A group of researchers (Quint et al., 2017) collaborated to develop the QLCCT to address the negative language in the LCJR, which was especially problematic for novice learners, as well as the length of that tool for measuring clinical judgment. The rubric measures clinical competence in the simulation or clinical environment.

Simulation Culture Organizational Readiness Survey (SCORS) was developed by Leighton, Foisy-Doll, and Gilbert (2018) to assist administrators in evaluating institutional and program readiness for simulation integration. The SCORS helps organizational leadership better understand the components that must be addressed PRIOR to purchasing simulation equipment, with the goal of more effective and efficient integration of simulation into the academic or organizational education curriculum.

Simulation Educator Needs Assessment Tool (SENAT) was developed by Britt, Xing, and Leighton (2023) to support a needs assessment and gap analysis for simulation professional development, to provide data on simulation professionals' needs and desire for improvement, and to assist in creating a professional development roadmap for simulation programs and/or individual simulation educators. The Healthcare Simulation Standard of Best Practice: Professional Development served as the foundation for this instrument. The SENAT was designed to assess educators' needs to inform continuing education and orientation requirements.
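As a generic illustration of what a needs-assessment gap analysis can look like (this is not the SENAT's actual scoring, and the topics are invented), the sketch below subtracts self-rated competence from rated importance for each topic and ranks the gaps to suggest professional development priorities.

```python
# Generic gap-analysis sketch: importance and current competence are each
# self-rated on a 1-5 scale; larger gaps suggest higher training priority.
topics = {
    "prebriefing": {"importance": 5, "competence": 3},
    "debriefing": {"importance": 5, "competence": 2},
    "scenario design": {"importance": 4, "competence": 4},
    "evaluation": {"importance": 4, "competence": 2},
}

gaps = {name: r["importance"] - r["competence"] for name, r in topics.items()}
for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: gap = {gap}")
```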



Simulation Effectiveness Tool – Modified (SET-M) is a revision of the Simulation Effectiveness Tool (SET), developed in 2005 by CAE Healthcare (formerly known as METI) as part of the Program of Nursing Curriculum Integration. The SET-M is designed for the evaluation of clinical simulation scenarios. Leighton, Ravert, Mudra, and Macintosh (2015) updated the SET to incorporate simulation standards of best practice and current terminology, having determined that the tool needed updating to capture the desired outcome: learners' perceptions of how well their learning needs were met in the simulation environment. The SET-M is available in Turkish and Spanish versions.

The Inter-Rater Reliability Guide videos were developed and provided by Robert Morris University. Inter-rater reliability must be established to obtain consistent results when simulation evaluation involves more than one rater, and simulation scenarios are often used to assist with this process. These five videos can be used to establish inter-rater reliability when using an instrument to assess how learners perform in a patient care scenario.
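As a concrete example of what establishing inter-rater reliability can look like numerically, the Python sketch below computes Cohen's kappa, a common chance-corrected agreement statistic, for two hypothetical raters scoring the same ten recorded scenarios. The choice of kappa is an assumption for illustration; other statistics, such as the intraclass correlation coefficient, may suit multi-rater or continuous ratings better.

```python
import numpy as np

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    observed = (a == b).mean()  # raw proportion of agreement
    # Chance agreement: sum over categories of each rater's marginal proportions.
    expected = sum((a == c).mean() * (b == c).mean() for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical: two faculty rate 10 recorded scenarios as 0 = not met, 1 = met.
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```

Here the raters agree on 8 of 10 scenarios, but because much of that agreement could occur by chance, kappa is a more conservative 0.52; teams typically retrain and re-rate until agreement reaches their chosen threshold.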

Evaluating Healthcare Simulation Resource

The goal of the Evaluating Healthcare Simulation instruments is to provide tools that yield valid and reliable data that can be used to evaluate all aspects of clinical simulation. These tools are freely accessible and can be used without contacting the developers for permission, which reduces the steps required to implement the tools and increases the adoption of valid, reliable evaluation methods for clinical simulation. These researchers have made a lasting impact on the simulation community and want quality research to continue. HealthySimulation.com is proud to host this valuable resource for all those involved in clinical simulation.

Learn More About the Evaluating Healthcare Simulation Instruments!

