Virtual Simulation Effectiveness Tool for Healthcare Education
According to INACSL standards, the method of participant evaluation should be established before the simulation-based experience occurs. But how do we accomplish this when working in digital learning environments, whether through screen-based simulated patients or through more experiential virtual reality environments? Today Dr. Kim Baily PhD, MSN, RN, CNE, previous Simulation Coordinator for Los Angeles Harbor College and Director of Nursing for El Camino College, takes a look at a modified version of the Simulation Effectiveness Tool (called the SET-M), specifically redeveloped for use in medical simulation and nursing simulation VR experiences.
Why the Simulation Effectiveness Tool – Modified (SET-M) was Developed
The original Simulation Effectiveness Tool was created as part of the Program for Nursing Curriculum Integration (PNCI) when it was developed by METI (now CAE Healthcare) in 2005. However, over the next 10 years, simulation as a pedagogy changed considerably, best practices were developed, and terminology was refined. We decided that the tool required updating in order to capture the desired outcomes: our learners' perceptions of how well we were meeting their learning needs in the simulation environment. Therefore, we set out to create the Simulation Effectiveness Tool – Modified.
The Evaluating Healthcare Simulation website was created to provide healthcare simulation educators and researchers with freely available instruments (tools) that were developed for evaluating different aspects of simulation-based education (SBE). The website's creators and contributors (Leighton, Gilbert, Mudra, Foisy-Doll, Ravert, Foronda, Bauman, Sanko, Gattamorta, Birnbach, Shekhter, and Gu) believe these instruments should be freely available, and you can access them through the Obtain Instrument page.
Using SET-M For Virtual Reality Simulation
The clinical simulation evaluation may be either formative or summative. Formative evaluation fosters personal and professional development and assists the participant in progressing toward achieving objectives or outcomes. Summative evaluation focuses on the measurement of outcomes or achievement of the objectives at a discrete moment in time, often at the end of a program of study. A decision must be made as to whether students will rate their own experiences, whether an educator will rate the learner's behavior during the simulation, or a combination of both.
In addition, if the evaluation is to be completed by an educator, the evaluation should be completed by trained, unbiased objective raters or evaluators using a comprehensive tool (i.e., checklist or rubric that clearly outlines desirable and undesirable behaviors). This is particularly true for high stakes evaluations. Faculty evaluating learners should use a rubric which contains a list of evidence-based practice standards relevant to the learning objectives.
SET-M Evaluation Tool
In 2005, METI (now CAE Healthcare) created an instrument to measure simulation effectiveness as part of a nursing curriculum integration project. As the nursing simulation field grew, changes were made to the tool to reflect developments in simulation pedagogy, and a revised version named the Simulation Effectiveness Tool – Modified (SET-M) was developed. (Leighton, K., Ravert, P., Mudra, V., & Macintosh, C. (2015). Updating the Simulation Effectiveness Tool: Item modifications and reevaluation of psychometric properties. Nursing Education Perspectives, 36(5), 317-323. doi: 10.5480/15-1671).
The SET-M is based on three sets of standards: INACSL Standards of Best Practice, Quality and Safety Education for Nurses (QSEN) and Essentials of Baccalaureate Education for Professional Nursing Practice (American Association of Colleges of Nursing, 2008). The SET-M’s reliability and validity study was completed with 1288 students from two universities at 13 different sites. Factor analysis was completed in an attempt to explain correlations among multiple outcomes as the result of one or more underlying explanations, or factors.
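To make the idea of factor analysis concrete, here is a minimal, purely illustrative Python sketch. All of the data are synthetic and the factor names are borrowed from the SET-M for flavor only: survey items driven by the same underlying latent factor correlate strongly with each other, and that block structure in the correlation matrix is exactly what factor analysis formalizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic respondents (not real study data)

# Two hypothetical latent factors (names borrowed from SET-M for illustration)
confidence = rng.normal(size=n)
debriefing = rng.normal(size=n)

# Four observed survey items, each driven mainly by one latent factor plus noise
items = np.column_stack([
    confidence + rng.normal(scale=0.5, size=n),  # item 1 loads on "confidence"
    confidence + rng.normal(scale=0.5, size=n),  # item 2 loads on "confidence"
    debriefing + rng.normal(scale=0.5, size=n),  # item 3 loads on "debriefing"
    debriefing + rng.normal(scale=0.5, size=n),  # item 4 loads on "debriefing"
])

# Items sharing a factor correlate strongly; items across factors do not.
corr = np.corrcoef(items, rowvar=False)
```

A full factor analysis (run in a statistics package) extracts these latent dimensions formally; here, the block pattern in `corr` is enough to show the intuition behind grouping SET-M items into prebriefing, learning, confidence, and debriefing factors.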
The four factors identified were prebriefing, learning, confidence and debriefing. Cronbach's alpha was used to measure the reliability of the tool. Cronbach's alpha is the most common measure of internal consistency ("reliability"). It is most commonly used when multiple Likert questions are used in a survey/questionnaire that form a scale and a determination of the scale's reliability is needed. A value above 0.7 is considered good. The Cronbach's alpha for each of the factors in the SET-M ranged between 0.83 and 0.91. The overall alpha was 0.94.
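Cronbach's alpha is also simple to compute directly: for a scale of k items, it compares the sum of the individual item variances to the variance of respondents' total scores. Below is a minimal Python sketch using hypothetical Likert responses (not SET-M data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students answering 4 Likert items (1-5)
responses = [
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 4],
    [3, 3, 3, 4],
]
alpha = cronbach_alpha(responses)
```

When items move together (students who rate one item highly tend to rate the others highly too), the total-score variance dwarfs the sum of item variances and alpha approaches 1, which is why values above 0.7 are read as good internal consistency.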
The SET-M may be used to evaluate the learner's perception of the effectiveness of a simulation in helping the learner meet their learning needs. Ideally, the SET-M should be administered after every simulation until consistent results are achieved, or whenever the simulation changes. If any SET-M responses are low, faculty should focus on revising their strategies to better meet the needs those responses reveal.
The educator should focus on how they can change the experience to better meet the learning needs of the students. The SET-M can be used as part of faculty evaluation if live debriefing is part of the virtual learning experience. Note that the SET-M has been used in both medical simulation and nursing simulation, as well as in other healthcare professions.
Call For Research Support: CLECS 2.0 Recruitment Information
With the sudden changes brought about by the COVID-19 pandemic, there was an unexpected disruption to your clinical experiences. Instead of caring for patients in the clinical setting or in the simulation lab, you were forced to provide care in screen-based simulation environments. Kim Leighton, PhD, RN, ANEF, FAAN and Colette Foisy-Doll, MSN, RN are asking for your help in answering questions so that they can understand your perceptions of how well your learning needs have been met.
Please support them in their research here; the survey is anticipated to take 15 minutes or less to complete. It is online, and no identifying information is collected, meaning your answers are anonymous and will not be linked to you or your school. Informed consent is provided for you to review, and you can freely choose whether to participate or opt out by closing your browser at any time. This research study has been approved by the Research Ethics Board of MacEwan University, Edmonton, Alberta, Canada. If you are in nursing school anywhere in the world and have cared for at least one human patient in a traditional clinical environment, one simulated patient in the f2f simulation environment, and one screen-based patient since January 1, 2020, we would appreciate your participation.
Have a story to share with the global healthcare simulation community? Submit your simulation news and resources here!
Dr. Kim Baily, MSN, PhD, RN, CNE has had a passion for healthcare simulation since she pulled her first sim man out of the closet and into the light in 2002. She has been a full-time educator and director of nursing and was responsible for building and implementing two nursing simulation programs at El Camino College and Pasadena City College in Southern California. Dr. Baily is a member of both INACSL and SSH. She serves as a consultant for emerging clinical simulation programs and has previously chaired the Southern California Simulation Collaborative, which supports healthcare professionals working in healthcare simulation in both hospitals and academic institutions throughout Southern California. Dr. Baily has taught a variety of nursing and medical simulation-related courses in multiple forums, such as on-site healthcare simulation debriefing workshops and online courses. Since retiring from full-time teaching, she has written over 100 healthcare simulation educational articles for HealthySimulation.com while traveling around the country via her RV out of California.