INACSL Virtual Conference Day 2 Session Recap: High Stakes Evaluation and In Situ Simulation
The INACSL 2020 Virtual Conference offered its second day of live and prerecorded sessions. Yesterday, Dr. Kim Baily PhD, MSN, RN, CNE, previous Simulation Coordinator for Los Angeles Harbor College and Director of Nursing for El Camino College, covered sessions on RRT Algorithm Development Through Simulation & Context Based Learning. Today she covers day 2 sessions from the International Nursing Association for Clinical Simulation and Learning on evaluating high-stakes medical-surgical practicums and using healthcare simulation to optimize nursing responses to in-hospital cardiac arrest events.
Successful Use of Simulation For High-Stakes Evaluation In A Medical Surgical Practicum Course
By Jamie Hansen and Megan Holz from Carroll University, Wisconsin.
Worldwide, safety in practice is among the largest areas of concern for nursing students, instructors, and patients. Simulation is one strategy that can identify gaps in knowledge and safety concerns before they reach the bedside. Use of simulation for high-stakes evaluation is one means of ensuring that safe practitioners enter the profession. The INACSL Standards of Best Practice: SimulationSM Participant Evaluation (2016) present criteria for nurse educators to adopt when incorporating simulation for high-stakes evaluation into their programs. High-stakes evaluations are those that potentially have significant consequences for the learners. Implementing high-stakes testing is complex, and many programs have shied away from it.
The literature offers few examples of successful high-stakes simulations in nursing programs. Faculty training and simulation development are key components of high-stakes testing, as is the creation of multiple parallel scenarios to ensure testing security. All scenarios should consistently follow standardized scripts and should undergo pilot testing before implementation. Inter-rater reliability should be established at the school where the testing will occur, and multiple faculty should clearly define what the student must do to pass the evaluation.
The INACSL standards for participant evaluation include:
- Use of more than one evaluator
- Video recording
- Use of a previously validated evaluation tool with known reliability and validity
- Predetermined minimal expectations
All faculty evaluators received training on use of the evaluation tool and were required to complete the Creighton Competency Evaluation Instrument (C-CEI) online training. Faculty collaborated on minimal expectations of student performance before the test date. Every high-stakes simulation was video recorded and evaluated by two trained evaluators who had been involved with the course throughout the semester. To ensure student preparation for an end of semester simulation for high-stakes evaluation, students were exposed to the scoring tool and evaluation process with practice simulations leading up to the test date. Students were exposed to an average of 14 similar scenarios/skills throughout the semester and were introduced to the C-CEI.
Students worked with a partner, and each pair was able to participate in a practice simulation together. The pair was assigned three potential scenarios and given access to each scenario’s Electronic Health Record and list of patient medications. In addition, students were required to write tentative care plans for all three parallel scenarios, although they participated in only one scenario for evaluation. The students did not know which simulation would be assigned until immediately before the evaluation simulation. Students were scored using the C-CEI on a skill, a medication administration, and a change in patient condition that required a clinical judgement. In each scenario, both students were responsible for a full focused assessment, vital signs, and any needed reassessment.
NLN templates containing expected interventions and scripted cues were used to help keep the students on track. Students were made aware of course benchmarks and remediation policies. If students were deemed unsuccessful in their initial high-stakes simulation evaluation, they could remediate and retest up to two additional times before receiving an unsatisfactory course grade. The simulation for high-stakes evaluation process is reviewed annually by the department curriculum committee. This session provided a detailed and careful guide for implementing high-stakes testing and is highly recommended for any department thinking of instituting it.
Optimizing Nursing Response to In-Hospital Cardiac Arrest Events Using In-Situ Simulation
By Sarah Adcock and Virginia Muckler from Duke University School of Nursing in Durham, North Carolina.
Only 21% of patients who experience an in-hospital cardiac arrest (IHCA) survive to hospital discharge. Hesitant and inadequate nursing responses that delay cardiopulmonary resuscitation (CPR) can decrease the chances of survival by 7-10% per minute. The knowledge and skills obtained in Basic Life Support (BLS) training decrease significantly before the two-year re-certification training is required. In situ simulation has been used to optimize the nursing response to IHCA. The aim of this study was to determine the effect of in situ training on nurses' response to IHCA and to determine knowledge retention 4 months post training.
The study was based on a pre-post simulation design. Nursing staff from an acute care floor of a large academic medical center participated in a baseline 5-minute cardiac arrest simulation during their normal shift. A debriefing using the GAS method (Gather, Analyze, Summarize) was conducted to review the simulation and skills. Following the debrief, learners participated in a second 5-minute cardiac arrest simulation, which gave them additional practice. Changes in responder performance between the baseline and repeat scenarios were measured and compared with a paired t-test. Multi-step observational checklists were completed for each learner.
The quality of chest compressions during the baseline and repeat simulations was also assessed using the McNemar test. Role confidence was measured by pre and post-intervention surveys and assessed using the median scores from questions with a Wilcoxon signed ranks test. A 4 month follow up in situ simulation was completed to assess knowledge retention.
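The statistical workflow described above can be sketched in Python with SciPy: a paired t-test for time-to-task, McNemar's test for the paired yes/no chest-compression quality ratings, and a Wilcoxon signed-rank test for the confidence surveys. All values below are invented for illustration and are not the study's data; McNemar's test is computed here as an exact binomial test on the discordant pairs, a standard equivalent formulation.

```python
# Hypothetical sketch of the three analyses described above.
# All data are invented for illustration; none are the study's values.
from scipy import stats

# Paired t-test: time-to-task (seconds), baseline vs. repeat simulation
baseline_times = [42, 35, 50, 28, 61, 44, 39, 55]
repeat_times = [30, 28, 41, 22, 48, 35, 31, 47]
t_stat, p_paired = stats.ttest_rel(baseline_times, repeat_times)

# McNemar's exact test for paired binary outcomes (adequate compressions
# yes/no in each simulation), as a binomial test on the discordant pairs:
# b = adequate only at baseline, c = adequate only at repeat
b, c = 2, 9
p_mcnemar = stats.binomtest(min(b, c), b + c, 0.5).pvalue

# Wilcoxon signed-rank test on paired survey scores (hypothetical 1-5
# Likert role-confidence ratings before and after the intervention)
pre = [2, 3, 2, 4, 3, 2, 3, 2]
post = [4, 4, 3, 5, 4, 3, 4, 4]
w_stat, p_wilcoxon = stats.wilcoxon(pre, post)

print(round(p_paired, 4), round(p_mcnemar, 4), round(p_wilcoxon, 4))
```

Each test matches its data type: the t-test for continuous paired times, McNemar for paired dichotomous outcomes, and Wilcoxon for ordinal paired scores where a t-test's normality assumption is doubtful.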
Results: Comparison of Initial In Situ Simulation and Post-Debriefing
- Many tasks were completed 100% of the time in both simulations, e.g. recognition of unresponsiveness, calling for help, placement of a backboard, and CPR initiation.
- Several tasks were completed at a higher rate following debriefing, e.g. pulling the bed away from the wall (75% versus 92%), starting an IV (67% versus 83%), and setting up working suction (58% versus 75%), although none of these differences were statistically significant.
- Time to task was reduced for 15/16 tasks (hand-off to the code team was the exception), and in 11/16 tasks the reduction was statistically significant. The authors suggest that some time-to-task differences were not significant because those tasks, such as pulling the bed away from the wall and identifying a documenter, are rarely taught in BLS; these tasks relate to preparing the room for the code team's arrival.
Results: Repeat Simulation Compared to 4-Month Follow-Up
- Seven tasks had increased time to task at 4 months (statistically significant at the p < 0.05 level), e.g. initiating ventilation and initiating CPR.
The authors concluded that in situ simulation followed by debriefing and a repeat simulation reduced the time to task in 15/16 tasks; however, the 4-month follow-up indicated that some skills had deteriorated. They suggest that repeating in situ simulation at regular intervals following initial training might be beneficial. The authors noted that some participants in the 4-month follow-up simulation differed from those in the initial sessions; because these participants had not received the initial training, they could account for the increased times to task observed at follow-up.
Dr. Kim Baily, MSN, PhD, RN, CNE has had a passion for healthcare simulation since she pulled her first sim man out of the closet and into the light in 2002. She has been a full-time educator and director of nursing and was responsible for building and implementing two nursing simulation programs at El Camino College and Pasadena City College in Southern California. Dr. Baily is a member of both INACSL and SSH. She serves as a consultant for emerging clinical simulation programs and previously chaired the Southern California Simulation Collaborative, which supports healthcare professionals working in healthcare simulation in hospitals and academic institutions throughout Southern California. Dr. Baily has taught nursing and medical simulation-related courses in a variety of forums, such as on-site healthcare simulation debriefing workshops and online courses. Since retiring from full time teaching, she has written over 100 healthcare simulation educational articles for HealthySimulation.com while traveling around the country via her RV out of California.