
For years, academic and clinical educators and leaders have questioned the effectiveness of the traditional clinical apprenticeship model used in undergraduate nursing education. To better determine a quantifiable level of effect, three healthcare simulation experts, an educational researcher, and a librarian conducted a systematic review aimed at examining the best available evidence upon which to base decisions regarding the use of traditional clinical experience with prelicensure nursing learners. Their findings were recently published in the article “Traditional Clinical Outcomes in Prelicensure Nursing Education: An Empty Systematic Review.”

The expert authors who contributed to this publication include Kim Leighton, Ph.D., RN, FAAN; Suzie Kardong-Edgren, Ph.D., RN, FAAN; Angela M. McNelis, Ph.D., RN, FAAN; Colette Foisy-Doll, MSN, RN, ANEF; and Elaine Sullo, MLS, MAEd, a professional librarian. To conduct their systematic review, they followed guidelines from the Joanna Briggs Institute and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Nine electronic databases were searched, and a full-text review was completed for the 118 articles that met the inclusion criteria.

Ultimately, no studies both met the inclusion criteria and reported clinical learning outcomes measured with valid and reliable instruments, resulting in an empty review. The studies reviewed were commonly self-reports of perceptions and confidence, and/or had serious methodological problems. The review therefore found no sufficient evidence to support student learning in traditional clinical models.

According to the authors, the scope of nursing practice and the complexity of patients require higher-order thinking skills, the ability to prioritize, and leadership in interdisciplinary care environments. They believe this review raises serious concerns about how nurse educators assess learning in traditional clinical environments.

“This study came to be because we were talking about what a high standard simulation has been held to, to demonstrate efficacy — and rightly so,” said Kardong-Edgren, Ph.D., RN, ANEF, CHSE, FSSH, FAAN, an associate professor of health professions education at the MGH Institute of Health Professions. “But had anyone ever really looked at traditional clinical with the same scrutiny? We really didn’t know, and we decided to take a look ourselves and turn it into a study.”

Kardong-Edgren added that she and her colleagues knew they might be perceived as biased, or might hold some unconscious bias toward healthcare simulation. For this reason, they asked a clinical education researcher to join them to help guard against any known or potential biases. Kardong-Edgren recalls that Angela McNelis came straight to mind, and the team also decided to invite a professional librarian to help them conduct a thorough search of the literature.

“We worked away, reading abstracts independently and meeting periodically to agree on disputed ones. Abstracts were from 1938 to 2018, and then we read the final articles that met the study criteria,” Kardong-Edgren said. “We were not really aware until close to the end of the review that there were going to be very few articles — and then no articles.”

Kardong-Edgren shared that the authors did not expect these findings; as their librarian explained, they had a rather unusual situation on their hands. The result was an empty review, something many of the authors had never heard of. An empty review usually occurs when a topic is so new that no published articles on it yet meet the strict criteria. Clinical education, however, is not a new idea; the topic is simply not well studied, or its outcomes have not been published.

“It is common practice to mention the articles that came closest to the study criteria, so we reviewed four papers that came closest to what we were looking for. Each was flawed in significant ways, for purposes of this review,” Kardong-Edgren explained. “So what do these unexpected results mean? First, there is a lot of research that could be done on objective clinical outcomes.”

“We need to start by acknowledging that as a profession, we have not attended to developing tools needed to assess student learning,” McNelis said. “We need more evaluation and instrument development experts to contribute to these efforts. Moreover, we need to be ready to let go of our long-held belief that the traditional clinical apprenticeship model is the gold standard for education. With valid and reliable measures, we may find that certain learning takes place outside of this model (such as in simulated learning environments) and we need to be OK with enacting that change,” she continued. “It is time to critically evaluate what learning environments best support competency development, and strategically use those to achieve specific learning outcomes.”

Another question arose: what is clinically expected of a learner at the end of a rotation, beyond surviving the clinical hours? The authors learned that, apparently, the experience itself has been considered sufficient for many years.

“As McNelis stated, we need reliable and valid clinical evaluation tools, and there are almost none. In fact, healthcare simulation evaluation tools should be tested in the clinical environment, since sim stands in proxy for clinical,” Kardong-Edgren said. “The standardization of tools within a school or within the profession could be helpful.”

Interestingly, the American Association of Colleges of Nursing just published new Essentials that revolve around competency. Kardong-Edgren believes that educators are going to have to think about this much more, including how to demonstrate clinical competency. McNelis adds that a central issue underpinning the new competencies is that traditional clinical rotations have high variability and unpredictability, which prevent equitable experiences for students. Thus, the ability to both ensure and measure student competency is exceedingly difficult. “We suspect simulation will be involved,” Kardong-Edgren said.

Read the Full Lit Review Article Online

Lance Baily
BA, EMT-B
Founder / CEO
Lance Baily, BA, EMT-B, is the Founder / CEO of HealthySimulation.com, which he started in 2010 while serving as the Director of the Nevada System of Higher Education’s Clinical Simulation Center of Las Vegas. Lance also founded SimGHOSTS.org, the world’s only non-profit organization dedicated to supporting professionals operating healthcare simulation technologies. His co-edited book, “Comprehensive Healthcare Simulation: Operations, Technology, and Innovative Practice,” is cited as a key source for professional certification in the industry. Lance’s background also includes serving as a Simulation Technology Specialist for the LA Community College District, EMS firefighting, Hollywood movie production, rescue diving, and global travel. He and his wife live with their two brilliant daughters and one crazy dachshund in Las Vegas, Nevada.