September 1, 2022 | By Lance Baily

Quint Leveled Clinical Competency Tool for Nursing Learners

A new Quint Leveled Clinical Competency Tool (QLCCT) is now available to nursing learners thanks to the clinical simulation website Evaluating Healthcare Simulation. The website was created to provide healthcare simulation educators and researchers with freely available instruments (tools) developed for evaluating different aspects of simulation-based education (SBE). True to that purpose, the Quint Leveled Clinical Competency Tool includes 10 major concepts, each with behavioral descriptors used for rating performance. This HealthySimulation.com article explains why and how the medical simulation tool was developed and shares more about the major concepts within the tool.

Ultimately, the QLCCT resulted from one educator wanting to improve upon what she saw as a weakness in the Lasater Clinical Judgment Rubric (LCJR), according to the Evaluating Healthcare Simulation website. Shelly Quint found that her clinical faculty were reluctant to score learners accurately using the very popular LCJR because of the negative verbiage used in the tool, especially for novice performance. The length of the LCJR was also critiqued; however, clinical judgment is a complex concept, and the website’s creators explain that condensing it into a one-page check-box form would fail to capture the skill required to demonstrate clinical judgment in a complex patient care event.

“The LCJR was important to nursing educators because the thing that people really go to school for, besides learning their skills, is clinical judgment,” said Suzan “Suzie” Kardong-Edgren Ph.D., RN, ANEF, CHSE, FSSH, FAAN, who developed the QLCCT. “[The LCJR] was a really important tool to develop, and it was well done. People love it, except that it’s really long.”


She also explained that when adjunct or junior faculty members noticed the negative verbiage used to describe learners, they began to dislike using the tool and wanted to give learners higher scores than they deserved. Learners did not like to see negative comments made about them either, leading to increased dissatisfaction with the LCJR.

As a result of these needs, the QLCCT was developed specifically for use with nursing learners. A serendipitous meeting of two authors in 2017 then led to a funded grant for a multisite study in Washington state, which included an expert instrument development consultation. Yet at that time healthcare simulation was still relatively new in the state, and few schools could commit to participating in the validation and testing of the tool.

Nonetheless, work continued periodically through another two rounds of testing, with two consulting instrument experts working alongside a statistician. The Evaluating Healthcare Simulation website reports a reliability coefficient of 0.87 between student academic level and student scores, and an internal consistency of 0.83 by coefficient alpha.

“Those are the numbers that say yes, based on the research that we did, the tool really does predict if you are a sophomore, a junior, or a senior-type nursing student (0.83),” explained Kardong-Edgren. “Then the internal consistency means that between the items within the tool, they really do measure what they’re supposed to measure at the level they’re supposed to measure them. (0.87)”


Today, each of the 10 major concepts in the QLCCT has between one and four behavioral description lines, and each line is rated from 1 (novice nurse) to 4 (graduate nurse). The lowest possible score on the QLCCT is 10, and the highest (usually achieved only by a graduating RN) is 40.

After observing learners in the healthcare simulation or clinical setting, the instructor marks each behavioral description line with an X. It is not unusual for a learner to perform each behavior at a different level within a concept; the final score for the concept is the lowest-scored item. This prevents grade inflation and allows both the instructor and the learner to see where improvement and progression are needed.
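To make the scoring rule concrete, below is a minimal sketch in Python, not the official QLCCT scoring sheet; the concept names and ratings are hypothetical, but the logic follows the description above: each concept takes the lowest rating among its behavioral lines, and the total is the sum across all 10 concepts, giving a possible range of 10 to 40.

```python
# Minimal sketch of the QLCCT scoring logic described above.
# Concept names and the ratings shown here are hypothetical examples,
# not values taken from the actual tool.

# Ratings an instructor marked for one learner: one list per concept,
# one entry per behavioral description line, each rated 1 (novice) to 4 (graduate).
concept_ratings = {
    "Concept 1": [3, 2],          # lowest line is 2, so this concept scores 2
    "Concept 2": [4, 4, 3],
    "Concept 3": [2],
    "Concept 4": [3, 3, 2, 2],
    "Concept 5": [1, 3],
    "Concept 6": [4],
    "Concept 7": [2, 2],
    "Concept 8": [3, 4, 4],
    "Concept 9": [2, 3],
    "Concept 10": [3, 3],
}

# A concept's score is the lowest-rated behavioral line within it,
# which prevents grade inflation.
concept_scores = {name: min(lines) for name, lines in concept_ratings.items()}

# The total is the sum across all 10 concepts: 10 (all novice) to 40 (all graduate level).
total_score = sum(concept_scores.values())

print(concept_scores)
print(f"Total QLCCT score: {total_score} (possible range 10-40)")
```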

“I think [the QLCCT] helps us all get on the same page with some validated tools that help us describe the terms that we say we want students to be able to do or use, or be able to demonstrate,” Kardong-Edgren said. “When we have new faculty or adjunct faculty who might be in simulation or might be in clinical and they don’t know what students are supposed to look like now – especially if they’re coming in brand new without any training – the tool helps get people up to speed on what it is we’re trying to do with these students. These tools help us give a printed picture of what we’re asking people to be able to see and do.”



As both the LCJR and the Quint are designed, the highest-level column describes either a student who is about to graduate (on the Quint) or a practicing nurse (on the LCJR). The levels below that are what learners strive to achieve as they get better, go through school, learn more, and theoretically complete more clinical time, Kardong-Edgren noted. She said that at this point learners begin to understand what the job is and how their brains are supposed to be working when they are taking care of these patients. Thus, these tools help “everybody see where they’re going.”

More About Evaluating Healthcare Simulation

Evaluating Healthcare Simulation was established by Kim Leighton, Ph.D., RN, CHSOS, CHSE, ANEF, FSSH, FAAN; Gregory E. Gilbert, EdD, MSPH, PStat; Vickie Mudra, ACC, CPC, C-IQ, ELI-MP, MPH; Colette Foisy-Doll, RN, MSN, CHSE, ANEF; Patricia Ravert, Ph.D., RN, CNE, ANEF, FAAN; Cynthia Foronda, Ph.D., RN, CNE, CHSE, ANEF; Eric B. Bauman, Ph.D., FSSH, PMHNP-BC, APNP; Jill S. Sanko, Ph.D., MS, ARNP, CHSE-A, FSSH; Karina Gattamorta, Ph.D., EdS; Ilya Shekhter, MS, MBA, CHSE; and David Birnbach, MD, MPH.

The creators of this healthcare simulation website believe that evaluating SBE as pedagogy must go well beyond satisfaction and confidence, and that using instruments with established reliability and validity to evaluate outcomes is equally important. Their work, and that of other healthcare simulation researchers, has led to five instruments that have been found useful for those working in education and clinical environments. In the 4.5 years since the site’s inception, Evaluating Healthcare Simulation has recorded 8,420 unique tool downloads spanning 89 countries!

Download the Quint Leveled Clinical Competency Tool Here

