The Clinical Learning Environment Comparison Survey (CLECS) was created by Kim Leighton, PhD, RN, CHSOS, CHSE-A, ANEF, FSSH, FAAN, as part of the Evaluating Healthcare Simulation tools. Leighton (2015) developed the CLECS to evaluate how well learning needs are met in the traditional and simulated undergraduate clinical environments, and the instrument was used in the landmark NCSBN National Simulation Study (Hayden et al., 2014). Leighton et al. (2021) modified the CLECS into the CLECS 2.0 in response to the changes in simulation delivery during the COVID-19 pandemic. The CLECS 2.0 assessed students’ perceptions of how well their learning needs were met in three environments: the traditional clinical environment, the face-to-face simulated clinical environment, and the screen-based simulation environment. This HealthySimulation.com article reviews the CLECS and CLECS 2.0. The CLECS has now been retired and replaced by the CLECS 2.0, which can be used to compare any two or more clinical learning environments in undergraduate nursing; it has also been used in the clinical environments of other healthcare professions.
The Clinical Learning Environment Comparison Survey (CLECS) Development
In the mid-2000s, discussions were occurring in earnest as to whether simulation could replace traditional clinical hours in the nursing curriculum. At the same time, the simulation community was debating the pros and cons of using simulation for high-stakes testing. Dr. Leighton believed that before deciding whether one environment could replace the other, one first needed to determine how similar or different the two learning environments were. The CLECS was therefore developed to evaluate how well undergraduate nursing students believed their learning needs were met in the traditional clinical environment and in the simulated clinical environment. This tool was used in the landmark NCSBN National Simulation Study (Hayden et al., 2014).
The CLECS was developed from topics identified in practice and in the simulation and nursing literature. The survey covered all aspects of clinical care, from the time a learner received their patient assignment through post-conference. A 12-member panel of experts (11 nursing faculty and 1 expert survey developer) reviewed the survey’s content, wording, and design. Subscales were defined through an iterative process that included five undergraduate nursing faculty who taught in both clinical and simulation environments, and the subscales underwent refinement during two pilot studies. The subscales of the CLECS were:
- Self-efficacy (4 items)
- Teaching–Learning Dyad (5 items)
- Holism (6 items)
- Communication (4 items)
- Nursing Process (6 items)
- Critical Thinking (2 items)
Reliability and Validity of the CLECS
The study was conducted at three universities (two baccalaureate programs and one associate degree program) in three regions of the United States. The sample comprised 422 undergraduate nursing students who had provided care to at least one simulated patient and one human patient. A confirmatory factor analysis (CFA) identified six subscales, along with two items that did not align with any subscale in the original CLECS study. These items were treated independently in the CLECS 2.0 study.
- First item: “Evaluating the effects of medication administration to the patient” was determined to be an example of clinical reasoning. After consultation with Dr. Patricia Benner, the Critical Thinking subscale was renamed Clinical Reasoning, and the item was included in that subscale.
- Second item: “Thoroughly documenting patient care” was determined to be subsumed by the item “Communicating with interdisciplinary team” and was removed from the CLECS 2.0 following data analysis.
The CLECS is available in Norwegian and Chinese.
Modification to the CLECS 2.0
In a time of unprecedented change during the COVID-19 pandemic, educators worldwide were forced to quickly move clinical simulation activities to a screen-based format. The original Clinical Learning Environment Comparison Survey, used in the landmark National Council of State Boards of Nursing simulation study, was revised to include screen-based simulation! The CLECS 2.0 was used to learn pre-licensure nursing students’ perceptions of how well their learning needs were met in three environments: the traditional clinical environment, the face-to-face simulated clinical environment, and the screen-based simulation environment.
Whether you have been teaching with screen-based simulation for years, or only for 2 weeks, you can use the CLECS 2.0 to learn if your clinical teaching methods are effectively helping your students to learn, while collecting data to support your decisions related to using screen-based simulation as part of your pandemic response.
*View the LEARN CE/CME Platform Webinar Evaluating Healthcare Simulation in the Days of COVID-19 and Beyond to learn more!*
CLECS 2.0 Research Study Findings
The research question was “How well do the students believe their learning needs were met in the traditional clinical environment (TC), face-to-face simulation (F2FS) environment, and screen-based simulation (SBS) environment?” The initial sample size was 174; however, many surveys were incomplete and were excluded from the analysis, leaving a final sample of 113. Participants were from the US, Japan, and Canada. About half were in baccalaureate programs, 36% in associate degree programs, 6% in diploma programs, and 3% in licensed practical nursing programs.
Item scores were typically highest for the traditional clinical environment and lowest for SBS.
- Traditional Clinical vs F2FS: differences in only two items, favoring traditional clinical
- Traditional Clinical vs SBS: all learning needs better met in traditional clinical
- F2FS vs SBS: differences in 10 of 29 items, favoring F2FS
The validity evidence suggests that this instrument can be used regardless of the type of clinical learning environment; however, if items are marked NA, the reason must be considered, especially if SBS is allowed to replace traditional clinical and F2FS activities. As noted above, two items did not align with subscales in the original CLECS study and were treated independently in the CLECS 2.0 study. Unfortunately, the sample size was too low to establish reliability, so further studies of the CLECS 2.0 are needed.
Using the CLECS 2.0
The CLECS 2.0 can help you learn whether your clinical teaching methods are effectively helping your students to learn, while collecting data to support your decisions about screen-based simulation. When using the CLECS 2.0, please substitute the headings with the clinical and simulation environments you are studying (e.g., traditional clinical, virtual reality, manikin-based simulation).
The CLECS 2.0 can be used to evaluate facilitators and the curriculum. While tested with undergraduate nursing students, other disciplines have also used the tool with their profession.
It is recommended that the CLECS 2.0 be completed one time during the course of the program, prior to practicum experiences. A simulationist could decide to use the tool at the end of each semester or at the end of each academic year. If the program is new or struggling, more frequent use of the tool is recommended.
The CLECS 2.0 is useful for comparing the learner’s perception of how well their learning needs were met in two or more clinical environments. Results can be used in three ways:
- Evaluation of Items and Subscales for Improvement: The goal is to create simulated clinical experiences that are equivalent to the traditional clinical experience, especially if your program is substituting clinical with simulation. Identify specific items and subscales where improvement is needed in the simulation lab. Create changes that will enhance the fidelity to become more realistic. You should also look at lower-scored items for traditional clinical and evaluate your clinical activities accordingly.
- Evaluation of the Facilitator: When evaluating the facilitator’s performance, the key is to determine whether they are meeting the learning needs of the participants. After data are collected each semester or year, individual items, subscales, and overall scores should be evaluated to determine the facilitator’s effectiveness, both for each course facilitated and overall. Over time, results should be trended, and decisions can then be made as to whether the facilitator is performing to expectations or requires development or remediation.
- Evaluation of the Simulation Operations Personnel: Similarly, when evaluating the performance of the simulation operations personnel, it is important to determine whether they are meeting the learning needs of the participants. At the end of each survey completion period, individual items, subscales, and overall scores should be evaluated to determine the operations personnel’s effectiveness, both for each course and overall. Over time, results should be trended, and decisions can then be made as to whether the operator is performing to expectations or requires development or remediation.
There is no established method to score the CLECS 2.0. Leighton et al. suggest that the simulationist focus first on the lowest-scoring items and subscales, prioritizing the most important changes as well as those that can be made quickly and easily. Use these low-scoring items as a needs assessment for creating a facilitator or simulation operations personnel development plan. An example is provided for those considering use of the tool.
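Since there is no official scoring method, a program could implement the suggested approach (rank the lowest-scoring items and roll them up by subscale) in a simple script. The sketch below is purely illustrative: the item names, subscale assignments, and 1–4 rating scale are assumptions for demonstration, not actual CLECS 2.0 items or an endorsed scoring procedure. NA responses are represented as `None` and excluded from averages, consistent with the caution above about considering why items are marked NA.

```python
# Illustrative sketch only -- NOT an official CLECS 2.0 scoring method.
# Item names, subscale mapping, and the 1-4 rating scale are hypothetical.
from statistics import mean

# Hypothetical responses: item -> list of learner ratings.
# None represents an "NA" response and is excluded from averages.
responses = {
    "Self-efficacy: confidence in providing care": [3, 4, 2, None, 3],
    "Communication: interacting with the interdisciplinary team": [2, 2, 3, 1, 2],
    "Nursing Process: prioritizing patient care": [4, 3, 4, 4, None],
}

# Hypothetical mapping of items to their subscales.
subscale_of = {
    "Self-efficacy: confidence in providing care": "Self-efficacy",
    "Communication: interacting with the interdisciplinary team": "Communication",
    "Nursing Process: prioritizing patient care": "Nursing Process",
}

def item_means(data):
    """Mean rating per item, ignoring NA (None) responses."""
    return {item: mean(r for r in ratings if r is not None)
            for item, ratings in data.items()}

def subscale_means(item_avgs, mapping):
    """Average the item means within each subscale."""
    groups = {}
    for item, avg in item_avgs.items():
        groups.setdefault(mapping[item], []).append(avg)
    return {sub: mean(vals) for sub, vals in groups.items()}

avgs = item_means(responses)
# Lowest-scoring items first: these become the improvement priorities.
priorities = sorted(avgs.items(), key=lambda kv: kv[1])
for item, avg in priorities:
    print(f"{avg:.2f}  {item}")
print(subscale_means(avgs, subscale_of))
```

In practice the same ranking would be run per environment (e.g., traditional clinical vs. F2FS vs. SBS) so the lowest-scoring items in each can feed the needs assessment and development plan described above.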
Learn More About All the Evaluating Healthcare Simulation Tools!
References:
- Hayden, J. K., Smiley, R. A., Alexander, M., Kardong-Edgren, S., & Jeffries, P. R. (2014). The NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. Journal of Nursing Regulation, 5(2), C1-S64. Retrieved from https://www.ncsbn.org/JNR_Simulation_Supplement.pdf.
- Leighton, K. (2015, January). Development of the clinical learning environment comparison survey. Clinical Simulation in Nursing, 11(1), 44-51. http://dx.doi.org/10.1016/j.ecns.2014.11.002
- Leighton, K., Kardong-Edgren, S., Schneidereith, T., Foisy-Doll, C., & Wuestney, K. (2021). Meeting undergraduate nursing students’ clinical needs: A comparison of traditional clinical, face-to-face simulation, and screen-based simulation learning environments. Nurse Educator, 46(6), 349-354. https://doi.org/10.1097/NNE.0000000000001064