Abstract
Background Virtual simulators have been widely implemented in medical and surgical training, including ophthalmology. The increasing number of published articles in this field mandates a review of the available results to assess current technology and explore future opportunities.
Method A PubMed search was conducted and a total of 10 articles were reviewed.
Results Virtual simulators have shown construct validity in many modules, successfully differentiating user experience levels during simulated phacoemulsification surgery. Simulators have also been associated with improvements in wet-lab performance. The implementation of simulators in residency training has been associated with a decrease in cataract surgery complication rates.
Conclusions Virtual reality simulators are an effective tool for measuring performance and differentiating trainee skill level. Additionally, they may be useful in improving surgical skill and patient outcomes in cataract surgery. Future opportunities lie in leveraging technical improvements in simulators for education and research.
- Medical Education
- Treatment Surgery
Introduction
Surgical simulators represent an important step in narrowing the gap between training and clinical practice. Surgical education has long relied on Halstedian, apprenticeship-style training.1 In the operating room, anatomical relationships are vividly demonstrated along with the consequences of each action. Unfortunately, the patient can be an unforgiving teacher: there are real-time pressures during the procedure, and patient safety must come first. Safety and efficacy cannot be sacrificed for educational purposes.2
Surgical simulators have proliferated as computing power has increased according to Moore's Law.1 Interactive platforms have been employed to better demonstrate complex anatomical relationships.3–5 Stereoscopy can produce a more realistic three dimensional (3D) perspective, and more advanced systems can even imitate dynamic perspective with user head/eye movements. Haptic feedback can simulate tactile characteristics. All of these simulation components have been employed for a variety of surgical simulators, with application for laparoscopic, endoscopic, and microsurgical training.6–8
The need to develop surgical simulators for cataract surgery education is driven by a number of forces. The Accreditation Council for Graduate Medical Education (ACGME) is in the process of mandating milestones that will monitor the development of surgical competencies during residency training in surgical specialties.9 Simulator-based surgical training has been adopted by many surgical specialties to provide training in a controlled environment and to provide objective assessment of skills and tracking of progress.
Three cataract surgery simulators have been studied: EYESi (VRmagic), PhacoVision (Melerit Medical), and MicrovisTouch (ImmersiveTouch). Recently published articles have compared these simulators in terms of technical specifications.10,11 Table 1 summarises a comparison of these simulators.
All three simulators can produce 3D stereoscopic images viewed through the microscope, and all can simulate most steps of cataract surgery. To date, simulators lack the ability to simulate suturing of the corneal incision. Both the MicrovisTouch and PhacoVision simulators are becoming commercially available, though the number of published articles on either device is very limited. At this time, the EYESi simulator is the only device that has been validated in peer-reviewed publications and is available on the market for cataract and vitreoretinal surgery training.12–23
Among the reviewed simulators, the MicrovisTouch simulator is the only device that incorporates a haptic (tactile) feedback interface. In a recently published survey, a majority of the participating ophthalmologists agreed that the integration of tactile feedback could provide a more realistic operative experience.24 Furthermore, the MicrovisTouch simulator is the only device that provides a fully virtual experience, including the instruments and the head and eye of the virtual patient. In contrast, both the PhacoVision and EYESi devices use a physical head and eye, which are immobile.
This review discusses studies currently available that have evaluated the use of simulators in cataract surgery training. As the state of current technology evolves, simulators must be explored for further research and development.
Methodology
A PubMed search was conducted using the following keywords: virtual simulator, virtual reality, cataract, phacoemulsification, education, training, and assessment. A total of 38 articles were evaluated and 10 were selected based on the exclusion of non-ophthalmic and non-cataract surgery articles. Only papers that provided adequate results to prove the validity of the simulator and the effect of the simulator on training and education were included.
Construct and concurrent validity studies
Five studies focused on proving that the simulator objectively differentiated between experienced and novice surgeons in terms of surgical proficiency. All studies examined the validity of the EYESi simulator except for one that evaluated the MicrovisTouch simulator (table 2).
Banerjee et al11 investigated the concurrent validity of capsulorhexis performance metrics (duration, number of capsular grabs per completed capsulorhexis, and circularity of the capsulorhexis) between the MicrovisTouch simulator (ImmersiveTouch) and live surgeries. Of the 12 postgraduate year-4 residents enrolled, only eight were scored for duration and number of grabs because of insufficient simulator data, and only four were scored for circularity of the capsulorhexis because of inadequate clarity of their cataract surgery videos. The study did not find a statistically significant correlation between the simulator and live surgeries for the duration and number-of-grabs metrics. It did, however, demonstrate significant concurrent validity for the circularity metric (p<0.05).
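The circularity metric can be illustrated with a brief sketch. The exact formula used by the MicrovisTouch software is not specified in this review; the definition assumed below is the common isoperimetric quotient 4πA/P², which equals 1 for a perfect circle and falls below 1 as the capsulorhexis becomes less round.

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Isoperimetric circularity: 1.0 for a perfect circle, <1 otherwise.

    Computed as 4*pi*A / P^2. This is a generic definition for
    illustration, not the metric reported by Banerjee et al.
    """
    return 4 * math.pi * area / perimeter ** 2

# A perfect circular rhexis of radius 2.5 mm scores exactly 1.0
r = 2.5
print(circularity(math.pi * r ** 2, 2 * math.pi * r))  # 1.0

# An elliptical rhexis (semi-axes 3.0 and 2.0 mm) scores below 1.0;
# its perimeter is estimated with Ramanujan's approximation
a, b = 3.0, 2.0
h = ((a - b) / (a + b)) ** 2
perim = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
print(circularity(math.pi * a * b, perim))
```

A rounder capsulorhexis thus maps to a score closer to 1, which is what makes the metric comparable between simulator recordings and surgical video.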
A study conducted by Mahr and Hodge15 showed construct validity of the anterior segment anti-tremor and forceps training modules of the EYESi simulator. A total of 15 participants were divided into two groups: 12 residents and three experienced surgeons. The experienced surgeons achieved statistically significantly better total score, total task time, and instrument-in-eye time for both modules. In addition, experienced surgeons performing the anti-tremor task achieved 76% more precise surgical outcomes, as measured by the out-of-tolerance percentage, with more consistent scores.
Privett et al16 evaluated the construct validity of capsulorhexis training modules of the simulator EYESi. Twenty-three participants were divided into two groups: 16 medical students and residents; seven experienced surgeons. The experienced surgeons showed statistically significant better scores on both the easy and medium levels of the module. The study demonstrated significant construct validity for the capsulorhexis module.
Selvander and Asman17 evaluated the construct validity of the capsulorhexis, hydro-manoeuvres, phacoemulsification, navigation, forceps, and cracking and chopping training modules of the EYESi simulator. Twenty-four participants were divided into two groups: 17 medical students and residents; seven experienced surgeons. Each participant performed three trials, with a video recording of the second trial evaluated for the capsulorhexis, hydro-manoeuvres, and phacoemulsification modules using the modified Objective Structured Assessment of Surgical Skills (OSATS) and Objective Structured Assessment of Cataract Surgical Skill (OSACSS) tools. The experienced surgeons achieved statistically significantly better simulator scores on the capsulorhexis, navigation, and forceps modules, with less pronounced score differences in the phacoemulsification and cracking and chopping modules. No difference in overall simulator score was found for hydro-manoeuvres. However, the alternative assessment tools, OSATS and OSACSS, demonstrated a significant difference between the two groups in the capsulorhexis, hydro-manoeuvres, and phacoemulsification modules. The study thus showed significant construct validity for these modules, with the hydro-manoeuvres module requiring the video evaluation tool. The same investigators were able to establish concurrent validity for the capsulorhexis module using the OSACSS and OSATS scoring systems in a study of 35 students.18
Training curriculum and surgical outcomes studies
Six studies were performed using the EYESi simulator to examine the outcomes of implementing the simulator in ophthalmology training programmes and its effect on the acquisition of the skill (table 3).
Selvander and Asman18 studied the learning curve of the capsulorhexis and cataract navigation modules of the EYESi surgical simulator. A total of 35 medical students were divided into two groups: group A (n=17) underwent 10 iterations of the cataract navigation training module followed by two iterations of the capsulorhexis module; group B (n=18) underwent 10 iterations of the capsulorhexis module followed by two iterations of the cataract navigation training module. In both groups the first 10 iterations were scored to observe the learning curve. The study showed significant improvement (p<0.05) and a plateau for the cataract navigation training module score (plateau at the third iteration), time with instrument insertion (plateau at the third iteration for both modules), and injured cornea area (plateau at the sixth iteration for capsulorhexis and the seventh iteration for the cataract navigation training module). On the other hand, no significant improvement or plateau was observed for either the overall capsulorhexis score or the injured lens area, even though the 10th iteration's score was significantly better than the first.
Feudner et al19 demonstrated that use of the EYESi virtual simulator improved capsulorhexis wet-lab scores in a study of 63 participants (31 medical students and 32 residents) randomised into a virtual reality group and a control group. Both groups completed a capsulorhexis in a porcine wet-lab at the beginning and at the end of the study; in between, the virtual reality group practised two training trials on the EYESi simulator. The capsulorhexis procedures in the wet-lab were video recorded for all participants (n=372 videos), and the videos were evaluated by a masked observer. Inter-rater reliability was assessed by correlating the ratings of randomly selected videos between observers. The virtual reality group showed a more consistent and significant improvement in median wet-lab capsulorhexis score compared with the control group (+3.67 vs +0.33).
In another study, Belyea et al20 retrospectively compared the performance of non-simulator-trained residents taught by a single attending ophthalmologist with a newer group of residents trained by the same attending using the EYESi simulator. Forty-two participants (17 in the simulator group vs 25 in the non-simulator group) completed a total of 592 phacoemulsification surgeries and were evaluated for mean phacoemulsification time, percentage phacoemulsification power, adjusted phacoemulsification time, complication rate, and complication grade (1–4, based on severity). The simulator group showed significantly better results for all of these parameters except the complication rate and complication grade, for which there were no significant differences. The study also showed that the simulator group had a significantly lower complication rate in cases performed during the second half of the year compared with those performed in the first half.
Pokroy et al21 also retrospectively investigated the incidence of posterior capsule rupture and the operation time for the first 50 phacoemulsification procedures of trainees before (2007–2008) and after (2009–2010) simulator implementation. A total of 20 trainees (10 in each group) performed 1000 cases. The groups showed no significant difference in the number of posterior capsular ruptures or in the overall operation time for the first 50 cases. However, the simulator group showed a significantly shorter median operating time in cases 11–50 (34 min vs 38 min).
Recently, Baxter et al22 evaluated the outcomes of implementing a 2-year intensive cataract surgery training programme for a total of three residents. The programme comprised 50 hours of virtual reality surgical simulator training (EYESi) and a minimum of 250 cataract surgeries performed as primary surgeon within the 2-year period (150 cases in the first 6 months). The average complication rate for cataract surgery in the first 6 months of training (after about 150 cases per trainee) was 1%, and after 1 year (>250 cases) it was 0.66%, significantly lower than complication rates previously published in the literature (3.77–7.17%; p<0.05).
Saleh et al23 investigated the efficiency of implementing a training programme established by the International Forum of Ophthalmic Simulation (IFOS) using the EYESi simulator. A total of 16 novice surgeons were scored on three single-handed tasks (cataract navigation training, cataract anti-tremor training, and capsulorhexis training) and one bimanual task (cataract cracking and chopping training) at both entry to and exit from the programme. The maximum total score was 400 (100 per task). A sequence of different tasks was also categorised and included in the structure of the programme, which included at least 6 hours of consultant supervision of each trainee. Comparison of entry and exit scores showed a significant improvement in the median scores for all tasks as well as the overall score (p<0.05).
Discussion
The fundamental step in implementing any simulator in training or research is establishing construct and/or concurrent validity. Although each validity test is uniquely defined, most simulator studies define construct validity as the ability of the simulator to differentiate between the performance of expert and novice surgeons.11 Concurrent validity, on the other hand, evaluates the similarity between a surgeon's performance in real surgery and in simulation.11 In other words, concurrent validity compares simulator performance with real surgical performance, whereas construct validity correlates simulator performance with the surgeon's level of experience.
An early study describing the PhacoVision simulator reported positive participant feedback in a qualitative survey.25 Such feedback suggests that the simulator is realistic in comparison with real surgery, but it cannot be considered proof of validation. The only published study of the MicrovisTouch simulator, by Banerjee et al,11 established concurrent validity for the circularity-of-the-capsulorhexis metric. However, that study could not evaluate the procedures of half of the active participants owing to inadequate clarity of the video recordings, mandating further investigation to strengthen the available results.
The majority of the published studies evaluating construct validity involved the EYESi simulator, a reflection of its availability on the market first as a vitreoretinal and later as a cataract surgery simulator. Two published studies mutually strengthened the construct validity of the forceps training module of the EYESi simulator.15,16 Similarly, two studies demonstrated the construct validity of the capsulorhexis module.16,17 Mahr and Hodge15 also established construct validity for the anti-tremor training module. The remaining modules (hydro-manoeuvres, phacoemulsification, navigation, and cracking and chopping) were investigated by Selvander and Asman17; theirs was the first study to show construct validity of the navigation module. On the other hand, the regular scoring system did not reveal differences between expert and novice surgeons sufficient to show construct validity of the remaining modules. However, a video scoring system (OSATS and OSACSS) showed a significant difference, supporting construct validity of the capsulorhexis, hydro-manoeuvres, and phacoemulsification modules. This may indicate the need for further improvement of the regular scoring system for the hydro-manoeuvres, phacoemulsification, and cracking and chopping training modules of the EYESi simulator. Furthermore, investigators may need to include the video scoring system in future studies as an additional validation tool, especially for the hydro-manoeuvres module.
The findings established through the validity studies of the EYESi simulator built the foundation for studying the outcomes of incorporating the simulator into an ophthalmology training programme. This led to many studies comparing the outcomes of supplementing the training programme with the simulator against those of the programme's previous structure. Simulators can improve the learning curve, increasing trainee confidence and skill acquisition, and thus advance the ultimate goal of improving patient outcomes.
Given the wide distribution of simulators among training programmes, the IFOS was established to share experience and help utilise simulators more efficiently.23 Saleh et al23 studied the efficiency of implementing this training programme using a series of modules already validated in previously published studies. The programme showed a significant improvement in the median scores for all tasks, in addition to the trainees' overall score.
Selvander and Asman18 recorded the pattern of the learning curve for the capsulorhexis and cataract navigation modules. While no plateau was seen for the overall capsulorhexis score or the injured lens area, the study showed a significant plateau for other parameters and modules, such as the cataract navigation training module, time with instrument insertion, and injured cornea area. This plateau helps estimate the number of trials the average novice surgeon needs to overcome the learning curve.
In addition to their incorporation in training and education, simulators provide a safe tool to evaluate the effect of the surgical environment on the performance of the trainee. For ethical reasons involving issues of patient safety, most of these studies are inapplicable in the real patient surgical setting. Simulators have been used to evaluate the role of fatigue,26 stereoacuity,27 surgeon distraction,28 non-dominant hand use,15,29 and surgeon use of β-blockers30 on surrogate surgical performance.
Future opportunities for surgical simulators
Collaboration between clinicians and simulation developers will further advance this technology. Improving the scoring system of the non-validated modules, incorporating tactile (haptic) feedback into the simulators that lack it, and adding suturing of the incision may prove helpful additions to surgical simulation technology. Furthermore, a tutor mode providing continuous feedback, perhaps with a virtual trainer highlighting actions that may lead to complications, could be a useful teaching aid.31 Simulators may also be used as an examination tool to assess competency at different stages of training.
The ultimate goal of simulator use is to improve the patient outcomes achieved by trainees. While retrospective reviews of complication rates and training performance before and after acquisition of a simulator may be useful, larger scale studies are needed to provide stronger evidence of efficacy in surgical training. Comparing different simulators and studying their effects relative to the wet-lab environment is another potential area of study.
Conclusion
Virtual simulators are a safe and effective tool for measuring trainee performance and differentiating skill level. Additionally, they may improve trainees' learning of surgical techniques and ultimately lead to better patient outcomes in cataract surgery. Future education and research opportunities must take advantage of technical improvements in simulation technology.
Acknowledgments
We would like to acknowledge Dr Deepak Edward for some of the earlier work, which led to this review.
References
Footnotes
- Contributors All authors were involved with the conception and design or analysis and interpretation of the data, as well as drafting the article or revising it critically for important intellectual content and final approval of the version to be published.
- Funding King Khaled Eye Specialist Hospital, National Eye Institute: 5R42EY018965-03.
- Competing interests PB's work was supported in part by ImmersiveTouch, Inc where he holds an appointment.
- Provenance and peer review Commissioned; externally peer reviewed.