Page 12 of 14 Farinha et al. Mini-invasive Surg 2023;7:38 https://dx.doi.org/10.20517/2574-1225.2023.50
of the model’s use on users was shown, no impact evaluation of its use was addressed outside the study
population. Furthermore, no comparison between groups of users and non-users of TMs was undertaken,
nor was an analysis of relevant clinical outcomes performed. All these observations make it very difficult to
gather evidence supporting the decision to integrate these TMs into PN training programs.
Several fundamental flaws pervaded the reviewed studies: considerable heterogeneity in the materials
used to build the TMs, a lack of comparisons between the different models, and an absence of objective
binary metrics demonstrating skill improvement. Although cost was described in some studies, no cost-
effectiveness data were reported, and the level of evidence to support their use for training purposes was
weak. All these reasons preclude a recommendation for the adoption of these TMs in PN training programs.
Since TMs are a tool for delivering a metric-based training curriculum, future research should focus on the
improvement of the models, and the starting point should be the development of objective, transparent, and
fair procedure-specific metrics[42]. A clear definition of expertise criteria, considering the performance level
of the surgeons and not the number of surgeries performed, should be a main concern. Kane’s framework
for study validation should be used, and comparisons should be made between models and between study
groups trained with and without the different TMs. Improvements will only emerge from the conjoined
efforts of surgeons, human factor engineers, training experts, and behavioral scientists[43].
CONCLUSION
This review substantiates the absence of well-designed validation studies of PN TMs and the inherently
low level of scientific evidence supporting their use. No RCTs or impact evaluations were found to support
the adoption of TMs in PN training curricula.
APPENDIX
Face validity: opinions, including of non-experts, regarding the realism of the simulator.
Content validity: opinions of experts about the simulator and its appropriateness for training.
Construct validity: (A) one group: ability of the simulator to assess and differentiate between the level of
experience of an individual or group measured over time; (B) between groups: ability of the simulator to
distinguish between different levels of experience.
Concurrent validity: comparison of the new model against the older, gold-standard model.
Predictive validity: correlation of simulator performance with operating room performance.
DECLARATIONS
Authors’ contributions
Study concept and design, analysis and interpretation, drafting of the manuscript, statistical analysis,
administrative, technical or material support: Farinha RJ, Gallagher AG
Acquisition of data: Farinha RJ, Mazzone E, Paciotti M
Critical revision of the manuscript for important intellectual content: Farinha RJ, Breda A, Porter J, Maes K,
Van Cleynenbreugel B, Vander Sloten J, Mottrie A, Gallagher AG
Supervision: Gallagher AG
Farinha RJ had full access to all the data in the study and took responsibility for the integrity of the data and
the accuracy of the data analysis.
All authors participated in the study, the writing, and the approval of the manuscript for submission, and
accept accountability in adherence to the International Committee of Medical Journal Editors requirements.