The Pittsburgh group analysed its RPD cases over a nine-year period, across three phases of surgeons:
(1) those with no mentorship or curriculum; (2) those with mentorship but no curriculum; and (3) those
with mentorship who underwent the robotic curriculum. The surgeons in the third category, despite having
performed fewer operations, had shorter operative times, less blood loss, a lower transfusion rate, fewer
complications and a shorter length of hospital stay[49].
Despite these examples, more data are required to determine the optimal metric for assessing the
minimum number of cases needed to achieve competency and acceptable outcomes. We would
advocate routine and regular video assessment of procedures to demonstrate surgical proficiency, in
addition to patient outcome analysis. Multi-institutional, international registries are good sources of data
and participation in them should be encouraged.
Machine learning in robotic training
Artificial intelligence (AI) can be defined as the development of computer systems able to perform
tasks that normally require human intelligence. Machine learning (ML) is a subset of AI in which computer
systems can learn and adapt using statistical models to analyse data patterns[50]. The applications of both AI
and ML are exponentially increasing in medicine, yet their use in surgery is still in its infancy. However, the
arrival of more complex algorithms and higher-powered computing has recently allowed for an increase in
the use of ML in surgery[51,52], and ML is expected to revolutionise the operating theatre and surgical
training. The increasingly widespread use of MIS and robotic platforms in surgery has the potential to
provide a rich dataset of surgical videos for analysis. However, the challenges of applying ML algorithms to
surgical video analytics are worth noting and have limited their effective use to date[53]. There is variability in
image quality, movement and smoke artefacts, and changing objects within the visual field, and anatomical
structures are often not clearly visualised because they lie within, or are covered by, other tissues. Despite these inherent
difficulties, more recently, groups have evaluated the use of ML algorithms in surgery with encouraging
results. However, with small datasets, the accuracy of current algorithms must be interpreted with caution.
Standardising robotic training and robotic procedures will reduce some of this intra-operative variability,
thus enhancing the accuracy of ML tools. The use of automated video analytics can improve surgical
education and enable self-directed learning both within and outside the operating room. The ethics of using
surgical videos for data analysis and the creation of ML algorithms remain an important issue for discussion
by international surgical bodies. Data protection laws provide a framework to prevent the misuse of data by
healthcare providers and technology companies, but with the continued evolution of AI in surgery, regulators
must adapt to ensure that collaborations between surgeons and industry obtain
appropriate patient consent.
The groundwork for ML in operative interpretation has been surgical phase recognition. In this task, a
dataset of MIS operative videos is input into the system being trained. Experienced surgeons annotate
these videos with the operative phase or procedural step, and the annotated videos then form the labelled
training data used to create the ML model. The created model aims to automatically assign the
surgical phase in further operative videos[54]. The more data the model receives, the more accurate the
created algorithm.
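In essence, this workflow is supervised classification: features extracted from video frames are mapped to the phase labels provided by expert annotators. The sketch below is purely illustrative and is not the method used in the cited studies; the synthetic dataset, the frame-level feature vectors and the choice of a random forest classifier are all assumptions made for the example.

```python
# Illustrative sketch of supervised surgical phase recognition (assumed setup).
# Real pipelines extract frame features with a pretrained network; here the
# features and the surgeon-annotated phase labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder "labelled training data": 5,000 frames, each described by a
# 512-dimensional feature vector and annotated with one of seven phases.
features = rng.normal(size=(5000, 512))
phases = rng.integers(0, 7, size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    features, phases, test_size=0.2, random_state=0)

# Fit a classifier that maps frame features to the annotated phase.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Report how often the model assigns the correct phase to held-out frames.
predictions = model.predict(X_test)
print(f"Phase recognition accuracy: {accuracy_score(y_test, predictions):.1%}")
```

In practice, the accuracies reported below come from far richer models, typically deep neural networks trained on the video frames themselves, but the underlying principle of learning from expert-labelled examples is the same.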
Most work published to date has looked at distinct operative phases during laparoscopic
cholecystectomy, given that there are several large video datasets available to access[55]. Jin et al. created an
ML algorithm after analysing 107 cholecystectomy videos. This successfully assigned the seven operative
steps of the procedure with 92.4% accuracy[56]. Other researchers have similarly created algorithms that have
correctly recognised the surgical phases in sleeve gastrectomy (mean accuracy 82% ± 4%)[57], cataract surgery
(mean accuracy 96.5%)[58], laparoscopic sigmoidectomy (mean accuracy 91.9%)[59] and endoscopic myotomy
[mean accuracy 87.6% (95%CI: 87.4%-87.9%)][60]. By combining automated surgical phase recognition and
the routine recording of an individual surgeon or trainee’s procedures, operative efficiency and constructive