Notwithstanding the initial encouraging findings, this approach proved not to be accurate enough, owing to the high heterogeneity of colors displayed in the endoscopic view and the absence of clear intraoperative landmarks providing a precise spatial orientation along the three main axes. In the current year, we began to explore the potential applications of artificial intelligence (AI) for urologic in vivo surgery. Our new approach consists of a two-step automatic system that aligns a 3D virtual model of the patient’s prostate with the endoscopic 2D images at the end of the extirpative phase during RARP. For each of the two steps, a specific convolutional neural network (CNN) was developed. Briefly, the first CNN outputs the catheter location and the rotation on the z axis by identifying the anchor point. The second CNN returns the antero-posterior rotation on the x axis. Their combined results allow the actual overlay to be performed. Our findings are promising and were presented during the last edition of Virtual EAU 2021, showing that the introduction of CNNs allows 3D virtual images to be correctly overlaid in a completely automatic manner. The correct identification of extracapsular extension at the level of the resection bed can potentially reduce positive surgical margin rates, with a subsequent oncological advantage for patients with locally advanced disease[13].
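To make the pipeline concrete, the sketch below shows how such a two-step system could be wired together; the network architectures, tensor shapes, and the way the two predicted rotations are combined into a single overlay pose are our assumptions for illustration, not the implementation presented at EAU.

```python
# Illustrative sketch of a two-step CNN pipeline that registers a 3D prostate
# model onto 2D endoscopic frames. Architectures, shapes, and the pose
# composition are assumed for demonstration, not the published implementation.
import torch
import torch.nn as nn


class AnchorNet(nn.Module):
    """Step 1: predict catheter anchor point (x, y) and rotation about z."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)  # (x, y, theta_z)

    def forward(self, frame):
        return self.head(self.features(frame))


class TiltNet(nn.Module):
    """Step 2: predict antero-posterior rotation about the x axis."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 1)  # theta_x

    def forward(self, frame):
        return self.head(self.features(frame))


def overlay_pose(frame, anchor_net, tilt_net):
    """Combine both CNN outputs into the pose used to render the 3D model."""
    x, y, theta_z = anchor_net(frame)[0]
    theta_x = tilt_net(frame)[0, 0]
    return {"anchor": (x.item(), y.item()),
            "rot_z": theta_z.item(), "rot_x": theta_x.item()}


if __name__ == "__main__":
    frame = torch.rand(1, 3, 256, 256)  # one endoscopic frame (untrained demo)
    print(overlay_pose(frame, AnchorNet(), TiltNet()))
```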
As shown in the recent literature, the application of AI in uro-oncology has gained wide diffusion[14]; despite this, its use during live surgeries is still limited to anecdotal experiences[15]. The intraoperative support of machine learning (ML) for autonomous camera positioning was promisingly explored by analyzing data obtained from instrument kinematics, laparoscopic video, and surgeon eye-tracking[15]. On the contrary, the application of ML to more complex tasks (e.g., suturing, knot-tying, and tissue dissection) is more difficult to achieve. As recently summarized by Ma et al.[16], a robot must be able to perform three different actions to complete these surgical tasks: it must “see” (vision recognition), “think” (task planning), and “act” (task execution).
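This decomposition can be pictured as a simple control loop; the minimal sketch below uses hypothetical names chosen only to mirror the see/think/act vocabulary and does not correspond to any existing surgical robot API.

```python
# Minimal sketch of the "see / think / act" decomposition for an autonomous
# surgical task. All names and data are illustrative.
from dataclasses import dataclass


@dataclass
class Observation:
    landmarks: list   # output of vision recognition ("see")


@dataclass
class Plan:
    waypoints: list   # output of task planning ("think")


def see(raw_frame) -> Observation:
    """Vision recognition: turn raw pixels into a scene description."""
    return Observation(landmarks=[("needle", (10, 20)), ("tissue", (30, 40))])


def think(obs: Observation) -> Plan:
    """Task planning: derive a motion plan from the recognized scene."""
    return Plan(waypoints=[pos for _, pos in obs.landmarks])


def act(plan: Plan) -> None:
    """Task execution: send the planned motions to the manipulator."""
    for wp in plan.waypoints:
        print(f"moving instrument to {wp}")


if __name__ == "__main__":
    act(think(see(raw_frame=None)))  # one pass of the see-think-act loop
```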
Therefore, even if this field of research seems to be the most appealing, we need to consider the potential of AI-driven surgery with a wider horizon in mind[16,17].
Starting from the preoperative setting, as shown by Auffenberg et al.[18], specifically developed ML algorithms can help the surgeon select candidates for the different treatments (e.g., active surveillance, radical prostatectomy, radiation therapy, and androgen-deprivation therapy) by analyzing data from the electronic health records.
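In broad terms, this casts treatment selection as multiclass classification over EHR-derived features; the toy sketch below illustrates the idea with assumed features, synthetic labels, and a generic scikit-learn model, not the algorithm of Auffenberg et al.

```python
# Illustrative sketch of treatment recommendation as multiclass classification
# over EHR-derived features. Features, labels, and model choice are assumed
# for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy EHR features per patient: [age, PSA, Gleason score, clinical T stage]
X = rng.random((200, 4)) * [40, 50, 5, 4] + [40, 0, 5, 1]
TREATMENTS = ["active surveillance", "radical prostatectomy",
              "radiation therapy", "androgen deprivation"]
y = rng.integers(0, len(TREATMENTS), size=200)  # synthetic labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Predicted treatment probabilities for one synthetic patient
patient = [[66, 8.4, 7, 2]]
for treatment, p in zip(TREATMENTS, clf.predict_proba(patient)[0]):
    print(f"{treatment}: {p:.2f}")
```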
Furthermore, this technology may also be applied to improve surgical training: by extracting data from the da Vinci console, dedicated ML can be developed to automatically analyze the trainees’ movements, providing a personalized evaluation that highlights their strongest and weakest technical abilities[19]. Likewise, ML-based analysis applied to automate the segmentation of anatomical landmarks across 12 different surgical steps of RARP showed that the ML-based model annotated the boundaries better than human segmentation[20].
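One common way to score how well annotated boundaries agree is the Dice overlap coefficient; the short sketch below illustrates the metric on synthetic masks (the cited study’s actual evaluation protocol may differ).

```python
# Sketch of scoring segmentation agreement with the Dice coefficient.
# The masks here are synthetic, for illustration only.
import numpy as np


def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2*|A & B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())


# Two synthetic binary masks for one anatomical landmark
model_mask = np.zeros((64, 64), dtype=bool)
model_mask[20:40, 20:40] = True
reference_mask = np.zeros((64, 64), dtype=bool)
reference_mask[22:42, 22:42] = True

print(f"Dice overlap: {dice(model_mask, reference_mask):.3f}")
```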
Looking to the future, the further development of robotic technology towards automation will enhance surgical outcomes by improving the workflow and minimizing repetitive or mundane tasks[21].
However, the most challenging aspect of this technology is the ability to reproduce the sophistication of
human movements and therefore to reach complete autonomy.