
deep learning (DL), a subset of AI that uses neural networks structured in multiple layers of interconnected nodes[1]. Convolutional neural networks (CNNs), a type of DL algorithm, have shown exceptional proficiency in analyzing medical images, and more recently, transformers have further extended the capabilities of AI models. By using a self-attention mechanism to understand context, transformers enable parallel processing of data and incorporate positional encoding to maintain the order of input sequences. They power state-of-the-art models such as GPT, Llama, and Gemini, significantly advancing applications in text generation, classification, and translation, and extending to computer vision and speech recognition.
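To make the mechanism concrete, the following sketch (a simplified single-head illustration in NumPy, with toy dimensions and random weights that are assumptions for demonstration, not drawn from any cited model) shows how self-attention mixes information across all positions of a sequence in parallel, after sinusoidal positional encoding is added to preserve token order.

```python
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encoding, so the model can recover token order."""
    pos = np.arange(seq_len)[:, None]                      # (seq_len, 1)
    i = np.arange(d_model)[None, :]                        # (1, d_model)
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                  # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])                  # odd dims: cosine
    return pe

def self_attention(x: np.ndarray, wq, wk, wv) -> np.ndarray:
    """Scaled dot-product self-attention: every position attends to all others at once."""
    q, k, v = x @ wq, x @ wk, x @ wv                       # query/key/value projections
    scores = q @ k.T / np.sqrt(k.shape[-1])                # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys
    return weights @ v                                      # context-mixed output

# Toy input: a sequence of 5 tokens, each embedded in 16 dimensions.
rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (5, 16): each output row is a weighted mix of all input rows
```

Because the attention weights are computed for all positions simultaneously, the whole sequence is processed in parallel rather than step by step, which is the property that distinguishes transformers from earlier recurrent architectures.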

Nearly 30% of the world’s data are produced by the healthcare sector, with 80% of those data being unstructured. The average American hospital is estimated to produce 50 petabytes of data every year, double the size of the Library of Congress[2]. This data-rich environment presents a unique opportunity for AI.

Surgical automation aims to develop devices capable of performing surgery with varying degrees of autonomy, without human intervention[3]. DL algorithms optimize surgical strategies based on pre- and intraoperative patient data, leveraging predictive models to anticipate complications and adapt surgical plans dynamically. Applications of these technologies in surgical robotics include systems for image-guided navigation, autonomous instrument control, and real-time decision support.
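As a minimal illustration of this predictive-modeling idea, the sketch below fits a logistic regression to synthetic pre- and intraoperative features and outputs a per-patient complication-risk score. The feature set, data, and risk rule are invented for demonstration and do not come from any system in the cited literature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for pre-/intraoperative features (illustrative only):
# age, BMI, estimated blood loss (mL), operative time (min), ASA class.
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.normal(60, 12, n),     # age (years)
    rng.normal(27, 5, n),      # BMI
    rng.normal(250, 100, n),   # estimated blood loss (mL)
    rng.normal(180, 45, n),    # operative time (min)
    rng.integers(1, 5, n),     # ASA physical status class
])
# Fabricated risk rule, purely to give the toy data learnable structure.
logit = 0.03 * (X[:, 0] - 60) + 0.004 * (X[:, 2] - 250) + 0.4 * (X[:, 4] - 2)
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # binary complication label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]           # per-patient complication risk
print(f"AUC: {roc_auc_score(y_test, risk):.2f}")   # discrimination on held-out data
```

In a real intraoperative setting, such a score would be recomputed as new measurements arrive, which is what allows a surgical plan to be adapted dynamically rather than fixed preoperatively.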

While the integration of AI-based frameworks in surgical robotics progresses at a guarded pace (self-learning systems are still striving to achieve clinically acceptable confidence levels), mechanical advances in surgical devices have facilitated the adoption of automation in the operating room (OR).

Current robotic developments focus on creating a streamlined operating space; guidance cameras, robotic arms, and attachments can be stored in a central console and manipulated from a single site. Robotic arms now offer movement with up to 6 or 7 degrees of freedom (DOF) and are integrated with imaging derived from preoperative annotations, 3D field-mapping cameras, and traditional O-arms to precisely guide operative trajectories[4,5]. These innovations have enabled high-accuracy implant placement and corrections while reducing radiation exposure and the need to transport patients between different pieces of equipment. Haptic feedback and self-stabilizing arms have further improved safety outcomes for patients. In addition to reducing peak applied force, instrument collision, and the risk of undesired tissue penetration, tactile sensation coupled with self-stabilization reduces surgeon fatigue[6-8]. As automated systems work toward more complex procedures, advances in both the hardware and software layers are necessary for surgical automation to materialize.
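The degrees-of-freedom terminology has a concrete kinematic meaning: each DOF corresponds to an independently actuated joint, and the tool-tip pose follows from chaining one rigid-body transform per joint. The sketch below illustrates generic forward kinematics using made-up Denavit-Hartenberg parameters for a hypothetical 6-DOF arm; it does not reflect the geometry of any commercial surgical system.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint, standard Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain per-joint transforms to get the tool-tip pose in the base frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 6-DOF arm: (d, a, alpha) per joint; link lengths in metres.
dh_params = [
    (0.30, 0.00,  np.pi / 2),
    (0.00, 0.25,  0.0),
    (0.00, 0.20,  0.0),
    (0.15, 0.00,  np.pi / 2),
    (0.00, 0.00, -np.pi / 2),
    (0.10, 0.00,  0.0),
]
angles = np.deg2rad([10, 30, -20, 45, 15, 0])   # one joint angle per DOF
pose = forward_kinematics(angles, dh_params)
print("Tool-tip position (m):", np.round(pose[:3, 3], 3))
```

Image-guided navigation closes the loop around this computation: a planned trajectory from preoperative imaging is expressed in the base frame, and the controller solves for joint angles that place the tool tip along it.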

               History and stages of surgical automation
The concepts of autonomous and robotic surgery go hand-in-hand and have progressed significantly together since their inception 40 years ago. The first surgical robot used in the OR was the programmable universal machine for assembly (PUMA) 200 robotic surgical arm in 1985, which performed precise neurosurgical biopsies[9]. The 1990s introduced the PROBOT, designed for prostate surgery, and the Robodoc Surgical System, which enhanced hip replacement procedures[10,11]. With the continued introduction of robotics in the surgical setting, emphasis was placed on developing a robot capable of implementing the “master-slave” framework, in which an operator controls the movements of the machine from a remote location. The first iteration of this teleoperation framework paired the ZEUS robotic system with SOCRATES, allowing a cholecystectomy to be performed on a patient located in France by a surgeon in New York[12]. Expanding on the capabilities of the ZEUS robotic system, Intuitive
                                            [12]