﻿<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-id journal-id-type="nlm-ta">Plast Aesthet Res</journal-id>
      <journal-id journal-id-type="publisher-id">PAR</journal-id>
      <journal-title-group>
        <journal-title>Plastic and Aesthetic Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">2349-6150</issn>
      <publisher>
        <publisher-name>OAE Publishing Inc.</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.20517/2347-9264.2025.121</article-id>
      <article-categories>
        <subj-group>
          <subject>Review</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Smart hands: robotic systems as surgical collaborators in plastic and reconstructive surgery</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author" corresp="yes">
          <name>
            <surname>Novotny</surname>
            <given-names>Matthias Johannes</given-names>
          </name>
          <xref ref-type="corresp" rid="cor1" />
          <contrib-id contrib-id-type="orcid">https://orcid.org/0009-0005-8214-7288</contrib-id>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Fast</surname>
            <given-names>Anna</given-names>
          </name>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Radtke</surname>
            <given-names>Christine</given-names>
          </name>
          <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-6877-5038</contrib-id>
        </contrib>
      </contrib-group>
      <aff id="I">Department of Plastic, Reconstructive and Aesthetic Surgery, Medical University of Vienna, Vienna 1090, Austria.</aff>
      <author-notes>
        <corresp id="cor1">Correspondence to: Dr. Matthias Johannes Novotny, Department of Plastic, Reconstructive and Aesthetic Surgery, Medical University of Vienna, Vienna 1090, Austria. E-mail: <email>matthias.novotny@icloud.com</email></corresp>
        <fn fn-type="other">
          <p>
            <bold>Received:</bold> 20 Nov 2025 | <bold>First Decision:</bold> 8 Jan 2026 | <bold>Revised:</bold> 15 Jan 2026 | <bold>Accepted:</bold> 12 Feb 2026 | <bold>Published:</bold> 13 May 2026</p>
        </fn>
        <fn fn-type="other">
          <p>
            <bold>Academic Editor:</bold> Raffaele Rauso | <bold>Copy Editor:</bold> Ting-Ting Hu | <bold>Production Editor:</bold> Ting-Ting Hu</p>
        </fn>
      </author-notes>
      <pub-date pub-type="ppub">
        <year>2026</year>
      </pub-date>
      <pub-date pub-type="epub">
        <day>13</day>
        <month>5</month>
        <year>2026</year>
      </pub-date>
      <volume>13</volume>
      <elocation-id>15</elocation-id>
      <permissions>
        <copyright-statement>© The Author(s) 2026.</copyright-statement>
        <license xlink:href="https://creativecommons.org/licenses/by/4.0/">
          <license-p>© The Author(s) 2026. <bold>Open Access</bold> This article is licensed under a Creative Commons Attribution 4.0 International License (<uri xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</uri>), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.</license-p>
        </license>
      </permissions>
      <abstract>
        <p>Robotic systems, artificial intelligence (AI), and augmented reality (AR) are increasingly reshaping plastic and reconstructive surgery by enhancing precision, reproducibility, and surgical decision-making. While conventional robotic platforms such as the Da Vinci system have demonstrated clear benefits in minimally invasive surgery, their application in microsurgery has remained limited due to instrument size and insufficient submillimetric control. Recently developed microsurgical platforms, including the Symani® Surgical System (Medical Microinstruments, Pisa, Italy) and the MUSA® system (MicroSure B.V., Eindhoven, The Netherlands), address these limitations through motion scaling, tremor suppression, and compatibility with open-field reconstruction. Early clinical studies report successful lymphatic and microvascular anastomoses, improved ergonomics, and reduced surgeon fatigue, marking a significant step toward routine robot-assisted microsurgery. In parallel, advances in AI and machine learning enable data-driven surgical planning, perforator mapping, flap selection, complication prediction, and automated documentation. Large language models further support clinical workflows through structured documentation and patient communication, while AR and virtual reality enhance anatomical orientation, intraoperative navigation, and surgical training. Despite these advances, challenges remain, including heterogeneous data quality, algorithmic bias, limited interoperability, and evolving regulatory frameworks. Addressing these issues is essential to ensure safe and equitable implementation. Future developments are expected to converge into an integrated, AI-augmented surgical ecosystem combining preoperative planning, robotic execution, and outcome-based learning. Rather than replacing surgical expertise, these technologies aim to augment human skill and support a more personalized and efficient reconstructive practice.</p>
      </abstract>
      <kwd-group>
        <kwd>Robotic-assisted microsurgery</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>augmented reality</kwd>
        <kwd>surgical planning</kwd>
        <kwd>microsurgical robotics</kwd>
        <kwd>large language models</kwd>
        <kwd>precision surgery</kwd>
        <kwd>reconstructive surgery</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec1">
      <title>INTRODUCTION</title>
      <sec id="sec1-1">
        <title>Background</title>
        <p>Plastic and reconstructive surgery has historically been shaped by a blend of manual precision, anatomical expertise, and creative problem-solving. From the foundational work of Gillies and Millard to modern advances in microsurgical free flap reconstruction and lymphatic surgery, the specialty has relied on the surgeon’s technical skill and tactile feedback to manipulate tissues at the finest level<sup>[<xref ref-type="bibr" rid="B1">1</xref>]</sup>. As surgical procedures become increasingly complex and patient expectations rise, the demand for enhanced precision, reduction of surgeon fatigue, and reproducibility has driven the field to explore new technological horizons.</p>
        <p>One of the most transformative developments in modern surgery has been the development of robotic surgical systems. The Da Vinci Surgical System, first introduced more than two decades ago, advanced minimally invasive surgery by offering high-definition three-dimensional (3D) visualization, tremor reduction, and improved instrument articulation. It is now widely established in urologic, gynecologic, and general surgical procedures<sup>[<xref ref-type="bibr" rid="B2">2</xref>]</sup>. However, despite these successes, the Da Vinci platform has achieved only limited integration into plastic and reconstructive surgery. Its large footprint, high costs, and relatively coarse instrument scale restrict its applicability for microsurgical procedures, which require delicate tissue handling and fine motor precision<sup>[<xref ref-type="bibr" rid="B3">3</xref>]</sup>. Nevertheless, the system has demonstrated the potential benefits of robotic assistance - improved control, decreased surgeon fatigue, and novel opportunities for visualization and access - and thereby highlights the need to explore next-generation robotic technologies tailored to the unique demands of reconstructive surgery.</p>
        <p>Following these early developments, newer robotic systems - such as the Symani® Surgical System (Medical Microinstruments, Italy) and MicroSurgical Assistant (MUSA, MicroSure, Netherlands) - have been specifically designed for microsurgical applications. These externally mounted systems provide submillimetric motion scaling, real-time tremor suppression, and compatibility with traditional microsurgical workflows. They enable high-precision tasks, such as vascular and lymphatic anastomoses, that are beyond the resolution of conventional robotic platforms<sup>[<xref ref-type="bibr" rid="B4">4</xref>,<xref ref-type="bibr" rid="B5">5</xref>]</sup>. Unlike other surgical robotic devices, which are optimized for cavity-based surgeries, these next-generation systems are tailored for superficial, open-field reconstructions, making them ideal for plastic and reconstructive surgery applications<sup>[<xref ref-type="bibr" rid="B6">6</xref>]</sup>.</p>
        <p>Concurrently, advances in artificial intelligence (AI) have introduced new tools for data-driven surgical planning, risk stratification, and intraoperative decision support. Machine learning (ML) algorithms trained on large clinical datasets can assist with tasks such as flap design, perforator identification, and outcome prediction<sup>[<xref ref-type="bibr" rid="B7">7</xref>]</sup>. Large Language Models (LLMs) such as Generative Pre-trained Transformer (GPT)-4 bring natural language understanding to the surgical setting, supporting intraoperative consultation, automated documentation, and patient-specific communication<sup>[<xref ref-type="bibr" rid="B8">8</xref>]</sup>. These AI tools, when integrated with robotic platforms, promise the creation of intelligent surgical ecosystems - environments where human expertise is enhanced by real-time computational insight<sup>[<xref ref-type="bibr" rid="B9">9</xref>]</sup>.</p>
        <p>Plastic surgery, with its emphasis on individualization, fine motor control, and multidisciplinary collaboration, stands at a pivotal stage in this technological evolution. As robotic systems progress from concept to clinical implementation and AI becomes more accessible, the specialty must decide how to responsibly integrate these tools into education, clinical care, and research. While platforms such as Da Vinci have proven the viability of robotics in high-volume surgeries, microsurgical platforms (e.g., Symani® and MUSA) demonstrate that plastic surgery can develop its own robotic identity, tailored to its unique needs<sup>[<xref ref-type="bibr" rid="B4">4</xref>-<xref ref-type="bibr" rid="B6">6</xref>]</sup>.</p>
        <p>This article is part of the <italic>Plastic and Aesthetic Research</italic> Special Issue “A New Frontier in Plastic Surgery - From Robotics to LLMs: An Expanding AI Landscape.” It explores the historical trajectory, current status, and future direction of robotic and AI-based technologies in plastic and reconstructive surgery, emphasizing their potential to reshape operative precision, surgical training, and patient engagement in the years ahead.</p>
      </sec>
      <sec id="sec1-2">
        <title>Methods</title>
        <p>This article draws on a narrative literature review supplemented by an expert perspective on clinical reports, feasibility studies, and technological developments in surgical robotics and AI. Publications indexed in PubMed and Scopus between 2015 and 2025 were examined, with a focus on robot-assisted microsurgery, AI-guided surgical navigation, LLMs for clinical documentation, and augmented reality (AR) as an adjunct in operative visualization. In addition, case studies from pioneering institutions using the Symani® Surgical System, MUSA, and early AI-integrated platforms were assessed.</p>
      </sec>
    </sec>
    <sec id="sec2">
      <title>AI-DRIVEN SURGICAL PLANNING AND OUTCOME PREDICTION</title>
      <sec id="sec2-1">
        <title>ML in preoperative decision-making</title>
        <p>The preoperative phase of plastic and reconstructive surgery is critical in determining the success of treatment. ML algorithms can support this stage by analyzing complex, patient-specific risk profiles, guiding the selection of suitable reconstructive techniques, and anticipating intraoperative complications.</p>
        <p>Hassan <italic>et al</italic>. (2023) developed a predictive model to assess skin flap necrosis after mastectomy using clinical variables. The model attained an Area Under the Curve (AUC) of 0.70, indicating moderate predictive performance and highlighting the importance of risk factors such as smoking, body mass index (BMI), and diabetes<sup>[<xref ref-type="bibr" rid="B10">10</xref>]</sup>. This is particularly relevant in the context of breast reconstruction, where flap vitality is of pivotal significance.</p>
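        <p>Illustratively, a risk model of this kind reduces to a logistic classifier over a handful of clinical covariates, evaluated by its AUC. The following is a minimal, self-contained sketch on synthetic data; the covariates, coefficients, and resulting AUC are illustrative and do not come from the cited study.</p>

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=300):
    """Plain batch gradient descent for logistic regression."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            gw = [gj + err * xj for gj, xj in zip(gw, xi)]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def auc(scores, labels):
    """Mann-Whitney AUC: probability a positive case outranks a negative one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Synthetic cohort: covariates are smoking status, scaled BMI, diabetes.
random.seed(0)
X, y = [], []
for _ in range(400):
    smoker = 1.0 if random.random() > 0.7 else 0.0
    bmi = random.gauss(27.0, 4.0) / 10.0
    diabetic = 1.0 if random.random() > 0.85 else 0.0
    risk = sigmoid(-4.0 + 1.2 * smoker + 0.8 * bmi + 1.0 * diabetic)
    X.append([smoker, bmi, diabetic])
    y.append(1 if risk > random.random() else 0)

w, b = train_logistic(X, y)
scores = [sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) for xi in X]
print("AUC:", round(auc(scores, y), 2))
```

An AUC of 0.70, as reported by Hassan et al., indicates moderate discrimination: the model ranks a randomly chosen necrosis case above a randomly chosen uncomplicated case about 70% of the time.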
        <p>In the domain of microvascular reconstruction of the head and neck region, ML methods have demonstrated high accuracy in predicting postoperative complications. Tighe <italic>et al</italic>. (2022) applied ML-based risk adjustment to enhance the classification of free flap loss, thereby demonstrating a clinically significant enhancement in decision accuracy when compared to conventional risk models<sup>[<xref ref-type="bibr" rid="B11">11</xref>]</sup>.</p>
        <p>In their systematic review, Kapila <italic>et al</italic>. (2024) identified “preoperative planning” as one of the six main domains for the use of AI in microsurgery. This encompasses the selection of appropriate flaps, perforator analysis based on imaging, and the automatic segmentation of anatomical structures<sup>[<xref ref-type="bibr" rid="B12">12</xref>]</sup>.</p>
        <p>Park <italic>et al</italic>. (2024) describe the use of deep learning-based image analysis from computed tomography (CT) and magnetic resonance imaging (MRI) data to model individual surgical access routes for complex facial reconstructions, thereby enabling patient-centered planning<sup>[<xref ref-type="bibr" rid="B13">13</xref>]</sup>.</p>
      </sec>
      <sec id="sec2-2">
        <title>Predictive modeling for surgical outcomes</title>
        <p>Predictive modeling is a method of quantifying the potential outcomes of surgical procedures, including complications, revision rates and healing progression. This approach constitutes a fundamental element of personalized medicine.</p>
        <p>The concept of “human <italic>vs</italic>. machine” was investigated by Duran <italic>et al</italic>. (2025) in a comparative study in which GPT-4 and Gemini were prompted with plastic surgery case material and their outputs compared with clinical decisions on realistic case vignettes. The models demonstrated comparable accuracy for standardized questions and exhibited considerable potential for facilitating decision-making<sup>[<xref ref-type="bibr" rid="B14">14</xref>]</sup>.</p>
        <p>Huang <italic>et al</italic>. (2024) developed an ML model with the objective of predicting donor site complications after deep inferior epigastric perforator (DIEP) flap retrieval. The model demonstrated an accuracy of 82%, facilitating the identification of individual risk factors at an early stage and the planning of preventive measures<sup>[<xref ref-type="bibr" rid="B15">15</xref>]</sup>.</p>
        <p>A narrative review by Mansoor and Ibrahim (2025) underlines the capacity of AI to assimilate postoperative outcome data into future recommendations through continuous learning. This facilitates the dynamic adaptation of therapy algorithms to actual clinical courses and promotes the principle of the “learning healthcare system”<sup>[<xref ref-type="bibr" rid="B9">9</xref>]</sup>.</p>
        <p>In the study undertaken by Kapila <italic>et al</italic>. (2024), an extensive classification of diverse outcome-related models was conducted, with the emphasis placed on the noteworthy utility of AI applications in the assessment of flap vitality, wound healing disorders and hospitalization duration<sup>[<xref ref-type="bibr" rid="B12">12</xref>]</sup>.</p>
      </sec>
      <sec id="sec2-3">
        <title>Integrating patient data for personalized treatment plans</title>
        <p>The future of reconstructive surgery depends on the integration of multiple data sources: patient history, diagnostic imaging, intraoperative sensor data, and postoperative courses are increasingly being combined into intelligent, predictive systems.</p>
        <p>In their systematic review, Kiwan <italic>et al</italic>. (2024) analyzed over 30 current studies on the integration of electronic health records (EHRs), imaging procedures and intraoperative data into AI-supported systems. This combination enables individualized, evidence-based treatment planning with greater precision and reduced variability<sup>[<xref ref-type="bibr" rid="B16">16</xref>]</sup>.</p>
        <p>The authors also emphasize the importance of the use of structured data formats [e.g., Fast Healthcare Interoperability Resources (FHIR)] in order to facilitate interoperability of information from different sources, a key aspect for the future development of fully digital surgical workflows<sup>[<xref ref-type="bibr" rid="B16">16</xref>]</sup>.</p>
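        <p>As a concrete illustration of such a structured format, a FHIR resource is a plain JSON document with a fixed <italic>resourceType</italic> and standardized coding systems. The minimal, FHIR R4-style Observation sketched below carries a preoperative BMI value; the LOINC code shown (39156-5, body mass index) is a standard code, while the patient reference and measurement value are illustrative.</p>

```python
import json

# Minimal FHIR R4-style Observation carrying a preoperative BMI value.
# Field names follow the FHIR Observation resource; patient reference
# and value are illustrative placeholders.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "39156-5",
                    "display": "Body mass index (BMI) [Ratio]"}]
    },
    "subject": {"reference": "Patient/example-001"},
    "valueQuantity": {"value": 27.4, "unit": "kg/m2",
                      "system": "http://unitsofmeasure.org", "code": "kg/m2"},
}

payload = json.dumps(observation)      # wire format exchanged between systems
parsed = json.loads(payload)           # any FHIR-aware consumer can read it back
print(parsed["valueQuantity"]["value"])
```

Because every system exchanging such resources agrees on the same field names and coding systems, data from EHRs, imaging platforms, and intraoperative devices can be merged without bespoke converters, which is precisely the interoperability argument made above.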
        <p>Mansoor and Ibrahim (2025) outlined concrete application scenarios for multimodal data integration. For example, they proposed a platform capable of automatically segmenting CT scans, combining them with clinical parameters, and suggesting a personalized flap strategy based on historical cases<sup>[<xref ref-type="bibr" rid="B9">9</xref>]</sup>.</p>
        <p>Park <italic>et al</italic>. (2024) posit that the integration of computer vision and natural language processing (NLP) with AI facilitates a profound comprehension of intricate patient data, thereby representing a pivotal progression towards context-sensitive, automated assistance<sup>[<xref ref-type="bibr" rid="B17">17</xref>]</sup>.</p>
      </sec>
    </sec>
    <sec id="sec3">
      <title>ROBOTIC SYSTEMS IN PLASTIC AND RECONSTRUCTIVE SURGERY</title>
      <p>The integration of robotic assistance systems marks a paradigm shift in plastic and reconstructive surgery. Although robotic systems have long been used effectively in abdominal and urological surgery, their adoption in reconstructive microsurgery has been delayed, primarily by instrument size and the lack of submillimeter precision. Recent advances in specialized microsurgical robots, exemplified by the Symani® Surgical System and MUSA, have opened new applications, particularly for vascular and lymphatic anastomoses in the submillimeter range<sup>[<xref ref-type="bibr" rid="B18">18</xref>]</sup>. The following sections outline the current state of the art, the ergonomic advantages, and initial clinical experience.</p>
      <sec id="sec3-1">
        <title>Advances in robotic-assisted microsurgery</title>
        <p>The advent of dedicated microsurgery platforms, such as the Symani® (Medical Microinstruments) and MUSA (MicroSure) systems, has given plastic surgeons access to robotic systems specifically optimized for microanatomical procedures.</p>
        <p>Symani® employs teleoperated, scalable micromanipulators that scale down hand movements by a factor of up to 20 and suppress physiological tremor<sup>[<xref ref-type="bibr" rid="B5">5</xref>]</sup>. The platform facilitates precise anastomoses on vessels with a diameter of less than 0.8 mm, a task previously exclusive to experienced supermicrosurgeons [<xref ref-type="fig" rid="fig1">Figure 1</xref>].</p>
        <fig id="fig1" position="float">
          <label>Figure 1</label>
          <caption>
            <p>Symani® Surgical System (Medical Microinstruments S.p.A., Pisa, Italy).</p>
          </caption>
          <graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="par120121.fig.1.jpg" />
        </fig>
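        <p>Conceptually, the two core control features named above - motion scaling and tremor suppression - can be sketched as a low-pass filter followed by a fixed scaling factor. The toy one-dimensional controller below uses a moving average as the filter; it is a didactic sketch under simplified assumptions, not the control law of the Symani® or any other commercial system.</p>

```python
import math

SCALE = 1.0 / 20.0   # surgeon hand motion mapped 20:1 onto the instrument tip
WINDOW = 8           # moving-average window acting as a crude low-pass filter

def filter_and_scale(hand_positions):
    """Smooth a sampled 1-D hand trajectory, then scale it down."""
    tip = []
    for i in range(len(hand_positions)):
        lo = max(0, i - WINDOW + 1)
        recent = hand_positions[lo:i + 1]
        tip.append(SCALE * sum(recent) / len(recent))
    return tip

# Simulated trajectory: slow intended motion plus 10 Hz tremor, 100 Hz sampling.
t = [k / 100.0 for k in range(200)]
hand = [2.0 * tt + 0.5 * math.sin(2.0 * math.pi * 10.0 * tt) for tt in t]
tip = filter_and_scale(hand)

def max_step(xs):
    """Largest sample-to-sample jump, a crude proxy for tremor amplitude."""
    return max(abs(a - b) for a, b in zip(xs[1:], xs[:-1]))

print("raw step:", round(max_step([SCALE * h for h in hand]), 4))
print("filtered step:", round(max_step(tip), 4))
```

The filtered trajectory preserves the slow intended motion, scaled 20:1, while the high-frequency tremor component is strongly attenuated; real systems achieve the same separation with far more sophisticated filtering and force control.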
        <p>Innocenti <italic>et al</italic>. (2023) reported successful anastomoses in lymphatic vessel reconstructions and free flap reconstruction surgery without complications in the first clinical study on the use of Symani®<sup>[<xref ref-type="bibr" rid="B4">4</xref>]</sup>. In their “first-in-human” study with MUSA®, Van Mulken <italic>et al</italic>. (2020) described eight lymphovenous anastomoses that were performed with clinically satisfactory results, thus representing a breakthrough for robotic supermicrosurgery<sup>[<xref ref-type="bibr" rid="B5">5</xref>,<xref ref-type="bibr" rid="B19">19</xref>]</sup> [<xref ref-type="fig" rid="fig2">Figure 2</xref>].</p>
        <fig id="fig2" position="float">
          <label>Figure 2</label>
          <caption>
            <p>MUSA® 3 microsurgical robotic systems for robot-assisted micro- and supermicrosurgery (MicroSure B.V., Eindhoven, The Netherlands).</p>
          </caption>
          <graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="par120121.fig.2.jpg" />
        </fig>
      </sec>
      <sec id="sec3-2">
        <title>Enhancing dexterity, precision, and ergonomics</title>
        <p>A significant benefit of robotic systems is their precision, which contributes to a reduction in intraoperative stress. Microsurgical procedures are time-consuming and physically demanding, frequently causing fatigue and musculoskeletal complaints among surgeons. Robot-based systems compensate for these limitations through motion stabilization, tremor elimination, and ergonomic console design<sup>[<xref ref-type="bibr" rid="B16">16</xref>]</sup>.</p>
        <p>In a series of 100 robotically assisted anastomoses with Symani®, Kapila <italic>et al</italic>. (2024) reported that the surgical burden on assistants and specialists was significantly reduced. The learning curve demonstrated a rapid improvement in operating times, concomitant with consistently high anastomosis quality<sup>[<xref ref-type="bibr" rid="B12">12</xref>]</sup>.</p>
        <p>In the 2024 study by Malzone <italic>et al</italic>., the ergonomic advantages of robotic systems in microsurgery are summarized as a “significant step towards extending the surgical career and improving surgical performance”<sup>[<xref ref-type="bibr" rid="B6">6</xref>]</sup>.</p>
      </sec>
      <sec id="sec3-3">
        <title>Clinical applications and case studies</title>
        <p>The clinical applications documented to date relate primarily to lymphovenous anastomoses for lymphoedema, free perforator flaps [e.g., anterolateral thigh flap (ALT), DIEP] and reconstructive procedures following tumor resection. The Symani® platform has been employed with a high degree of success in a variety of surgical contexts, including breast reconstruction, facial reconstruction and limb reconstruction. Notably, the anastomosis times for vessel diameters of less than 1 mm have been reported to range from 20 to 40 min<sup>[<xref ref-type="bibr" rid="B4">4</xref>,<xref ref-type="bibr" rid="B12">12</xref>]</sup>.</p>
        <p>In a European multicenter case series (2023), there was a consistent achievement of favorable functional and aesthetic outcomes. The complication rate was comparable to that of the conventional technique, but with superior reproducibility and reduced variability between surgeons<sup>[<xref ref-type="bibr" rid="B5">5</xref>,<xref ref-type="bibr" rid="B19">19</xref>]</sup>.</p>
        <p>In the future, the potential applications of robotic microsystems may extend to areas such as plexus reconstruction, peripheral nerve transplantation, and reconstructive lymph node transfers. Initial preclinical studies have confirmed the feasibility of these procedures<sup>[<xref ref-type="bibr" rid="B6">6</xref>,<xref ref-type="bibr" rid="B20">20</xref>]</sup>.</p>
      </sec>
    </sec>
    <sec id="sec4">
      <title>3D SIMULATION AND AR IN SURGICAL PRACTICE</title>
      <p>The integration of digital technologies, including 3D simulation, virtual reality (VR) and AR, is expanding the surgical possibilities in plastic surgery. This expansion encompasses preoperative planning, intraoperative orientation and the training of surgeons. These technologies facilitate detailed visualization of patient-specific anatomy, enhance surgical precision and promote sustainable training<sup>[<xref ref-type="bibr" rid="B21">21</xref>,<xref ref-type="bibr" rid="B22">22</xref>]</sup>.</p>
      <sec id="sec4-1">
        <title>Virtual surgical planning and simulation</title>
        <p>3D VR and AR technologies play a pivotal role in preoperative planning, primarily by enhancing spatial reasoning and anatomical orientation. In a systematic review of six studies, Vles <italic>et al</italic>. (2020) demonstrated that AR significantly enhances accuracy, for instance in mandibular osteotomies and the identification of perforators in the DIEP flap. The operating time in DIEP procedures decreased (<italic>P</italic> &lt; 0.01) and the precision of the osteotomy increased significantly when AR navigation was used<sup>[<xref ref-type="bibr" rid="B21">21</xref>]</sup>.</p>
        <p>As Sayadi <italic>et al</italic>. (2019) emphasize, the utilization of AR-based visual models renders complex anatomies more tangible and concomitantly results in a significant reduction in the error rate during 3D-based planning<sup>[<xref ref-type="bibr" rid="B23">23</xref>]</sup>.</p>
      </sec>
      <sec id="sec4-2">
        <title>AR in intraoperative navigation</title>
        <p>AR is also being used with increasing frequency in operating theatres. McGraw <italic>et al</italic>. (2023) present a preclinical investigation in which AR glasses, such as the HoloLens, were used to project 3D holograms of vessels and bones over the surgical field intraoperatively. Anatomical landmarks were localized with submillimeter accuracy, a significant advance for reality-based surgical assistance<sup>[<xref ref-type="bibr" rid="B24">24</xref>]</sup>.</p>
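        <p>The underlying registration problem can be illustrated with a pinhole-camera model: a rigid transform maps a landmark from the preoperative model frame into the camera frame, and the camera intrinsics then project it to display coordinates. The sketch below is purely didactic; the intrinsics, pose, and landmark coordinates are assumed values, not taken from any headset specification.</p>

```python
import math

FX = FY = 1000.0       # focal lengths in pixels (assumed)
CX, CY = 640.0, 360.0  # principal point, assuming a 1280x720 display

def project(point_cam):
    """Project a camera-frame 3-D point (metres) to pixel coordinates."""
    x, y, z = point_cam
    return (FX * x / z + CX, FY * y / z + CY)

def transform(point_model, yaw_rad, translation):
    """Rigid registration: rotate about the vertical axis, then translate."""
    x, y, z = point_model
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    xr, zr = c * x + s * z, -s * x + c * z
    tx, ty, tz = translation
    return (xr + tx, y + ty, zr + tz)

# An illustrative perforator landmark in the preoperative model frame (metres),
# registered to a camera pose 0.5 m in front of the patient.
landmark = (0.01, 0.02, 0.0)
cam_point = transform(landmark, yaw_rad=0.1, translation=(0.0, 0.0, 0.5))
u, v = project(cam_point)
print(round(u, 1), round(v, 1))
```

Submillimeter overlay accuracy, as reported above, means the residual error of exactly this registration-plus-projection chain, measured back in the patient frame, stays below one millimetre.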
        <p>Cai <italic>et al</italic>. (2021) reported on mixed reality navigation in craniofacial surgery, demonstrating a significant reduction in operating time with an identical level of results<sup>[<xref ref-type="bibr" rid="B25">25</xref>]</sup>.</p>
        <p>As demonstrated by Vles <italic>et al</italic>. (2020), clinical evidence indicates that AR improves intraoperative perforator identification in DIEP flap procedures compared with traditional Doppler<sup>[<xref ref-type="bibr" rid="B21">21</xref>]</sup>.</p>
      </sec>
      <sec id="sec4-3">
        <title>Educational and training applications</title>
        <p>In addition to its clinical application, AR/VR is also of great importance in surgical training. Shafarenko <italic>et al</italic>. (2022) conducted a randomized study of surgeons in training who underwent holo-surgery simulations; trainees showed a marked improvement in anatomical orientation and surgical accuracy after only a limited number of sessions<sup>[<xref ref-type="bibr" rid="B26">26</xref>]</sup>.</p>
        <p>Touch Surgery, a VR application designed for surgical scenarios, has been validated specifically for plastic and reconstructive surgery. According to Kowalewski <italic>et al</italic>. (2017), this application demonstrated significantly higher cognitive performance in residents than traditional text-based learning methods<sup>[<xref ref-type="bibr" rid="B27">27</xref>]</sup>.</p>
        <p>A recent meta-analysis of VR-based surgical training has confirmed the positive effect on the learning curve, precision and error reduction, particularly in laparoscopic training. However, the potential for adaptation for plastic-reconstructive surgery has also been demonstrated<sup>[<xref ref-type="bibr" rid="B28">28</xref>]</sup>.</p>
      </sec>
    </sec>
    <sec id="sec5">
      <title>LLMs AND CONVERSATIONAL AI IN CLINICAL PRACTICE</title>
      <p>The advent of powerful LLMs, such as GPT-4, has changed how clinical information is processed, documented, and communicated. In plastic and reconstructive surgery, potential applications include automated documentation, patient-centered communication, and digital assistance. These applications are accompanied by critical discussion of ethical and regulatory aspects.</p>
      <sec id="sec5-1">
        <title>Automating documentation and workflow support</title>
        <p>LLMs can expedite and standardize clinical documentation, thereby freeing human resources. Patel <italic>et al</italic>. (2024) compared patient-education documents generated by ChatGPT with those created by surgeons: the AI-generated texts were shorter (averaging 1,023 words compared to 2,901 words), more structured, and linguistically more accessible, while maintaining comparable medical completeness<sup>[<xref ref-type="bibr" rid="B29">29</xref>]</sup>.</p>
        <p>The utilization of AI in clinical protocols and surgical reports has also been demonstrated to be a valuable asset. In a field report from Plastic and Reconstructive Surgery - Global Open (PRS Global Open), LLM-based scribe systems were described that extract surgical documentation from audio transcripts and automatically convert it into structured reports<sup>[<xref ref-type="bibr" rid="B30">30</xref>]</sup>. Whilst these systems hold great promise, they do raise significant concerns with regard to data protection.</p>
        <p>Another example is the SurgeryLLM model, which utilizes Retrieval Augmented Generation (RAG). The generation of responses is informed by external sources, including guidelines and literature. In a preliminary study, SurgeryLLM was able to provide consistent, evidence-based responses to queries pertaining to surgical decision-making. The potential of this tool to enhance clinical efficiency is currently under discussion<sup>[<xref ref-type="bibr" rid="B31">31</xref>]</sup>.</p>
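        <p>The retrieval step of such a RAG pipeline can be illustrated in a few lines: candidate guideline snippets are ranked against the query, and the best matches are prepended to the prompt that is sent to the LLM. The sketch below uses simple word overlap for ranking; production systems (and, presumably, SurgeryLLM) rely on dense vector embeddings instead, and the guideline texts shown are illustrative, not quotations from any real guideline.</p>

```python
GUIDELINES = [
    "DIEP flap planning: assess perforator anatomy on preoperative CT angiography.",
    "Lymphovenous anastomosis is indicated for early-stage lymphedema.",
    "Prophylactic antibiotics should be given within 60 minutes of incision.",
]

STOP = {"is", "for", "the", "of", "on", "and", "which", "used", "be", "are", "what"}

def tokens(text):
    """Lower-case word set with punctuation and stop words removed."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text)
    return set(w for w in cleaned.lower().split() if w not in STOP)

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    ranked = sorted(docs, key=lambda d: len(tokens(query).intersection(tokens(d))),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Assemble retrieved snippets and the question into one LLM prompt."""
    context = "\n".join("- " + d for d in retrieve(query, docs))
    return "Answer using only this context:\n" + context + "\nQuestion: " + query

prompt = build_prompt("Which imaging is used for DIEP flap planning?", GUIDELINES)
print(prompt)
```

Grounding the model's answer in retrieved guideline text, rather than in the model's parameters alone, is what makes RAG responses auditable and evidence-based in the sense discussed above.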
      </sec>
      <sec id="sec5-2">
        <title>Patient communication and preoperative counseling</title>
        <p>LLMs can formulate medical information in a patient-friendly manner and support preoperative counselling. In a study on rhinoplasty consultations, GPT-4 demonstrated an answer accuracy of over 90% for common patient queries, including operation duration, risks and aftercare<sup>[<xref ref-type="bibr" rid="B32">32</xref>]</sup>. The responses were rated as accurate, although in some instances still overly technical.</p>
        <p>LLMs have the capacity to facilitate the generation of bespoke information sheets. A randomized study found that patients perceived AI-generated forms as “clearer and more structured” in terms of content, although the language complexity was sometimes too high<sup>[<xref ref-type="bibr" rid="B29">29</xref>]</sup>.</p>
        <p>In practice, such systems could function as “virtual assistants” - for instance, chatbots that provide standardized information around the clock. Preliminary feasibility studies have yielded favorable user feedback<sup>[<xref ref-type="bibr" rid="B32">32</xref>,<xref ref-type="bibr" rid="B33">33</xref>]</sup>.</p>
      </sec>
      <sec id="sec5-3">
        <title>Ethical and regulatory considerations</title>
        <p>Notwithstanding the considerable potential of LLMs, substantial ethical and regulatory challenges remain. A salient issue is “hallucination”: the tendency of LLMs to produce statements that sound convincing but are factually false. Byrd <italic>et al</italic>. (2024) therefore caution against the unvetted integration of AI-generated texts into clinical contexts and advocate mandatory human oversight<sup>[<xref ref-type="bibr" rid="B34">34</xref>]</sup>.</p>
        <p>Algorithmic bias poses a further risk. A systematic review found that only around 60% of the LLM-based studies analyzed had integrated countermeasures to reduce bias, such as dataset stratification by gender, ethnicity, or socioeconomic status<sup>[<xref ref-type="bibr" rid="B35">35</xref>]</sup>.</p>
        <p>Data protection and regulatory compliance represent additional key challenges. Systems that process sensitive patient data, including AI-assisted transcription or documentation platforms (e.g., medical scribe applications), are subject to stringent regulations such as the General Data Protection Regulation (GDPR) in the European Union and the Health Insurance Portability and Accountability Act (HIPAA) in the United States. These frameworks specify requirements for data minimization, encryption, consent, and traceability<sup>[<xref ref-type="bibr" rid="B36">36</xref>,<xref ref-type="bibr" rid="B37">37</xref>]</sup>. Although several providers guarantee that no identifiable data are stored, independent analyses have shown that some commercial AI tools may inadvertently retain or reproduce sensitive content, posing potential compliance and privacy risks<sup>[<xref ref-type="bibr" rid="B30">30</xref>]</sup>.</p>
        <p>A regulatory debate is underway over whether LLMs that facilitate or assist with medically relevant decisions should be classified as medical devices. The US Food and Drug Administration (FDA) and the European Medical Device Regulation (MDR) are currently developing criteria to cover these technologies and subject them to quality, safety, and transparency requirements<sup>[<xref ref-type="bibr" rid="B38">38</xref>]</sup>.</p>
      </sec>
    </sec>
    <sec id="sec6">
      <title>CHALLENGES AND LIMITATIONS</title>
      <p>Notwithstanding the considerable progress in AI-based systems and robotic assistance technologies, significant challenges remain for their implementation in plastic and reconstructive surgery. These concern, in particular, the quality and representativeness of the underlying data, the risk of algorithmic bias, and the integration of these systems into routine clinical practice. Addressing these aspects is essential to ensure patient safety and to promote acceptance by healthcare professionals.</p>
      <sec id="sec6-1">
        <title>Data quality, bias, and generalizability</title>
        <p>A significant challenge in the development and application of AI systems is the quality of the underlying data. Many AI models are trained on datasets from specific geographical regions, individual clinical centres, or homogeneous patient groups, which can limit generalizability and degrade model performance in broader or different populations<sup>[<xref ref-type="bibr" rid="B39">39</xref>]</sup>. As Johnson <italic>et al</italic>. (2016) note, the validity of such models may be questionable when they are deployed in contexts that diverge from their original training environment, which can result in clinically significant misclassifications<sup>[<xref ref-type="bibr" rid="B40">40</xref>]</sup>.</p>
        <p>Another central problem is algorithmic bias. It may emerge at various levels, including imbalanced training data, systematic distortions in clinical documentation, or insufficient consideration of social determinants. In a systematic review of more than 40 surgical AI studies, Kyaw <italic>et al</italic>. (2019) found that only around 60% explicitly implemented bias compensation measures<sup>[<xref ref-type="bibr" rid="B28">28</xref>,<xref ref-type="bibr" rid="B35">35</xref>]</sup>. Mehrabi <italic>et al</italic>. (2022) likewise emphasize that bias can occur at any stage of the AI development cycle - from data collection to annotation to clinical implementation - and that marginalized groups are particularly affected<sup>[<xref ref-type="bibr" rid="B41">41</xref>]</sup>.</p>
        <p>Federated learning is increasingly discussed as a means of enhancing fairness and generalizability. This approach allows AI models to be trained on distributed datasets without centralized data storage. Concurrently, open science initiatives - such as TRIPOD-AI (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis-Artificial Intelligence) and CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) - are being promoted to enable standardized reporting and transparent model evaluation<sup>[<xref ref-type="bibr" rid="B42">42</xref>]</sup>.</p>
      </sec>
      <sec id="sec6-2">
        <title>Integration into clinical workflow</title>
        <p>In addition to methodological quality, the practical integration of AI systems into existing clinical processes poses a significant challenge. Many AI applications perform well in pilot phases yet fail to reach routine clinical practice, primarily because of deficiencies in interoperability, standardization, and user acceptance. In particular, the integration of technical components into EHRs, picture archiving and communication systems (PACS), and surgical robotics platforms is impeded by incompatible data formats and proprietary interfaces. Gu <italic>et al</italic>. (2022) emphasize that a lack of compatibility with standards such as DICOM (Digital Imaging and Communications in Medicine) and HL7 (Health Level Seven) is a key obstacle to implementing imaging-based AI solutions in pathology - an obstacle that applies equally to plastic surgery<sup>[<xref ref-type="bibr" rid="B43">43</xref>]</sup>.</p>
        <p>Another salient problem is the so-called “black box problem”: many AI systems are based on deep learning, making their decisions difficult for users to understand. In a study on automated assessment of surgical skills, Liu <italic>et al</italic>. (2020) demonstrated that models lacking explainable decision logic frequently yielded inaccurate assessments, directly undermining trust and acceptance by medical staff<sup>[<xref ref-type="bibr" rid="B44">44</xref>]</sup>.</p>
        <p>Furthermore, clinical staff must be trained. Integrating AI-driven systems requires not only technical understanding but also the reallocation of roles within the team and the delineation of explicit responsibilities. The Sepsis Watch early warning system illustrates that even well-developed AI tools may be ignored or mistrusted if their recommendations are not transparent or lack clinical context<sup>[<xref ref-type="bibr" rid="B45">45</xref>]</sup>. Comprehensive change management strategies and interdisciplinary training programs are therefore essential.</p>
        <p>Regulatory adaptation must also be considered. While the FDA and the European MDR have classified AI systems as “Software as a Medical Device” (SaMD), experts such as Ong <italic>et al</italic>. (2025) call for a flexible, lifecycle-oriented approval approach for adaptive AI systems that continue to evolve with new data<sup>[<xref ref-type="bibr" rid="B31">31</xref>]</sup>.</p>
      </sec>
    </sec>
    <sec id="sec7">
      <title>FUTURE DIRECTIONS AND INNOVATIONS</title>
      <p>Plastic and reconstructive surgery is entering a new era in which AI is no longer a mere instrument but an integral component of a surgical ecosystem characterized by learning, adaptation, and personalization. Future developments are expected to concentrate on the personalization of AI models, the integration of multimodal data, and the establishment of a networked surgical environment in which robotic systems, AR, and ML operate in close coordination. In addition, the emerging concept of digital twins in surgery describes dynamic, patient-specific virtual replicas that integrate imaging, clinical parameters, and procedural data to simulate surgical strategies, predict outcomes, and continuously adapt treatment planning through real-time data feedback - a key step toward truly personalized and predictive surgery<sup>[<xref ref-type="bibr" rid="B46">46</xref>]</sup>.</p>
      <sec id="sec7-1">
        <title>Personalized AI models in plastic surgery</title>
        <p>A significant development in the coming years will be the personalization of AI systems. Rather than relying on large, generic training datasets, future models will be tailored to individual patient characteristics, encompassing anatomical variations, genetic markers, and prior surgical history. In their review, Mansoor and Ibrahim (2025) describe the vision of a dynamic learning system that integrates intra- and post-operative data in real time to personalize surgical decisions and thereby minimize complications<sup>[<xref ref-type="bibr" rid="B9">9</xref>]</sup>.</p>
        <p>An illustration of such personalized applications is preoperative simulation: Huang <italic>et al</italic>. (2024) demonstrated that text-to-image AI models such as DALL-E 2 can generate preoperative visualizations based on individual anatomical parameters. In their study, patients were shown realistic, AI-generated simulations of a possible post-operative outcome (e.g., a lip lift) within a few minutes, which significantly improved patient satisfaction<sup>[<xref ref-type="bibr" rid="B15">15</xref>]</sup>.</p>
      </sec>
      <sec id="sec7-2">
        <title>Multimodal AI: integrating imaging, genomics, and patient-reported outcomes</title>
        <p>A promising trend is the development of multimodal AI architectures that combine imaging, clinical data, genetic profiles, and patient-reported outcome measures (PROMs). While many earlier models were confined to single data sources, multimodal integration enables more comprehensive and contextualized decision-making. Parvin <italic>et al</italic>. (2025) emphasize that such systems already demonstrate substantial performance gains, notably in oncological diagnostics - for example, through the integrated evaluation of radiomic image features, mutation status, and laboratory parameters<sup>[<xref ref-type="bibr" rid="B47">47</xref>]</sup>.</p>
        <p>In plastic surgery, this approach would combine CT or MRI imaging for flap selection with genomic risk profiles (e.g., wound-healing disorders, fibrosis tendency) and subjective patient assessments, providing a comprehensive framework for surgical decision-making. Radiomics-based models have already demonstrated the capacity to automatically analyze texture, shape, and vessel courses, making them usable for flap planning, as shown by Jarvis <italic>et al</italic>. (2020)<sup>[<xref ref-type="bibr" rid="B48">48</xref>]</sup>.</p>
      </sec>
      <sec id="sec7-3">
        <title>Vision for a fully AI-augmented surgical ecosystem</title>
        <p>The long-term vision is an intelligent, AI-supported surgical ecosystem that connects all phases - from indication, planning, and execution to aftercare. This encompasses several technical and organizational components. Firstly, AR interfaces and robotic platforms must be integrated to combine intraoperative navigation, image overlay, and fine motor precision. Secondly, real-time feedback systems - such as post-operative image analysis or PROMs - facilitate continuous algorithm enhancement through a process known as “crescendo learning”<sup>[<xref ref-type="bibr" rid="B9">9</xref>,<xref ref-type="bibr" rid="B47">47</xref>]</sup>.</p>
        <p>Thirdly, generative AI has the potential to unlock new dimensions of information and communication. LLMs and text-to-image systems will enable interactive consultations tailored to patients’ anatomical and genetic characteristics; cultural and linguistic preferences could also be taken into account, as demonstrated by Genovese <italic>et al</italic>. (2025) for GPT-supported rhinoplasty consultations<sup>[<xref ref-type="bibr" rid="B32">32</xref>]</sup>.</p>
        <p>Recent advances in regenerative medicine include first efforts to couple AI-controlled systems with bioprinting technologies in order to create patient-specific tissue structures derived from individual 3D data and gene expression profiles. Integrating such innovations, however, requires explicit ethical and regulatory guidelines, as emphasized in recent declarations by international professional societies<sup>[<xref ref-type="bibr" rid="B49">49</xref>]</sup>.</p>
        <p>In conclusion, AI in plastic surgery is evolving beyond a mere assistive technology and may in future function as the central coordinating element of a networked, adaptive, learning surgical system. The challenge now is to transform this vision into a reality that is ethically acceptable, regulatorily compliant, and clinically feasible.</p>
      </sec>
      <sec id="sec7-4">
        <title>Expert opinion</title>
        <p>From an expert perspective, current robotic and AI-driven technologies in plastic and reconstructive surgery represent a highly promising yet transitional stage of development. Although dedicated microsurgical robotic platforms already demonstrate clear benefits in precision, tremor elimination, and ergonomics, important limitations persist, particularly regarding workflow integration, interoperability with imaging and planning systems, and the absence of intuitive haptic feedback. To fully realize their clinical and educational potential, future developments should focus on tighter integration between robotic execution, AI-based surgical planning, and adaptive outcome analysis, enabling patient-specific optimization and continuous learning. Such convergence may ultimately support the evolution toward intelligent, digitally augmented surgical ecosystems that enhance reproducibility, training, and long-term surgical performance.</p>
      </sec>
    </sec>
    <sec id="sec8">
      <title>CONCLUSION</title>
      <p>The integration of robotic systems, AI, and AR represents a fundamental shift in plastic and reconstructive surgery. Dedicated microsurgical platforms such as Symani® and MUSA enable robotic precision in the submillimetre range, improving dexterity, reproducibility, and ergonomics. Concurrently, AI-driven tools support preoperative planning, outcome prediction, and intraoperative guidance by integrating multimodal clinical data.</p>
      <p>To translate these innovations into routine practice, robust clinical validation, standardized data frameworks, and clear regulatory guidance are essential. Future research should focus on multicentre studies, interoperability with clinical information systems, and structured training programs. As personalized and adaptive AI models evolve, they will increasingly support patient-specific, predictive surgical strategies.</p>
      <p>The future of plastic surgery will not be defined by the replacement of human skill, but by a symbiotic partnership between surgeon and machine - an intelligent surgical ecosystem.</p>
    </sec>
  </body>
  <back>
    <sec>
      <title>DECLARATIONS</title>
      <sec>
        <title>Authors’ contributions</title>
        <p>Conceptualization and design of the work: Radtke C, Fast A, Novotny MJ</p>
        <p>Investigation and literature review: Novotny MJ, Fast A</p>
        <p>Drafting of the manuscript: Novotny MJ, Fast A</p>
        <p>Critical revision of the manuscript for important intellectual content: Fast A, Radtke C</p>
        <p>Supervision: Radtke C</p>
        <p>All authors have read and approved the final version of the manuscript.</p>
      </sec>
      <sec>
        <title>Availability of data and materials</title>
        <p>Not applicable.</p>
      </sec>
      <sec>
        <title>AI and AI-assisted tools statement</title>
        <p>During the preparation of this manuscript, the AI tools Elicit (Ought, version 2.0, released 2023-11-15) and ChatGPT (OpenAI, GPT-5.3, released 2025-12-15) were used solely for language editing. The tools did not influence the study design, data collection, analysis, interpretation, or the scientific content of the work. All authors take full responsibility for the accuracy, integrity, and final content of the manuscript.</p>
      </sec>
      <sec>
        <title>Financial support and sponsorship</title>
        <p>None.</p>
      </sec>
      <sec>
        <title>Conflicts of interest</title>
        <p>All authors declared that there are no conflicts of interest.</p>
      </sec>
      <sec>
        <title>Ethical approval and consent to participate</title>
        <p>Not applicable.</p>
      </sec>
      <sec>
        <title>Consent for publication</title>
        <p>Not applicable.</p>
      </sec>
      <sec>
        <title>Copyright</title>
        <p>© The Author(s) 2026.</p>
      </sec>
    </sec>
    <ref-list>
      <ref id="B1">
        <label>1</label>
        <nlm-citation publication-type="web">
          <person-group person-group-type="author">
            <name>
              <surname>Gillies</surname>
              <given-names>SHD</given-names>
            </name>
            <name>
              <surname>Millard</surname>
              <given-names>DR</given-names>
            </name>
          </person-group>
          <comment>Principles and art of plastic surgery. Available from: <uri xlink:href="https://archive.org/details/principlesartofp0000gill/page/n7/mode/2up">https://archive.org/details/principlesartofp0000gill/page/n7/mode/2up</uri>. [Last accessed on 28 Feb 2026].</comment>
        </nlm-citation>
      </ref>
      <ref id="B2">
        <label>2</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Ruccia</surname>
              <given-names>F</given-names>
            </name>
            <name>
              <surname>Mavilakandy</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Imtiaz</surname>
              <given-names>H</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>The application of robotics in plastic and reconstructive surgery: a systematic review</article-title>
          <source>Int J Med Robot</source>
          <year>2024</year>
          <volume>20</volume>
          <fpage>e2661</fpage>
          <pub-id pub-id-type="doi">10.1002/rcs.2661</pub-id>
          <pub-id pub-id-type="pmid">39004949</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B3">
        <label>3</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Tan</surname>
              <given-names>YPA</given-names>
            </name>
            <name>
              <surname>Liverneaux</surname>
              <given-names>P</given-names>
            </name>
            <name>
              <surname>Wong</surname>
              <given-names>JKF</given-names>
            </name>
          </person-group>
          <article-title>Current limitations of surgical robotics in reconstructive plastic microsurgery</article-title>
          <source>Front Surg</source>
          <year>2018</year>
          <volume>5</volume>
          <fpage>22</fpage>
          <pub-id pub-id-type="doi">10.3389/fsurg.2018.00022</pub-id>
          <pub-id pub-id-type="pmid">29740585</pub-id>
          <pub-id pub-id-type="pmcid">PMC5931136</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B4">
        <label>4</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Innocenti</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Malzone</surname>
              <given-names>G</given-names>
            </name>
            <name>
              <surname>Menichini</surname>
              <given-names>G</given-names>
            </name>
          </person-group>
          <article-title>First-in-human free flap tissue reconstruction using a dedicated microsurgical robotic platform</article-title>
          <source>Plast Reconstr Surg</source>
          <year>2023</year>
          <volume>151</volume>
          <fpage>1078</fpage>
          <lpage>82</lpage>
          <pub-id pub-id-type="doi">10.1097/prs.0000000000010108</pub-id>
          <pub-id pub-id-type="pmid">36563175</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B5">
        <label>5</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>van Mulken</surname>
              <given-names>TJM</given-names>
            </name>
            <name>
              <surname>Schols</surname>
              <given-names>RM</given-names>
            </name>
            <name>
              <surname>Scharmga</surname>
              <given-names>AMJ</given-names>
            </name>
            <etal />
            <collab>MicroSurgical Robot Research Group</collab>
          </person-group>
          <article-title>First-in-human robotic supermicrosurgery using a dedicated microsurgical robot for treating breast cancer-related lymphedema: a randomized pilot trial</article-title>
          <source>Nat Commun</source>
          <year>2020</year>
          <volume>11</volume>
          <fpage>757</fpage>
          <pub-id pub-id-type="doi">10.1038/s41467-019-14188-w</pub-id>
          <pub-id pub-id-type="pmid">32047155</pub-id>
          <pub-id pub-id-type="pmcid">PMC7012819</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B6">
        <label>6</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Malzone</surname>
              <given-names>G</given-names>
            </name>
            <name>
              <surname>Menichini</surname>
              <given-names>G</given-names>
            </name>
            <name>
              <surname>Innocenti</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Ballestín</surname>
              <given-names>A</given-names>
            </name>
          </person-group>
          <article-title>Microsurgical robotic system enables the performance of microvascular anastomoses: a randomized in vivo preclinical trial</article-title>
          <source>Sci Rep</source>
          <year>2023</year>
          <volume>13</volume>
          <fpage>14003</fpage>
          <pub-id pub-id-type="doi">10.1038/s41598-023-41143-z</pub-id>
          <pub-id pub-id-type="pmid">37635195</pub-id>
          <pub-id pub-id-type="pmcid">PMC10460789</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B7">
        <label>7</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Yu</surname>
              <given-names>E</given-names>
            </name>
            <name>
              <surname>Chu</surname>
              <given-names>X</given-names>
            </name>
            <name>
              <surname>Zhang</surname>
              <given-names>W</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Large language models in medicine: applications, challenges, and future directions</article-title>
          <source>Int J Med Sci</source>
          <year>2025</year>
          <volume>22</volume>
          <fpage>2792</fpage>
          <lpage>801</lpage>
          <pub-id pub-id-type="doi">10.7150/ijms.111780</pub-id>
          <pub-id pub-id-type="pmid">40520893</pub-id>
          <pub-id pub-id-type="pmcid">PMC12163604</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B8">
        <label>8</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Farid</surname>
              <given-names>Y</given-names>
            </name>
            <name>
              <surname>Fernando Botero Gutierrez</surname>
              <given-names>L</given-names>
            </name>
            <name>
              <surname>Ortiz</surname>
              <given-names>S</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Artificial intelligence in plastic surgery: insights from plastic surgeons, education integration, ChatGPT’s survey predictions, and the path forward</article-title>
          <source>Plast Reconstr Surg Glob Open</source>
          <year>2024</year>
          <volume>12</volume>
          <fpage>e5515</fpage>
          <pub-id pub-id-type="doi">10.1097/gox.0000000000005515</pub-id>
          <pub-id pub-id-type="pmid">38204870</pub-id>
          <pub-id pub-id-type="pmcid">PMC10781127</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B9">
        <label>9</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Mansoor</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Ibrahim</surname>
              <given-names>AF</given-names>
            </name>
          </person-group>
          <article-title>The transformative role of artificial intelligence in plastic and reconstructive surgery: challenges and opportunities</article-title>
          <source>J Clin Med</source>
          <year>2025</year>
          <volume>14</volume>
          <fpage>2698</fpage>
          <pub-id pub-id-type="doi">10.3390/jcm14082698</pub-id>
          <pub-id pub-id-type="pmid">40283528</pub-id>
          <pub-id pub-id-type="pmcid">PMC12028257</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B10">
        <label>10</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Hassan</surname>
              <given-names>AM</given-names>
            </name>
            <name>
              <surname>Biaggi</surname>
              <given-names>AP</given-names>
            </name>
            <name>
              <surname>Asaad</surname>
              <given-names>M</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Development and assessment of machine learning models for individualized risk assessment of mastectomy skin flap necrosis</article-title>
          <source>Ann Surg</source>
          <year>2023</year>
          <volume>278</volume>
          <fpage>e123</fpage>
          <lpage>30</lpage>
          <pub-id pub-id-type="doi">10.1097/sla.0000000000005386</pub-id>
          <pub-id pub-id-type="pmid">35129476</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B11">
        <label>11</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Tighe</surname>
              <given-names>D</given-names>
            </name>
            <name>
              <surname>McMahon</surname>
              <given-names>J</given-names>
            </name>
            <name>
              <surname>Schilling</surname>
              <given-names>C</given-names>
            </name>
            <name>
              <surname>Ho</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Provost</surname>
              <given-names>S</given-names>
            </name>
            <name>
              <surname>Freitas</surname>
              <given-names>A</given-names>
            </name>
          </person-group>
          <article-title>Machine learning methods applied to risk adjustment of cumulative sum chart methodology to audit free flap outcomes after head and neck surgery</article-title>
          <source>Br J Oral Maxillofac Surg</source>
          <year>2022</year>
          <volume>60</volume>
          <fpage>1353</fpage>
          <lpage>61</lpage>
          <pub-id pub-id-type="doi">10.1016/j.bjoms.2022.09.007</pub-id>
          <pub-id pub-id-type="pmid">36379810</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B12">
        <label>12</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Kapila</surname>
              <given-names>AK</given-names>
            </name>
            <name>
              <surname>Georgiou</surname>
              <given-names>L</given-names>
            </name>
            <name>
              <surname>Hamdi</surname>
              <given-names>M</given-names>
            </name>
          </person-group>
          <article-title>Decoding the impact of AI on microsurgery: systematic review and classification of six subdomains for future development</article-title>
          <source>Plast Reconstr Surg Glob Open</source>
          <year>2024</year>
          <volume>12</volume>
          <fpage>e6323</fpage>
          <pub-id pub-id-type="doi">10.1097/gox.0000000000006323</pub-id>
          <pub-id pub-id-type="pmid">39568680</pub-id>
          <pub-id pub-id-type="pmcid">PMC11578208</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B13">
        <label>13</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Park</surname>
              <given-names>BJ</given-names>
            </name>
            <name>
              <surname>Hunt</surname>
              <given-names>SJ</given-names>
            </name>
            <name>
              <surname>Martin</surname>
              <given-names>C</given-names>
              <suffix>3rd</suffix>
            </name>
            <name>
              <surname>Nadolski</surname>
              <given-names>GJ</given-names>
            </name>
            <name>
              <surname>Wood</surname>
              <given-names>BJ</given-names>
            </name>
            <name>
              <surname>Gade</surname>
              <given-names>TP</given-names>
            </name>
          </person-group>
          <article-title>Augmented and mixed reality: technologies for enhancing the future of IR</article-title>
          <source>J Vasc Interv Radiol</source>
          <year>2020</year>
          <volume>31</volume>
          <fpage>1074</fpage>
          <lpage>82</lpage>
          <pub-id pub-id-type="doi">10.1016/j.jvir.2019.09.020</pub-id>
          <pub-id pub-id-type="pmid">32061520</pub-id>
          <pub-id pub-id-type="pmcid">PMC7311237</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B14">
        <label>14</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Duran</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Demiröz</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Çörtük</surname>
              <given-names>O</given-names>
            </name>
            <name>
              <surname>Ok</surname>
              <given-names>B</given-names>
            </name>
            <name>
              <surname>Özten</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Eroğlu</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>Human vs machine: the future of decision-making in plastic and reconstructive surgery</article-title>
          <source>Aesthet Surg J</source>
          <year>2025</year>
          <volume>45</volume>
          <fpage>434</fpage>
          <lpage>40</lpage>
          <pub-id pub-id-type="doi">10.1093/asj/sjaf015</pub-id>
          <pub-id pub-id-type="pmid">39862057</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B15">
        <label>15</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Huang</surname>
              <given-names>H</given-names>
            </name>
            <name>
              <surname>Lu Wang</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Chen</surname>
              <given-names>Y</given-names>
            </name>
            <name>
              <surname>Chadab</surname>
              <given-names>TM</given-names>
            </name>
            <name>
              <surname>Vernice</surname>
              <given-names>NA</given-names>
            </name>
            <name>
              <surname>Otterburn</surname>
              <given-names>DM</given-names>
            </name>
          </person-group>
          <article-title>A machine learning approach to predicting donor site complications following DIEP flap harvest</article-title>
          <source>J Reconstr Microsurg</source>
          <year>2024</year>
          <volume>40</volume>
          <fpage>70</fpage>
          <lpage>7</lpage>
          <pub-id pub-id-type="doi">10.1055/a-2071-3368</pub-id>
          <pub-id pub-id-type="pmid">37040876</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B16">
        <label>16</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Kiwan</surname>
              <given-names>O</given-names>
            </name>
            <name>
              <surname>Al-Kalbani</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Rafie</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Hijazi</surname>
              <given-names>Y</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence in plastic surgery, where do we stand?</article-title>
          <source>JPRAS Open</source>
          <year>2024</year>
          <volume>42</volume>
          <fpage>234</fpage>
          <lpage>43</lpage>
          <pub-id pub-id-type="doi">10.1016/j.jpra.2024.09.003</pub-id>
          <pub-id pub-id-type="pmid">39435018</pub-id>
          <pub-id pub-id-type="pmcid">PMC11491964</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B17">
        <label>17</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Park</surname>
              <given-names>KW</given-names>
            </name>
            <name>
              <surname>Diop</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Willens</surname>
              <given-names>SH</given-names>
            </name>
            <name>
              <surname>Pepper</surname>
              <given-names>JP</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence in facial plastics and reconstructive surgery</article-title>
          <source>Otolaryngol Clin North Am</source>
          <year>2024</year>
          <volume>57</volume>
          <fpage>843</fpage>
          <lpage>52</lpage>
          <pub-id pub-id-type="doi">10.1016/j.otc.2024.05.002</pub-id>
          <pub-id pub-id-type="pmid">38971626</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B18">
        <label>18</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>von Reibnitz</surname>
              <given-names>D</given-names>
            </name>
            <name>
              <surname>Weinzierl</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Barbon</surname>
              <given-names>C</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>100 anastomoses: a two-year single-center experience with robotic-assisted micro- and supermicrosurgery for lymphatic reconstruction</article-title>
          <source>J Robot Surg</source>
          <year>2024</year>
          <volume>18</volume>
          <fpage>164</fpage>
          <pub-id pub-id-type="doi">10.1007/s11701-024-01937-3</pub-id>
          <pub-id pub-id-type="pmid">38581589</pub-id>
          <pub-id pub-id-type="pmcid">PMC10998780</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B19">
        <label>19</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Kueckelhaus</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Nistor</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>van Mulken</surname>
              <given-names>T</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Clinical experience in open robotic-assisted microsurgery: user consensus of the European Federation of Societies for Microsurgery</article-title>
          <source>J Robot Surg</source>
          <year>2025</year>
          <volume>19</volume>
          <fpage>171</fpage>
          <pub-id pub-id-type="doi">10.1007/s11701-025-02338-w</pub-id>
          <pub-id pub-id-type="pmid">40263142</pub-id>
          <pub-id pub-id-type="pmcid">PMC12014844</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B20">
        <label>20</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Rusch</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Hoffmann</surname>
              <given-names>G</given-names>
            </name>
            <name>
              <surname>Wieker</surname>
              <given-names>H</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Evaluation of the MMI Symani® robotic microsurgical system for coronary-bypass anastomoses in a cadaveric porcine model</article-title>
          <source>J Robot Surg</source>
          <year>2024</year>
          <volume>18</volume>
          <fpage>168</fpage>
          <pub-id pub-id-type="doi">10.1007/s11701-024-01921-x</pub-id>
          <pub-id pub-id-type="pmid">38598047</pub-id>
          <pub-id pub-id-type="pmcid">PMC11006781</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B21">
        <label>21</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Vles</surname>
              <given-names>MD</given-names>
            </name>
            <name>
              <surname>Terng</surname>
              <given-names>NCO</given-names>
            </name>
            <name>
              <surname>Zijlstra</surname>
              <given-names>K</given-names>
            </name>
            <name>
              <surname>Mureau</surname>
              <given-names>MAM</given-names>
            </name>
            <name>
              <surname>Corten</surname>
              <given-names>EML</given-names>
            </name>
          </person-group>
          <article-title>Virtual and augmented reality for preoperative planning in plastic surgical procedures: a systematic review</article-title>
          <source>J Plast Reconstr Aesthet Surg</source>
          <year>2020</year>
          <volume>73</volume>
          <fpage>1951</fpage>
          <lpage>9</lpage>
          <pub-id pub-id-type="doi">10.1016/j.bjps.2020.05.081</pub-id>
          <pub-id pub-id-type="pmid">32622713</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B22">
        <label>22</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Kim</surname>
              <given-names>Y</given-names>
            </name>
            <name>
              <surname>Kim</surname>
              <given-names>H</given-names>
            </name>
            <name>
              <surname>Kim</surname>
              <given-names>YO</given-names>
            </name>
          </person-group>
          <article-title>Virtual reality and augmented reality in plastic surgery: a review</article-title>
          <source>Arch Plast Surg</source>
          <year>2017</year>
          <volume>44</volume>
          <fpage>179</fpage>
          <lpage>87</lpage>
          <pub-id pub-id-type="doi">10.5999/aps.2017.44.3.179</pub-id>
          <pub-id pub-id-type="pmid">28573091</pub-id>
          <pub-id pub-id-type="pmcid">PMC5447526</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B23">
        <label>23</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Sayadi</surname>
              <given-names>LR</given-names>
            </name>
            <name>
              <surname>Naides</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Eng</surname>
              <given-names>M</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>The new frontier: a review of augmented reality and virtual reality in plastic surgery</article-title>
          <source>Aesthet Surg J</source>
          <year>2019</year>
          <volume>39</volume>
          <fpage>1007</fpage>
          <lpage>16</lpage>
          <pub-id pub-id-type="doi">10.1093/asj/sjz043</pub-id>
          <pub-id pub-id-type="pmid">30753313</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B24">
        <label>24</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>McGraw</surname>
              <given-names>JR</given-names>
            </name>
            <name>
              <surname>Wakim</surname>
              <given-names>JJ</given-names>
            </name>
            <name>
              <surname>Gallagher</surname>
              <given-names>RS</given-names>
            </name>
            <name>
              <surname>Kovach</surname>
              <given-names>SJ</given-names>
              <suffix>3rd</suffix>
            </name>
          </person-group>
          <article-title>Intraoperative navigation in plastic surgery with augmented reality: a preclinical validation study</article-title>
          <source>Plast Reconstr Surg</source>
          <year>2023</year>
          <volume>151</volume>
          <fpage>170e</fpage>
          <lpage>1</lpage>
          <pub-id pub-id-type="doi">10.1097/prs.0000000000009758</pub-id>
          <pub-id pub-id-type="pmid">36576839</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B25">
        <label>25</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Cai</surname>
              <given-names>EZ</given-names>
            </name>
            <name>
              <surname>Gao</surname>
              <given-names>Y</given-names>
            </name>
            <name>
              <surname>Ngiam</surname>
              <given-names>KY</given-names>
            </name>
            <name>
              <surname>Lim</surname>
              <given-names>TC</given-names>
            </name>
          </person-group>
          <article-title>Mixed reality intraoperative navigation in craniomaxillofacial surgery</article-title>
          <source>Plast Reconstr Surg</source>
          <year>2021</year>
          <volume>148</volume>
          <fpage>686e</fpage>
          <lpage>8</lpage>
          <pub-id pub-id-type="doi">10.1097/prs.0000000000008375</pub-id>
          <pub-id pub-id-type="pmid">34495911</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B26">
        <label>26</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Shafarenko</surname>
              <given-names>MS</given-names>
            </name>
            <name>
              <surname>Catapano</surname>
              <given-names>J</given-names>
            </name>
            <name>
              <surname>Hofer</surname>
              <given-names>SOP</given-names>
            </name>
            <name>
              <surname>Murphy</surname>
              <given-names>BD</given-names>
            </name>
          </person-group>
          <article-title>The role of augmented reality in the next phase of surgical education</article-title>
          <source>Plast Reconstr Surg Glob Open</source>
          <year>2022</year>
          <volume>10</volume>
          <fpage>e4656</fpage>
          <pub-id pub-id-type="doi">10.1097/gox.0000000000004656</pub-id>
          <pub-id pub-id-type="pmid">36348749</pub-id>
          <pub-id pub-id-type="pmcid">PMC9633082</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B27">
        <label>27</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Kowalewski</surname>
              <given-names>KF</given-names>
            </name>
            <name>
              <surname>Hendrie</surname>
              <given-names>JD</given-names>
            </name>
            <name>
              <surname>Schmidt</surname>
              <given-names>MW</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Validation of the mobile serious game application Touch Surgery<sup>TM</sup> for cognitive training and assessment of laparoscopic cholecystectomy</article-title>
          <source>Surg Endosc</source>
          <year>2017</year>
          <volume>31</volume>
          <fpage>4058</fpage>
          <lpage>66</lpage>
          <pub-id pub-id-type="doi">10.1007/s00464-017-5452-x</pub-id>
          <pub-id pub-id-type="pmid">28281111</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B28">
        <label>28</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Kyaw</surname>
              <given-names>BM</given-names>
            </name>
            <name>
              <surname>Saxena</surname>
              <given-names>N</given-names>
            </name>
            <name>
              <surname>Posadzki</surname>
              <given-names>P</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Virtual reality for health professions education: systematic review and meta-analysis by the digital health education collaboration</article-title>
          <source>J Med Internet Res</source>
          <year>2019</year>
          <volume>21</volume>
          <fpage>e12959</fpage>
          <pub-id pub-id-type="doi">10.2196/12959</pub-id>
          <pub-id pub-id-type="pmid">30668519</pub-id>
          <pub-id pub-id-type="pmcid">PMC6362387</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B29">
        <label>29</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Patel</surname>
              <given-names>I</given-names>
            </name>
            <name>
              <surname>Om</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Cuzzone</surname>
              <given-names>D</given-names>
            </name>
            <name>
              <surname>Garcia Nores</surname>
              <given-names>G</given-names>
            </name>
          </person-group>
          <article-title>Comparing ChatGPT vs surgeon-generated informed consent documentation for plastic surgery procedures</article-title>
          <source>Aesthet Surg J Open Forum</source>
          <year>2024</year>
          <volume>6</volume>
          <fpage>ojae092</fpage>
          <pub-id pub-id-type="doi">10.1093/asjof/ojae092</pub-id>
          <pub-id pub-id-type="pmid">39544451</pub-id>
          <pub-id pub-id-type="pmcid">PMC11561908</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B30">
        <label>30</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Mess</surname>
              <given-names>SA</given-names>
            </name>
            <name>
              <surname>Mackey</surname>
              <given-names>AJ</given-names>
            </name>
            <name>
              <surname>Yarowsky</surname>
              <given-names>DE</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence scribe and large language model technology in healthcare documentation: advantages, limitations, and recommendations</article-title>
          <source>Plast Reconstr Surg Glob Open</source>
          <year>2025</year>
          <volume>13</volume>
          <fpage>e6450</fpage>
          <pub-id pub-id-type="doi">10.1097/gox.0000000000006450</pub-id>
          <pub-id pub-id-type="pmid">39823022</pub-id>
          <pub-id pub-id-type="pmcid">PMC11737491</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B31">
        <label>31</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Ong</surname>
              <given-names>CS</given-names>
            </name>
            <name>
              <surname>Obey</surname>
              <given-names>NT</given-names>
            </name>
            <name>
              <surname>Zheng</surname>
              <given-names>Y</given-names>
            </name>
            <name>
              <surname>Cohan</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Schneider</surname>
              <given-names>EB</given-names>
            </name>
          </person-group>
          <article-title>SurgeryLLM: a retrieval-augmented generation large language model framework for surgical decision support and workflow enhancement</article-title>
          <source>NPJ Digit Med</source>
          <year>2024</year>
          <volume>7</volume>
          <fpage>364</fpage>
          <pub-id pub-id-type="doi">10.1038/s41746-024-01391-3</pub-id>
          <pub-id pub-id-type="pmid">39695316</pub-id>
          <pub-id pub-id-type="pmcid">PMC11655968</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B32">
        <label>32</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Genovese</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Prabha</surname>
              <given-names>S</given-names>
            </name>
            <name>
              <surname>Borna</surname>
              <given-names>S</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Artificial intelligence for patient support: assessing retrieval-augmented generation for answering postoperative rhinoplasty questions</article-title>
          <source>Aesthet Surg J</source>
          <year>2025</year>
          <volume>45</volume>
          <fpage>735</fpage>
          <lpage>44</lpage>
          <pub-id pub-id-type="doi">10.1093/asj/sjaf038</pub-id>
          <pub-id pub-id-type="pmid">40088460</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B33">
        <label>33</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Song</surname>
              <given-names>T</given-names>
            </name>
            <name>
              <surname>Pabst</surname>
              <given-names>F</given-names>
            </name>
            <name>
              <surname>Eck</surname>
              <given-names>U</given-names>
            </name>
            <name>
              <surname>Navab</surname>
              <given-names>N</given-names>
            </name>
          </person-group>
          <article-title>Enhancing patient acceptance of robotic ultrasound through conversational virtual agent and immersive visualizations</article-title>
          <source>arXiv</source>
          <year>2025</year>
          <pub-id pub-id-type="doi">10.48550/arXiv.2502.10088</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B34">
        <label>34</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Byrd</surname>
              <given-names>TF</given-names>
              <suffix>4th</suffix>
            </name>
            <name>
              <surname>Tignanelli</surname>
              <given-names>CJ</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence in surgery - a narrative review</article-title>
          <source>J Med Artif Intell</source>
          <year>2024</year>
          <volume>7</volume>
          <fpage>29</fpage>
          <pub-id pub-id-type="doi">10.21037/jmai-24-111</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B35">
        <label>35</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Pressman</surname>
              <given-names>SM</given-names>
            </name>
            <name>
              <surname>Borna</surname>
              <given-names>S</given-names>
            </name>
            <name>
              <surname>Gomez-Cabello</surname>
              <given-names>CA</given-names>
            </name>
            <name>
              <surname>Haider</surname>
              <given-names>SA</given-names>
            </name>
            <name>
              <surname>Haider</surname>
              <given-names>C</given-names>
            </name>
            <name>
              <surname>Forte</surname>
              <given-names>AJ</given-names>
            </name>
          </person-group>
          <article-title>AI and ethics: a systematic review of the ethical considerations of large language model use in surgery research</article-title>
          <source>Healthcare</source>
          <year>2024</year>
          <volume>12</volume>
          <fpage>825</fpage>
          <pub-id pub-id-type="doi">10.3390/healthcare12080825</pub-id>
          <pub-id pub-id-type="pmid">38667587</pub-id>
          <pub-id pub-id-type="pmcid">PMC11050155</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B36">
        <label>36</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Gilbert</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>The EU passes the AI Act and its implications for digital medicine are unclear</article-title>
          <source>NPJ Digit Med</source>
          <year>2024</year>
          <volume>7</volume>
          <fpage>135</fpage>
          <pub-id pub-id-type="doi">10.1038/s41746-024-01116-6</pub-id>
          <pub-id pub-id-type="pmid">38778162</pub-id>
          <pub-id pub-id-type="pmcid">PMC11111757</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B37">
        <label>37</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Vokinger</surname>
              <given-names>KN</given-names>
            </name>
            <name>
              <surname>Gasser</surname>
              <given-names>U</given-names>
            </name>
          </person-group>
          <article-title>Regulating AI in medicine in the United States and Europe</article-title>
          <source>Nat Mach Intell</source>
          <year>2021</year>
          <volume>3</volume>
          <fpage>738</fpage>
          <lpage>9</lpage>
          <pub-id pub-id-type="doi">10.1038/s42256-021-00386-z</pub-id>
          <pub-id pub-id-type="pmid">34604702</pub-id>
          <pub-id pub-id-type="pmcid">PMC7611759</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B38">
        <label>38</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Warraich</surname>
              <given-names>HJ</given-names>
            </name>
            <name>
              <surname>Tazbaz</surname>
              <given-names>T</given-names>
            </name>
            <name>
              <surname>Califf</surname>
              <given-names>RM</given-names>
            </name>
          </person-group>
          <article-title>FDA perspective on the regulation of artificial intelligence in health care and biomedicine</article-title>
          <source>JAMA</source>
          <year>2025</year>
          <volume>333</volume>
          <fpage>241</fpage>
          <lpage>7</lpage>
          <pub-id pub-id-type="doi">10.1001/jama.2024.21451</pub-id>
          <pub-id pub-id-type="pmid">39405330</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B39">
        <label>39</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Oakden-Rayner</surname>
              <given-names>L</given-names>
            </name>
            <name>
              <surname>Dunnmon</surname>
              <given-names>J</given-names>
            </name>
            <name>
              <surname>Carneiro</surname>
              <given-names>G</given-names>
            </name>
            <name>
              <surname>Ré</surname>
              <given-names>C</given-names>
            </name>
          </person-group>
          <article-title>Hidden stratification causes clinically meaningful failures in machine learning for medical imaging</article-title>
          <source>Proc ACM Conf Health Inference Learn</source>
          <year>2020</year>
          <volume>2020</volume>
          <fpage>151</fpage>
          <lpage>9</lpage>
          <pub-id pub-id-type="doi">10.1145/3368555.3384468</pub-id>
          <pub-id pub-id-type="pmid">33196064</pub-id>
          <pub-id pub-id-type="pmcid">PMC7665161</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B40">
        <label>40</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Johnson</surname>
              <given-names>AE</given-names>
            </name>
            <name>
              <surname>Ghassemi</surname>
              <given-names>MM</given-names>
            </name>
            <name>
              <surname>Nemati</surname>
              <given-names>S</given-names>
            </name>
            <name>
              <surname>Niehaus</surname>
              <given-names>KE</given-names>
            </name>
            <name>
              <surname>Clifton</surname>
              <given-names>DA</given-names>
            </name>
            <name>
              <surname>Clifford</surname>
              <given-names>GD</given-names>
            </name>
          </person-group>
          <article-title>Machine learning and decision support in critical care</article-title>
          <source>Proc IEEE Inst Electr Electron Eng</source>
          <year>2016</year>
          <volume>104</volume>
          <fpage>444</fpage>
          <lpage>66</lpage>
          <pub-id pub-id-type="doi">10.1109/jproc.2015.2501978</pub-id>
          <pub-id pub-id-type="pmid">27765959</pub-id>
          <pub-id pub-id-type="pmcid">PMC5066876</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B41">
        <label>41</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Mehrabi</surname>
              <given-names>N</given-names>
            </name>
            <name>
              <surname>Morstatter</surname>
              <given-names>F</given-names>
            </name>
            <name>
              <surname>Saxena</surname>
              <given-names>N</given-names>
            </name>
            <name>
              <surname>Lerman</surname>
              <given-names>K</given-names>
            </name>
            <name>
              <surname>Galstyan</surname>
              <given-names>A</given-names>
            </name>
          </person-group>
          <article-title>A survey on bias and fairness in machine learning</article-title>
          <source>ACM Comput Surv</source>
          <year>2021</year>
          <volume>54</volume>
          <fpage>1</fpage>
          <lpage>35</lpage>
          <pub-id pub-id-type="doi">10.1145/3457607</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B42">
        <label>42</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Collins</surname>
              <given-names>GS</given-names>
            </name>
            <name>
              <surname>Moons</surname>
              <given-names>KGM</given-names>
            </name>
          </person-group>
          <article-title>Reporting of artificial intelligence prediction models</article-title>
          <source>Lancet</source>
          <year>2019</year>
          <volume>393</volume>
          <fpage>1577</fpage>
          <lpage>9</lpage>
          <pub-id pub-id-type="doi">10.1016/s0140-6736(19)30037-6</pub-id>
          <pub-id pub-id-type="pmid">31007185</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B43">
        <label>43</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Gu</surname>
              <given-names>H</given-names>
            </name>
            <name>
              <surname>Liang</surname>
              <given-names>Y</given-names>
            </name>
            <name>
              <surname>Xu</surname>
              <given-names>Y</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Improving workflow integration with xPath: design and evaluation of a human-AI diagnosis system in pathology</article-title>
          <source>ACM Trans Comput Hum Interact</source>
          <year>2023</year>
          <volume>30</volume>
          <fpage>1</fpage>
          <lpage>37</lpage>
          <pub-id pub-id-type="doi">10.1145/3577011</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B44">
        <label>44</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Liu</surname>
              <given-names>X</given-names>
            </name>
            <name>
              <surname>Cruz Rivera</surname>
              <given-names>S</given-names>
            </name>
            <name>
              <surname>Moher</surname>
              <given-names>D</given-names>
            </name>
            <name>
              <surname>Calvert</surname>
              <given-names>MJ</given-names>
            </name>
            <name>
              <surname>Denniston</surname>
              <given-names>AK</given-names>
            </name>
            <collab>SPIRIT-AI and CONSORT-AI Working Group</collab>
          </person-group>
          <article-title>Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension</article-title>
          <source>Nat Med</source>
          <year>2020</year>
          <volume>26</volume>
          <fpage>1364</fpage>
          <lpage>74</lpage>
          <pub-id pub-id-type="pmid">33328048</pub-id>
          <pub-id pub-id-type="pmcid">PMC8183333</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B45">
        <label>45</label>
        <nlm-citation publication-type="web">
          <comment>WIRED. AI can help patients - but only if doctors understand it. Available from: <uri xlink:href="https://www.wired.com/story/ai-help-patients-doctors-understand/">https://www.wired.com/story/ai-help-patients-doctors-understand/</uri>. [Last accessed on 28 Feb 2026].</comment>
        </nlm-citation>
      </ref>
      <ref id="B46">
        <label>46</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Bruynseels</surname>
              <given-names>K</given-names>
            </name>
            <name>
              <surname>Santoni de Sio</surname>
              <given-names>F</given-names>
            </name>
            <name>
              <surname>van den Hoven</surname>
              <given-names>J</given-names>
            </name>
          </person-group>
          <article-title>Digital twins in health care: ethical implications of an emerging engineering paradigm</article-title>
          <source>Front Genet</source>
          <year>2018</year>
          <volume>9</volume>
          <fpage>31</fpage>
          <pub-id pub-id-type="doi">10.3389/fgene.2018.00031</pub-id>
          <pub-id pub-id-type="pmid">29487613</pub-id>
          <pub-id pub-id-type="pmcid">PMC5816748</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B47">
        <label>47</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Parvin</surname>
              <given-names>N</given-names>
            </name>
            <name>
              <surname>Joo</surname>
              <given-names>SW</given-names>
            </name>
            <name>
              <surname>Jung</surname>
              <given-names>JH</given-names>
            </name>
            <name>
              <surname>Mandal</surname>
              <given-names>TK</given-names>
            </name>
          </person-group>
          <article-title>Multimodal AI in biomedicine: pioneering the future of biomaterials, diagnostics, and personalized healthcare</article-title>
          <source>Nanomaterials</source>
          <year>2025</year>
          <volume>15</volume>
          <fpage>895</fpage>
          <pub-id pub-id-type="doi">10.3390/nano15120895</pub-id>
          <pub-id pub-id-type="pmid">40559258</pub-id>
          <pub-id pub-id-type="pmcid">PMC12195918</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B48">
        <label>48</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Jarvis</surname>
              <given-names>T</given-names>
            </name>
            <name>
              <surname>Thornburg</surname>
              <given-names>D</given-names>
            </name>
            <name>
              <surname>Rebecca</surname>
              <given-names>AM</given-names>
            </name>
            <name>
              <surname>Teven</surname>
              <given-names>CM</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence in plastic surgery: current applications, future directions, and ethical implications</article-title>
          <source>Plast Reconstr Surg Glob Open</source>
          <year>2020</year>
          <volume>8</volume>
          <fpage>e3200</fpage>
          <pub-id pub-id-type="doi">10.1097/gox.0000000000003200</pub-id>
          <pub-id pub-id-type="pmid">33173702</pub-id>
          <pub-id pub-id-type="pmcid">PMC7647513</pub-id>
        </nlm-citation>
      </ref>
      <ref id="B49">
        <label>49</label>
        <nlm-citation publication-type="journal">
          <person-group person-group-type="author">
            <name>
              <surname>Dhawan</surname>
              <given-names>R</given-names>
            </name>
            <name>
              <surname>Brooks</surname>
              <given-names>KD</given-names>
            </name>
            <name>
              <surname>Shauly</surname>
              <given-names>O</given-names>
            </name>
            <name>
              <surname>Shay</surname>
              <given-names>D</given-names>
            </name>
            <name>
              <surname>Losken</surname>
              <given-names>A</given-names>
            </name>
          </person-group>
          <article-title>Ethical considerations for generative artificial intelligence in plastic surgery</article-title>
          <source>Plast Reconstr Surg Glob Open</source>
          <year>2025</year>
          <volume>13</volume>
          <fpage>e6825</fpage>
          <pub-id pub-id-type="doi">10.1097/gox.0000000000006825</pub-id>
          <pub-id pub-id-type="pmid">40469554</pub-id>
          <pub-id pub-id-type="pmcid">PMC12133144</pub-id>
        </nlm-citation>
      </ref>
    </ref-list>
  </back>
</article>