
comprehensive and coherent answers, yet ChatGPT was limited in providing personalized advice critical for quality patient consultation[23].

Regarding cleft lip and palate repairs, Fazilat et al. used paired t-tests to compare ChatGPT-generated responses to thirty cleft lip and palate questions with information from four academic and professional sources for quality and readability[31]. Eleven plastic surgeons evaluated the comprehensiveness, clarity, and accuracy of the two sources and selected the source they preferred as providing the highest-quality information[31]. Twenty-nine non-medical individuals only selected the source they preferred. Plastic surgeons scored ChatGPT significantly higher than the academic and professional sources regarding comprehensiveness (P < 0.0001) and clarity (P < 0.001)[31]. Additionally, plastic surgeons and non-medical individuals preferred ChatGPT's cleft lip and palate information 60.88% and 60.46% of the time, respectively[31]. The number of inaccuracies in ChatGPT and in the academic and professional sources was similar. Additionally, the readability of both sources exceeded the sixth-grade level recommended by the NIH according to the following readability formulas: Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning Fog Index, Simple Measure of Gobbledygook Index, Coleman-Liau Index, Linsear Write Formula, and Automated Readability Index[31]. The results of this study highlight ChatGPT's ability to produce quality cleft lip and palate information that plastic surgeons and non-medical individuals prefer over currently available academic and professional sources[31]. Likewise, in a study by Chaker et al., two senior pediatric plastic surgeons qualitatively evaluated the accuracy of ChatGPT-generated responses to common postoperative questions about cleft lip and palate repair against their own expert responses[24]. The two pediatric plastic surgeons determined that the accuracy rate of ChatGPT-generated information was 69% compared to their expert responses, once again demonstrating that ChatGPT has the potential to generate patient education material and can reduce physician workload[24]. Therefore, ChatGPT may be used to produce high-quality information for patients across multiple disciplines, though more personalized output may be needed.
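
Readability comparisons of this kind can be reproduced with standard tools. The sketch below is illustrative only and is not a reconstruction of Fazilat et al.'s analysis: it computes the Flesch-Kincaid Grade Level (0.39 x words per sentence + 11.8 x syllables per word - 15.59) with a rough syllable heuristic for paired ChatGPT and reference answers, then compares the paired scores with a t-test. The sample texts, the syllable counter, and the function names are assumptions for demonstration.

import re
from scipy import stats

def count_syllables(word: str) -> int:
    # Rough estimate: count contiguous vowel groups (heuristic, not exact).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Hypothetical paired answers to the same patient questions (placeholders only).
chatgpt_answers = [
    "Cleft lip repair is usually done when your baby is a few months old.",
    "Your care team will explain feeding and wound care before you go home.",
    "Call the clinic if you notice bleeding, fever, or trouble feeding.",
]
reference_answers = [
    "Primary cheiloplasty is customarily undertaken at approximately three months of age.",
    "Postoperative counseling encompasses feeding modifications and incision management.",
    "Caregivers should seek evaluation for hemorrhage, pyrexia, or feeding intolerance.",
]

chatgpt_scores = [flesch_kincaid_grade(t) for t in chatgpt_answers]
reference_scores = [flesch_kincaid_grade(t) for t in reference_answers]

# Paired t-test on per-question readability scores, mirroring a paired study design.
t_stat, p_value = stats.ttest_rel(chatgpt_scores, reference_scores)
print("ChatGPT grade levels:  ", [round(s, 1) for s in chatgpt_scores])
print("Reference grade levels:", [round(s, 1) for s in reference_scores])
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

Scores above 6 on such formulas would exceed the NIH-recommended reading level noted above; dedicated packages such as textstat implement these and the other formulas listed in the study.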

Effectiveness of AIVAs in producing patient educational material
AIVAs utilize NLP to comprehend human speech and provide answers in a conversational form. AIVAs have already been used by major technology companies such as IBM to answer customer inquiries without human assistance[32], and in a similar fashion, Boczar et al. evaluated the ability of AIVAs to respond to plastic surgery FAQs[32]. Their study trained an AIVA to answer commonly asked questions addressing ten frequent patient concerns in plastic surgery[32]. Individuals were then asked to complete a Likert scale, indicate whether the AIVA response was correct, and evaluate its potential use as a source of patient-facing information[32]. The AIVA answered plastic surgery patients' frequently asked questions correctly 92.3% of the time, while participants believed that only 83.3% of its answers were correct[32]. Interestingly, according to the Likert scale, patients were neutral when asked if the technology could replace human assistance[32]. Overall, AIVAs may have a future role in providing accurate answers to routine surgical questions, though further refinement is necessary before more widespread adoption by providers and acceptance by patients.
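
For context, the retrieval step behind such an assistant can be sketched in a few lines. The example below is not the system Boczar et al. evaluated; it is a minimal illustration, under the assumption of a hand-curated FAQ list and an arbitrary TF-IDF similarity threshold, of how a routine patient question might be matched to a vetted answer.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical curated FAQ pairs a practice might maintain (placeholders only).
faq = [
    ("How long is recovery after breast augmentation?",
     "Most patients return to light activity within about a week; ask your surgeon about your case."),
    ("When can I shower after surgery?",
     "Showering is typically allowed once your surgeon confirms the incisions are sealed."),
    ("Will my insurance cover the procedure?",
     "Coverage varies by plan and indication; our office can help you verify benefits."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_question: str, threshold: float = 0.2) -> str:
    # Return the vetted answer whose FAQ question is most similar to the input.
    scores = cosine_similarity(vectorizer.transform([user_question]), question_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "I'm not sure -- please contact the office so a member of the care team can help."
    return faq[best][1]

print(answer("How soon can I take a shower after my operation?"))

A deployed assistant would add intent handling, escalation to staff, and audit logging, but the core idea of routing routine questions to pre-approved answers is the same.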


PRE- AND POSTOPERATIVE ASSESSMENTS
Breast reconstruction
Patient satisfaction is a central goal of breast reconstruction, and delivering patient-centered treatment during the reconstructive process can help improve the perception of quality of care[16]. ML methods have accordingly enabled more patient-specific care in PRS and other disciplines of surgery [Figure 2]. For example, ML has been used to assist clinicians in reconstructive method selection, preoperative planning, facilitation of postoperative monitoring, enhancement of patient outcomes, and to decrease hospital