Figure 4. The comprehensiveness of ChatGPT's responses, which were categorized as less comprehensive, more comprehensive, equally comprehensive, or different.


ChatGPT can be a tool for patient education. However, in our opinion, it should not be used as a substitute for the initial consultation, a point that ChatGPT itself makes in nearly all of its responses. The consistency of ChatGPT's responses across procedures makes it easier to recommend as a patient-education tool, since the content of its responses is likely to be similar for corresponding questions. The questions in this study were all similar in style, which could explain the similarity of the responses, based on what is known about ChatGPT from previous studies[10]. Future investigations could ask the same question with different phrasing and at different health literacy levels, on different days, and with different OpenAI accounts to see whether this consistency remains, as sketched below.
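To make that replication protocol concrete, the sketch below shows one way such a consistency check could be scripted against the OpenAI API. It is a minimal illustration, not part of this study: the question wordings, the choice of gpt-3.5-turbo, and the crude string-similarity measure are all assumptions; a formal study would compare responses by blinded expert rating rather than text overlap.

```python
# Hypothetical sketch (not from this study): probing ChatGPT's response
# consistency by re-asking one preoperative question with varied phrasing.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same clinical question phrased at different reading levels (illustrative examples).
phrasings = [
    "What are the risks of carpal tunnel release surgery?",
    "What could go wrong if I have surgery for my carpal tunnel?",
    "Enumerate the perioperative complications of carpal tunnel release.",
]

responses = []
for question in phrasings:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # free tier comparable to the version studied here
        messages=[{"role": "user", "content": question}],
    )
    responses.append(reply.choices[0].message.content)

# Crude pairwise text similarity as a first-pass consistency signal only;
# semantic similarity or expert review would be more appropriate in practice.
for i in range(len(responses)):
    for j in range(i + 1, len(responses)):
        ratio = SequenceMatcher(None, responses[i], responses[j]).ratio()
        print(f"phrasing {i} vs {j}: similarity {ratio:.2f}")
```

Running the same script on different days or under different accounts, and comparing the resulting similarity scores, would test whether the consistency observed here generalizes beyond a single session.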


While the current iteration of AI holds promise, it cannot be fully recommended because of the risk of disseminating inaccurate information. Although this issue was not observed in our study, it has been reported in previous literature and remains an important consideration when using ChatGPT for patient education[5]. Inaccurate information could cause unnecessary stress in patients or leave them with an incomplete understanding of their procedures. Caution is warranted when using ChatGPT for any advice, especially medical advice, and clarification and validation of this information with a medical professional are always necessary. Additionally, the use of ChatGPT in patient consultation raises another ethical consideration in the realm of patient privacy. By asking patients to discuss their medical conditions and procedures with ChatGPT, we would be encouraging them to reveal their data to an unprotected and relatively unregulated service. Some studies have suggested providing ChatGPT with clinical vignettes to elicit more specific responses; however, asking patients to do this with their own medical information would further compromise their privacy. It is important that patients are made aware of these potential risks if physicians recommend its use.


Limitations
Because this study was conducted on the same day under the same account, no conclusions can be drawn about ChatGPT's responses across accounts or on different days. Additionally, this study was conducted using ChatGPT-3.5, which has been shown to perform worse on board examinations than ChatGPT-4; however, since ChatGPT-3.5 is the free version, it is the version patients are more likely to use for their preoperative questions[16]. Since ChatGPT-