Page 412 Brochu et al. Art Int Surg 2024;4:411-26 https://dx.doi.org/10.20517/ais.2024.61
questions in a patient/procedure-specific way. Therefore, it is not a substitute for an experienced surgeon
consultation. Further research is needed to assess the reliability of ChatGPT before it can be fully recommended as a definitive patient learning tool.
Keywords: Artificial intelligence, plastic surgery, consultation, aesthetic
INTRODUCTION
ChatGPT is a generative artificial intelligence (AI) platform developed by OpenAI. It was trained on
extensive text datasets in multiple languages to be able to generate human-like responses to text-based
input[1]. Since its release, numerous studies have been published on how generative AI could revolutionize
our lives and improve efficiency in fields such as computer programming, environmental studies, and
medicine[2,3]. Specifically in healthcare, future iterations of generative AI have the potential to analyze speech
patterns and imaging for early diagnosis of psychiatric illness and cancer, along with the potential to create
accurate models of biological processes to streamline drug development and testing[4]. The medical field,
which is constantly seeking ways to improve patient outcomes, increase productivity, and enhance patient
satisfaction, was quick to adopt and research this technology. Current research indicates that AI can
perform comparably to humans on board exams and in diagnosing patients[3,5,6]. Additionally, studies have
been published on how ChatGPT can be used to enhance research productivity, aid in patient education,
and help with clinical decision making[5,6]. However, ChatGPT and other AI models may provide inaccurate
or outdated information because they cannot distinguish between reliable and unreliable resources, and
they may also fabricate information entirely. They are additionally sensitive to the phrasing of questions and
struggle to clarify ambiguous prompts[5]. These findings prompt further exploration into whether ChatGPT
could be a valuable tool to support or even replace certain functions of physicians within the healthcare
system.
One area where ChatGPT’s utility can be further evaluated is in medical consulting and patient education.
Existing studies present mixed results on ChatGPT’s ability to perform this function. While one study
showed that ChatGPT is comparable to physicians in providing responses aligned with evidence-based
guidelines, another found it unreliable in its completeness and accuracy[7,8]. A third study went
so far as to claim that ChatGPT exceeded physicians in accuracy, completeness, and overall quality[9]. These
conflicting results underscore the need for additional investigations.
Inconsistent findings could be attributed to variations in ChatGPT itself or in the type of prompts
presented. For example, one study presented ChatGPT with 123 prompts. While they found most of its
answers to be above average, some responses were described as “hazardously incorrect and incomplete”.
This study also showed that the style of the question - whether it demonstrated health literacy, used
negation, or was phrased as a question - significantly influenced the response generated by ChatGPT[10]. Another
review, which analyzed fewer prompts, emphasized ChatGPT's consistency as one of its strengths over
human physicians[11]. Understanding the contexts in which ChatGPT is reliable is crucial, as variations in its
accuracy and content could impact its recommendation as a patient education tool.
Due to the elective nature of aesthetic procedures, patient education is essential in determining whether
patients are suitable candidates. Past research has consistently shown ChatGPT to be easy to understand
and accurate when answering questions in a mock plastic surgery consultation. However, discrepancies remain
in its ability to provide individualized advice. Three articles explore the ability of ChatGPT to answer broad questions related to a
specific plastic surgery procedure in a mock consultation setting for rhinoplasty, abdominoplasty, and