websites. This suggests that patients prefer platforms that offer visual content, human interaction, and relatability.
These findings highlight the importance of guiding patients toward reliable health information sources, such as
healthcare providers, reputable medical websites, and academic literature, to support informed decision making.
Keywords: ChatGPT, gender-affirming surgery, decision making, transgender health, health information sources
INTRODUCTION
Large language models (LLMs), such as ChatGPT, have transformed information access by providing
immediate and comprehensive responses to a wide range of prompts. For transgender and gender-diverse
individuals considering gender-affirming surgery (GAS), having access to accurate, reliable, and easily
understandable information is crucial for making informed decisions about such complex and sensitive
procedures. Patients require a thorough knowledge of their surgical options, potential risks, recovery
processes, and expected outcomes.
LLMs have been shown to be effective in many areas of healthcare delivery. For example, one study found that 45% of LLM-generated clinical summaries were equivalent to, and 36% were superior to, those of medical experts, underscoring their potential to perform at or above expert levels[1]. Additionally, ChatGPT demonstrated its competency by passing the United States Medical Licensing Examination with a score of 64.4%, further showcasing its understanding of medical knowledge[2].
Despite these successes, concerns about misinformation, readability, and the oversimplification of complex medical concepts persist. A recent qualitative study among health informatics researchers found that while LLMs significantly benefit patient education, clinical tasks, personalized care, and patient-healthcare interactions, they also pose risks, such as spreading misinformation, biased decision making, and inaccuracies in communication[3]. These limitations are echoed by other researchers, who acknowledge the potential of LLMs but emphasize the ease with which misinformation can spread due to a lack of transparency and accountability[4-6]. Furthermore, the readability of LLM-generated content often exceeds the average literacy level of U.S. patients, which could impair their ability to understand essential details, particularly in a sensitive and personal subject like GAS[7].
While LLMs have risen to prominence for their broader healthcare applications, their role in specialized areas, like GAS, remains largely unexplored. GAS is a rapidly evolving field, with the number of procedures tripling in recent years[8]. This increase in operations has heightened the need for transgender and gender-diverse individuals to be able to access accurate and reliable information as they navigate the process. Prior research has shown that online forums, social media, and medical websites often serve as primary sources of GAS information for many patients, who frequently seek firsthand experiences and community support[9-11]. However, there is limited evidence on how this population uses LLMs like ChatGPT and whether these tools meet the unique needs of transgender and gender-diverse patients considering GAS.
Expanding our understanding of this area is crucial for improving patient education and ensuring that
individuals considering GAS have access to reliable, accurate, and comprehensive information. As the
number of GAS procedures continues to rise, equipping patients with trustworthy resources is key to
enhancing their health outcomes and overall satisfaction with the surgical process. Therefore, we aimed to
quantitatively assess the extent of ChatGPT use among individuals considering GAS and evaluate how it
influences their decision-making process.

