Figure 1. The structure of AI (adapted and modified from [40]). AI: Artificial intelligence.
To ensure that AI is accessible and properly used in healthcare, clinicians need to define future healthcare
goals and work closely with computer scientists to develop clinically relevant and interpretable AI
algorithms. Data should be collected robustly, digitized, and made usable for AI. Cost-effectiveness,
security, and privacy frameworks are critical. Companies must keep data secure and confidential, and high
ethical standards must be maintained to ensure long-term benefits for healthcare systems. Algorithms must
be validated, and evidence of their safety and effectiveness must be made widely available. AI results depend on
accurate and unbiased data. Biased data lead to unreliable predictions, especially for underrepresented
groups such as racial minorities and women. Relying solely on AI can compromise patient autonomy. The
human element in healthcare remains essential. Autonomous robotic surgery requires further development,
so AI is likely to assist the doctor, but not replace them[49]. AI has the potential to relieve doctors of time-
consuming paperwork and other non-medical tasks, such as bureaucratic duties. However, it is important to
recognize the limitations of AI. Firstly, it is not currently possible for AI to completely replace doctors in
diagnosis and decision making. In many cases, AI is currently only used under certain restrictions, and the
results of the algorithms are often merely an association. General guidelines for the use of AI in medicine are
long overdue[50].
The introduction of AI in plastic surgery raises ethical issues, particularly in relation to the objective
assessment of attractiveness. Discrimination based on ethnicity and gender is possible. AI in plastic surgery
could reinforce racial disparities and promote a standardized image of human appearance. AI-powered photo-editing applications
create unattainable standards of beauty, which may lead to more cosmetic procedures and mental health
problems. Cosmetic surgeons should consider the mental health of their patients when making decisions
about surgical procedures[15].
In principle, the law distinguishes between three categories according to which AI is to be assessed: (1)
responsibility; (2) liability; and (3) culpability. In the case of surgical robots, the responsibility lies with the
developers or the healthcare institution using the technology. If AI causes harm during surgery, the surgeon,
the hospital, or even the developer might be held accountable for damages. Culpability is the most
difficult to assess, because it is not yet clear how to attribute blame to an AI system. Unlike humans, these
technologies do not have intent, making it hard to apply traditional legal concepts of culpability. In the
future, surgical robots will perform routine operations under the supervision of a human surgeon. This
poses the same problem as self-driving cars in terms of responsibility and the surgeon’s role. The surgeon
must be able to intervene quickly at any time and act for the benefit of the patient[51]. This highlights the
complexities in determining legal and ethical responsibility when AI takes on roles traditionally managed by