The 4 key principles of biomedical ethics in a surgical context are autonomy, nonmaleficence, beneficence, and justice.
Fairness and the taxonomy of algorithmic bias in artificial intelligence (AI) system design are important factors in the ethics of AI.
The ethical paradigm shifts as the degree of autonomy in AI agents evolves.
Ethics in AI is dynamic, and continuous revisions are needed as AI evolves.
Surgery manifests an intense form of practical ethics. The practice of surgery often forces unique ad hoc decisions based on contextual intricacies in the moment, decisions that are not typically captured in broad, top-down, or committee-approved guidelines. Surgical ethics are principled, of course, but also pragmatic. They are also replete with moral contradictions and uncertainties, and the introduction of novel technology into this environment can increase those challenges.
A discussion about ethics is often a discussion about choice. Wall et al1 defined an ethical problem as “when an agent must choose between mutually exclusive options, both of which either have equal elements of right and wrong, or are perceived as equally obligatory. The essential element that distinguishes an ethical problem from a tragic situation is the element of choice.” Moreover, choosing between options often involves identifying factors by which those options are not exactly equal, and the method one uses to weigh these factors can draw upon a set of ethical frameworks that, themselves, can be somewhat incongruous.
At their core, artificial intelligence (AI) systems—and machine learning (ML) more specifically—are also designed to make choices, often by categorizing some input among a set of nominal categories. In the past, the choices these systems made could be evaluated only by their correctness—their accuracy in applying the same categorical labels that a human would to previously unseen inputs, such as whether or not an image contains a tumor. As these systems are increasingly used in less quixotic (or more critical) scenarios, we are asking them to make choices for which even humans struggle to find correctness.2,3 Indeed, when software is more accurate than the most correct humans,4,5 yet is validated by labels provided by humans, the very nature of the process—and its application in practice—is called into question. We may no longer consider that the human and the machine stand in contrast to one another, or even that one simply uses the other; rather, we may consider that surgeons and their tools are, in a sense, a single, hybrid, active entity. We shape our tools, and thereafter, our tools shape us, as Marshall McLuhan described.
The expectations of surgery and ML are similar in some ways. Both surgeons and ML algorithms are meant to solve complex problems quickly with dispassionate technical skill, and neither has traditionally been defined by its bedside manner.6 However, surgeons are regularly faced ...