
Assigning Moral Responsibility for Medical Artificial Intelligence and Machine Learning


This article was originally published in Verdict: Legal Analysis and Commentary from Justia on February 8, 2021.

Charles Binkley (@CharlesBinkley) is the director of bioethics at the Markkula Center for Applied Ethics. Views are his own.

Artificial Intelligence (AI) and Machine Learning (ML) systems are being widely implemented for clinical decision support in medicine. Currently used systems include those able to diagnose skin conditions, detect cancer on CT scans, and identify diseases of the eye. Others under investigation can even perform surgery semi-autonomously. The increasing use of AI/ML for clinical decision support presents important ethical concerns beyond the much-explored issues of data privacy and the potential for bias. Two features of AI/ML in clinical decision making—shared agency between the physician and the machine, and the “explainability” of the machine’s decisions—raise important ethical and legal questions about how to assign responsibility for medical decisions.

Issues Arising From Explainability

As AI/ML medical systems have become more sophisticated, the reasoning behind their decisions has become harder to explain. This is due to the complexity of the networks that evaluate massive amounts of data to make predictions for an individual patient. The how of the system can be explained in terms of the data on which it was trained, the experts used to annotate the data, and the statistical calculations used by the algorithm to arrive at a decision. In contrast, the why behind a specific recommendation for an individual patient may not be possible to explain. Imagine going to the doctor with abdominal pain and weight loss. Some blood tests are done, and after all your information is evaluated by the AI/ML system, it delivers a diagnosis of pancreatic cancer. When asked how the diagnosis was made, the system may “say” that in 99.9% of the available cases, when the results were the same as yours, the diagnosis was pancreatic cancer. That’s merely correlation. When asked why, in your particular case, a diagnosis of pancreatic cancer was made, the reason cannot be explained. The system cannot provide causation for the diagnosis.
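
To make this distinction concrete, here is a minimal sketch in Python, using scikit-learn and purely synthetic data as a stand-in for a clinical system; the model, features, and labels are hypothetical illustrations, not any real diagnostic tool. The “how” is fully visible in the code: the training data, the labels, and the learning algorithm. The “why” for a single new patient, however, arrives only as a probability distilled from past correlations, with no causal account attached.

```python
# Illustrative sketch only: synthetic data and a generic classifier standing in
# for a clinical AI/ML system. Nothing here reflects a real diagnostic model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# The "how": the training data, its labels, and the learning algorithm can all
# be described. Here, 1,000 hypothetical past patients with 20 lab features,
# labeled 1 if they were diagnosed with pancreatic cancer, 0 otherwise.
X_train = rng.normal(size=(1000, 20))
y_train = (X_train[:, 3] + X_train[:, 7] > 1.0).astype(int)  # hidden pattern

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The "why" for one new patient: the system returns a probability built from
# correlations across past cases, not a causal explanation of this case.
new_patient = rng.normal(size=(1, 20))
prob_cancer = model.predict_proba(new_patient)[0, 1]
print(f"Predicted probability of pancreatic cancer: {prob_cancer:.3f}")
# At best we can report how often similar inputs carried this label in the
# past; the model offers no reason why this patient received this score.
```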

The opacity of the system, the so-called “black box,” is not necessarily an attempt to protect a proprietary algorithm but rather reflects the fact that the decision of the machine cannot be communicated according to the rules of human language, logic, and reasoning. In the setting of AI/ML clinical decision support systems, physicians and patients are presented with diagnoses and treatment plans without any underlying reasoning. Further issues potentially arise in attempting to convert the machine’s output into results that physicians and patients can understand. How would programmers use code to represent the system’s “thinking”? Would the goal be to explain the result in a way that is accurate and comprehensible, or to represent it in a way that is most likely to be convincing to the physician and patient?
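
One way programmers might try to answer that question is with post-hoc explanation tools. The sketch below is again a hedged, synthetic illustration, using scikit-learn’s permutation importance as a stand-in for such tools: it yields a global statistical summary of which inputs the model’s predictions depend on, not the machine’s “reasoning” about a particular patient. Whether presenting such a summary counts as an accurate explanation or merely a persuasive one is precisely the open question.

```python
# Illustrative sketch only: a common post-hoc approach (permutation importance)
# applied to a synthetic stand-in model. It summarizes which inputs the model's
# predictions depend on statistically; it does not recover the model's
# "reasoning" for any individual patient.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)  # hidden pattern in synthetic data

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops: a global,
# statistical summary of model behavior, not a per-patient causal account.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in np.argsort(result.importances_mean)[::-1][:3]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```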

Issues Arising From Shared Agency

Physicians utilizing AI/ML medical decision support systems must assume shared agency with these systems. Whether the machine is seen as augmenting, or to some extent replacing, the physician’s decision, the two are sharing in decision making that will affect a patient. One might ask how this differs from peer collaboration—two physicians discussing a difficult case together. One important distinction is that in a collaborative relationship the peer being consulted would be able to explain their reasoning. The AI/ML system cannot. As well, the two physicians can enter into a dialogue whereby each can evaluate and critique the other’s assumptions, logic, and conclusions more effectively than can a solitary reasoner. None of this is possible with AI/ML medical support systems as they are currently designed.

Moreover, physicians may be intrinsically or extrinsically influenced to prioritize the system’s decision over their own judgment. This can occur through automation bias. Simply because of the perceived power of the system, physicians may transfer their own epistemic authority—which is based on education, knowledge, and experience—to the system. In addition, the system may have been developed and “taught” by experts in that physician’s specialty, thus leading the physician to defer to the machine’s knowledge. There may also be external pressure to adopt the machine’s decisions based on expectations from payers and health systems. In fact, health systems may value AI/ML systems not only for improved quality in patient care, but also for their efficiency and potential to generate additional revenue. As AI/ML systems become more widespread and accepted, it is possible that they will themselves evolve into the legal standard of care. This would have a significant impact on physician judgment in terms of assessing risk and liability, particularly when a physician decides to override the system.

The Resulting Moral Conundrum

The merger of shared agency and the lack of explainability in the AI/ML system creates a moral conundrum for physicians. Physicians have both an ethical and a legal obligation to accept responsibility for the decisions they make. While legal systems may become involved only if harm comes to the patient, morality takes less of a consequentialist approach.

Take for instance disagreements that will inevitably arise between AI/ML systems and physicians. It is unclear how these differences will be resolved. Without any understanding of the reasoning behind the system’s decisions, physicians will be left without a basis for judging whether they or the machine is correct. One recommended system of adjudication would be to involve a peer as a “tie-breaker.” One obvious weakness is that a peer is not always available to resolve these conflicts. As well, a peer may harbor bias either for or against the machine, further complicating their opinion. Ultimately, the physician will have to decide one way or the other.

Say the physician overrides the system, and the system was in error. The system says you have pancreatic cancer, and the physician says you have pancreatitis. With the former, you have less than a five percent chance of being alive in five years, whereas with the latter you’ll recover in a few days. No harm comes to you since the physician prevailed, but trust between the physician and the system, as well as between you and the system, is eroded. This is magnified by the fact that neither the physician nor you can understand why the system arrived at its conclusion. You and the physician might also ask what the system could have “seen” that led to the diagnosis.

While some systems may “learn” from being incorrect, the physician and the patient may view the experience as traumatic and stressful—a “near miss” where harm was narrowly avoided. This may in turn affect the weight the physician places on the AI/ML system’s conclusions in the future. For a system that is continuously learning, and thus presumably improving with each decision, this could lead the physician to discount its decisions solely on the basis of a previous bad experience. The physician’s bias against the system for a past error may therefore no longer be justified.

What if the physician prescribes a treatment contrary to the system and harms the patient as a result? Legal scholars argue that because tort law privileges the standard of care, a physician who followed that standard would likely not be liable even if the patient were harmed. This is true regardless of the machine’s recommendation. What is less clear is how adopting an AI/ML system as the standard of care would affect physician decision making, particularly if they disagreed with the system’s decision. Again, legal scholars argue that liability would be guided by the standard of care, such that even if the physician disagreed with the system, they would feel compelled to follow the machine’s decision for fear of legal liability if the patient were harmed.

The moral calculus in both cases, however, would consider not only whether the physician followed accepted standards but also whether the patient was adequately informed and involved in the decision-making process. Tasking physicians with presenting to patients decisions that have been made by unexplainable AI/ML systems places an untenable moral burden on physicians. It is unethical for a physician to speculatively interpret the system’s output for the patient. It constitutes a breach of the physician’s responsibility to be truthful and also disregards the physician’s own professional autonomy.

Even more outrageous is asking a patient to weigh a physician’s recommendation against an AI/ML system’s decision when only the physician’s recommendation can be explained. Doing so shows a gross disregard for both the patient’s autonomy and the physician’s moral obligation to ensure that the patient is adequately informed.

Possible Resolutions of the Conundrum

The two most direct ways of resolving the conundrum of moral responsibility in AI/ML clinical decision support systems are either to make the decisions explainable or to assign moral agency to the machines. While in the long run both of these options might become reality, in the near future they seem unlikely. A third option would be to first validate the accuracy of AI/ML systems, then subject them to clinical trials to prove their clinical benefit. In doing so, there would be an empirical basis for adopting these systems as the standard of care. This would raise the stature of the machine such that, even if not completely understood, it could be better trusted by both physician and patient. This solution would also provide physicians with a firmer ethical foundation for accepting the machine’s decisions. If the machine’s recommendation were to be rejected, there would need to be a solid ethical and legal justification. For instance, a patient might reject the machine’s recommendation after being fully informed of all their options because a different treatment was more aligned with the patient’s values.

AI/ML clinical decision support tools clearly complicate the standard physician-patient relationship. However, they also hold great promise for improving health care. Importantly, they must be deployed with attention to both accuracy and ethics. While these systems may cause shifts in health care, the need for moral and legal accountability is unlikely to change.

Feb 12, 2021