Markkula Center for Applied Ethics

Human-AI Collaboration in Health Care


Ellie Glenn ’23

Lights of Ideas by CBlue98/CC BY-SA 2.0 (cropped)

Ellie Glenn is a junior majoring in bioengineering and is a 2021-22 health care ethics intern at the Markkula Center for Applied Ethics. Views are her own.

 

Over the past three decades, advances in artificial intelligence (AI) and its applications in medicine have begun to revolutionize health care by enhancing clinical decision-making. Compared to human clinicians, AI can process far larger volumes of information and perform calculations with greater speed, precision, and consistency.

AI-driven clinical decision support system (AI-CDSS) technology assists physicians in diagnosing illness, developing treatment plans, and flagging potential adverse drug reactions for patients with particular conditions or allergies. AI-CDSS relies on machine learning algorithms that recognize emerging patterns in clinical data, enabling faster and more accurate medical diagnostics. The aim of AI-CDSS applications in health care is to reduce or eliminate medical errors, thereby improving and standardizing the quality of care for all patients.
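Such systems typically combine machine-learned pattern recognition with simpler rule-based safety checks. As a rough, hypothetical illustration of the latter, the Python sketch below flags a prescription against a patient’s recorded allergies; every drug name, mapping, and function here is invented for illustration, not drawn from any real AI-CDSS product.

```python
# Minimal, hypothetical sketch of one kind of check an AI-CDSS performs:
# flagging a prescribed drug against a patient's recorded allergies.
# All drug names and mappings below are illustrative, not clinical data.

# Hypothetical knowledge base mapping drugs to the allergy class they trigger.
DRUG_ALLERGY_CLASS = {
    "amoxicillin": "penicillin",
    "cephalexin": "cephalosporin",
    "ibuprofen": "nsaid",
}

def flag_allergy_conflicts(prescriptions, patient_allergies):
    """Return (drug, allergy_class) pairs that conflict with recorded allergies."""
    allergies = {a.lower() for a in patient_allergies}
    return [
        (drug, DRUG_ALLERGY_CLASS[drug.lower()])
        for drug in prescriptions
        if DRUG_ALLERGY_CLASS.get(drug.lower()) in allergies
    ]

# Example: a patient with a documented penicillin allergy is prescribed
# amoxicillin; the check flags the conflict before the order is finalized.
print(flag_allergy_conflicts(["Amoxicillin", "Ibuprofen"], ["Penicillin"]))
# -> [('Amoxicillin', 'penicillin')]
```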

However, several limitations of this technology pose barriers to its deployment in clinical practice. For one, AI-CDSS requires a massive training data set to learn the patterns it uses to identify trends across patients. Contributing personal information to electronic health records is an area of concern for patients, whose privacy could be threatened by profiling, re-identification, data leaks, or other breaches.

Another significant problem is the potential perpetuation of biases. Every recommendation made by these decision support systems is based on a given set of data that is not guaranteed to represent the patient well. A model trained on insufficient or unrepresentative data can generate biased or overfitted outcomes, and majority populations are more likely to benefit from this advancement because they are more likely to be represented in the training data set.
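To make the mechanism concrete, here is a minimal sketch, using synthetic data and scikit-learn (neither of which appears in the original discussion), of how a model trained mostly on one population can perform well for that group and badly for an underrepresented one.

```python
# Hypothetical, deliberately simplified demonstration of how
# underrepresentation in training data yields unequal model performance.
# Synthetic data only; not drawn from any real clinical data set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flipped):
    """Synthetic patients: one feature predicts the outcome, but the
    relationship runs in the opposite direction for the second group."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y) if flipped else y

# The majority group supplies 95% of the training data.
x_maj, y_maj = make_group(1900, flipped=False)
x_min, y_min = make_group(100, flipped=True)
model = LogisticRegression().fit(
    np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh samples from each group.
x, y = make_group(1000, flipped=False)
print("majority accuracy:", model.score(x, y))   # close to 1.0
x, y = make_group(1000, flipped=True)
print("minority accuracy:", model.score(x, y))   # far below chance
```

Real clinical data sets fail in subtler ways than this toy example, but the pattern is the same: accuracy concentrates where the data are.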

Additionally, certain health care disparities that are rooted in social injustice, like the significantly higher risk of pregnancy-related death among Black women, can be exacerbated by AI-CDSS. A machine can only process the information it is given and can only offer statistically based recommendations for patient care; it cannot account for the systemic injustices and socioeconomic conditions that underlie these realities.

Thus, for AI-CDSS to be implemented successfully, it must be trained on a large data set that is representative of all patient populations. Data protection must be prioritized, along with the elimination of race, socioeconomic status, and other identifying factors that could skew results in unjust ways.

Before the medical field can advance with AI-driven health care, numerous other potential drawbacks and complications of AI-CDSS technology must be addressed. Awareness of these technological limitations is essential to navigating the role of AI in health care. In this article, I will focus on the issue of shared agency between clinician and machine.

The conflict of shared agency is highlighted by cases of disagreement between the clinician and the AI-CDSS. For instance, a surgeon’s intuition may point to one method of treatment while the algorithmic analysis indicates another. In such a case, it is unclear whose judgment carries more authority, or where medical liability would fall after a bad outcome. If the surgeon believes the machine is misguided and proceeds with their preferred course of action, they could be either commended for their professional competence or held liable for malpractice for disregarding the machine’s recommendation, depending on the result. This potential repercussion discourages clinicians from trusting and using AI.

Conversely, the implementation of AI-CDSS also introduces the risk of over-reliance on automated decision aids, a phenomenon known as automation bias. In clinical settings, automation bias is heightened by stress and time pressure. This cognitive tendency can lead clinicians to accept a machine’s recommendation without considering whether it truly is the best option for the patient given their values and goals. As a result, human capacities like abstract thinking and empathy in decision-making, qualities no machine can fully emulate, are sidelined. Likewise, subtle cues about a patient’s condition that can only be picked up through direct interpersonal interaction, like behaviors and facial expressions, are missed. It is critical that the inclusion of AI in health care not erode these uniquely human abilities.

Still, the benefits this technology offers health care cannot be overlooked. AI-CDSS is a powerful tool that will undeniably be integrated into standard medical practice in some form. To harness this technology properly and maximize its positive impact, there must be an established balance between the role and authority of AI and that of clinicians. An ideal cooperative partnership between these parties combines the intuitive and analytical thinking of humans with the computational power of machines, ultimately contributing to just and safe improvements in health care.

Productive human-AI collaboration is therefore shaped by an appropriate level of trust. Since user trust in a technology directly affects how widely it is adopted, the current lack of trust in AI must be addressed if the medical field wishes to advance in this arena. In part, the distrust stems from the opacity of decisions made by black-box algorithms too complex for clinicians to interpret. Uncertainty about sophisticated AI technology also translates into job insecurity and fear of losing autonomy over professional decisions. On the other hand, placing too much trust in AI can also be harmful, as automation bias demonstrates. Clinician trust in AI should lie somewhere between these two extremes, producing a healthy level of reliance on AI when making clinical decisions.

Proper trust levels might start with the understanding that both parties can err. While algorithms are designed to be reliable and accurate, machine computations alone are not sufficient for clinical decision-making. Perhaps future design considerations in the development of AI-CDSS technology can enhance the objectivity, accuracy, and ethical quality of its decision-making abilities, but there will always be limitations to how much we can learn exclusively from computerized data. The role of human critical thinking and intuition remains essential to quality health care. Accordingly, AI should be used as an advisory tool to supplement and improve the care provided by professionals.

This key dynamic can be preserved through meaningful human control of AI systems. AI applications in health care must be strongly and effectively governed, with close technical and ethical oversight. The focus of future development should therefore include: increasing the transparency of data systems and ensuring secure collection and storage of patient data; improving the explainability of computer operations by incorporating more interactive and visual elements into the human-software interface; promoting justice by minimizing biases in data sets; and defining accountability and legal liability. The objective of these advances is to create a clinician-AI health care system that is greater than the sum of its parts.

Jun 21, 2022
