The Revolution of Artificial Intelligence within Health Care

Diagram of human brain overlaid with technology and computer imagery.

Clarissa Silvers

Clarissa Silvers is majoring in bioengineering with a concentration in biomolecular engineering and a minor in design, innovation, and entrepreneurship. She is also a 2021-22 health care ethics intern at the Markkula Center for Applied Ethics. Views are her own.

 

Artificial intelligence (AI) is revolutionizing health care, from imaging and diagnostics to workflow optimization in hospitals. Various clinical and health care applications have surfaced to help assess patients’ symptoms and determine a diagnosis. Economists predict tremendous growth in AI applications within the health care industry in the coming years; one analysis projected that the market would grow more than 10-fold between 2014 and 2021. But this growth is coupled with ethical challenges, including informed consent, algorithmic bias, and data privacy.

AI in health care applications complicates informed consent procedures. To what extent are providers obligated to educate their patients about the intricacies of AI? Do providers need to inform patients about what type of machine learning the system uses, what data it was trained on, and what biases that data might introduce? Does the provider have to notify the patient that AI is being used at all? These are all questions that must be considered when using AI in a clinical setting.

It is especially difficult to inform patients when many of these AI programs rely on “black-box” algorithms, so called because even their developers cannot explain how the software reaches its conclusions. When medical professionals themselves do not understand how an AI program arrived at its decision, meaningful informed consent becomes even harder to obtain.

Secondly, AI algorithms must be trained on data sets that are trustworthy and representative, so that the program’s decisions do not facilitate discrimination. Without solid training data, the system may encode biases that yield detrimental outcomes in a hospital setting; the slogan “garbage in, garbage out” has great applicability here. AI makers must minimize bias at every stage of development by considering both the quality and the diversity of their data. One example is an AI-based clinical decision-support software that helps providers choose the best treatment for patients with skin cancer. The algorithm, however, was trained predominantly on white patients and yielded less accurate, and at times outright inaccurate, results for other populations, especially people of color. Flawed programs of this sort perpetuate discrimination within the health care system against those who are already marginalized.
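For AI makers, “considering the diversity of the dataset” can be made concrete with a simple audit. The Python sketch below is a minimal illustration, not drawn from any actual product: the column names (fitzpatrick_type, true_label, predicted_label) and the toy data are hypothetical, but the two checks, group representation and per-group accuracy, are the kind of measurements that can surface the skin-cancer bias described above before a system reaches the clinic.

```python
import pandas as pd

# Toy stand-in for a dermatology training set (hypothetical columns):
# each row is one labeled image, with the patient's Fitzpatrick skin
# type and the model's prediction alongside the ground-truth label.
df = pd.DataFrame({
    "fitzpatrick_type": ["I", "I", "II", "II", "V", "VI"],
    "true_label":       ["malignant", "benign", "benign",
                         "malignant", "malignant", "benign"],
    "predicted_label":  ["malignant", "benign", "benign",
                         "malignant", "benign", "malignant"],
})

# 1. Representation audit: share of each skin-tone group in the data.
print(df["fitzpatrick_type"].value_counts(normalize=True))

# 2. Outcome audit: accuracy computed per group, since a single
#    aggregate score can hide poor performance on under-represented groups.
per_group_accuracy = (
    df.assign(correct=df["predicted_label"] == df["true_label"])
      .groupby("fitzpatrick_type")["correct"]
      .mean()
)
print(per_group_accuracy)
```

In this toy data, the well-represented groups (types I and II) are classified perfectly while the sparse groups (types V and VI) are misclassified entirely, exactly the pattern an aggregate accuracy number would conceal.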

Lastly, this data collection raises privacy concerns, which are especially acute for health care records. If doctor-patient confidentiality is eroded, confidence in AI may erode with it. A leak of one’s health care data could affect health insurance premiums, job opportunities, or even personal relationships. This is especially concerning with AI health apps, where patient data may be shared not only with doctors but also with family and friends, who, unlike doctors, are not bound by confidentiality. Likewise, if a patient decides to withdraw their data after it has been analyzed by AI software, it may be impossible to extract that data without destroying the algorithm entirely. These are only some of the many data privacy issues that come with the use of AI in health care.

There is a lot of promise in AI for clinical use, especially in imaging and diagnostics. The Food and Drug Administration (FDA) has already approved around 40 AI-based medical devices. In April 2018, IDx-DR became the first such product to receive FDA authorization: an AI diagnostic system that detects certain diabetes-related eye problems without the need for a human to interpret the results. This is only one of many new products.

One forecast estimates that AI could contribute up to €13.33 trillion to the worldwide economy by 2030, with the largest gains in China and North America. This immense expected growth only underscores the importance of these ethical issues. Informed consent, algorithmic fairness, and strong data protection are key factors that must be considered when building an AI algorithm for health care. Without addressing these issues, mistrust of AI will gain momentum, and the great potential and usefulness of AI may fall through the cracks.

 

Jun 1, 2022