Markkula Center for Applied Ethics

Should AI Require Societal Informed Consent?

Silhouettes of people gathered in discussion. Photo by geralt_Pixabay.

Brian Green

Brian Green is the director of technology ethics at the Markkula Center for Applied Ethics and co-author of “Ethics in the Age of Disruptive Technologies: An Operational Roadmap” (The ITEC Handbook). Views are his own.

This article, "Should AI Require Societal Informed Consent?" originally appeared on TDWI.org in November 2023. Copyright 2023 by TDWI, a division of 1105 Media, Inc. Reprinted by permission of TDWI. Visit TDWI.org for more information.


Nobody asks bystanders to sign a consent form before they get hit by a self-driving car. The car just hits them. The driver had to sign consent forms to purchase the car, letting the corporation off the hook for much of what goes wrong. However, the driver -- perhaps the most likely person to be killed by it -- never secures the consent of all the people exposed to that vehicle; these innocent bystanders get no say in whether they are exposed to possible harm.

Informed consent is a core concept holding together the rules-based international order. If you sign a contract, you are legally bound to its terms. If you undergo a medical procedure, you read the forms and sign your name, absolving medical practitioners from liability. If you get an app from the App Store, you accept an end-user license agreement that protects the app developer, not you.

However, if you create a new piece of technology that might endanger, harm, or kill people, there is no consent form for the public to sign. We accept that risk despite the logical inconsistency. Why?

The concept of societal informed consent has been discussed in engineering ethics literature for more than a decade, and yet the idea has not found its way into society, where the average person goes about their day assuming that technology is generally helpful and not too risky.

In most cases, technology is generally helpful and not too risky, but not in all. As artificial intelligence grows more powerful and is applied to new fields (many of which may be inappropriate), these exceptional cases will multiply. How will technology producers know when their technologies are not wanted if they never ask the public?

Giving a detailed consent form to everyone in the U.S., for example, is incredibly impractical. One of the characteristics of a representative democracy is that -- at least in theory -- our elected officials are looking out for the well-being of the public. Certainly, we can think of innumerable issues where the government is already doing this work: foreign policy, education, crime, and so on. 

It is time for the government and the public to have a new conversation, one about technology -- specifically artificial intelligence. In the past, we always gave technology the benefit of the doubt; tech was “innocent until proven guilty,” and a long-familiar phrase in and around Silicon Valley has been “it’s better to ask forgiveness, not permission.” We no longer live in that world.

Interestingly, in light of cases such as Theranos, FTX, and Silicon Valley Bank, it is tech leaders themselves who are pushing this conversation about risk, with many focusing on the long-term “runaway” AI risk that so many movies have depicted. The government should certainly act to figure out how to avoid these doomsday scenarios; society does not consent to them, and the government clearly ought to try to prevent such risks.

Short of the doomsday scenario, though, there are other technological changes to which people may or may not consent. Should we, as a society, let AI in social media act as a weapon of social-psychological mass destruction, spreading misinformation, propaganda, and more? Should we, as a society, use AI in cars, knowing that occasionally they will kill bystanders? Should we, as a society, use AI in medicine, knowing that it may allow patients to die? If medical professionals should ask patients for consent to the use of AI in some cases but not others, how do we decide which ones?

Someone will decide, and most likely it will be the technology producer’s corporate lawyers. They will assuredly not have the best interests of the public at heart as they write consent forms for users (not everyone else) that place all risk upon the user and none upon the technology producer. Bystanders be damned. The rest of society, and its conception of what the world should properly look like, never enters the realm of consideration.

Society needs to have a conversation about technology. We are already having this conversation in fragmented form, in many localities, but it needs to be a society-wide conversation because all of society is at stake. No one gets to live in peace, unaffected by these new technologies. None of us can escape, whether the threat is a fever-dream doomsday scenario, a neighbor radicalized by social media, or a self-driving car hitting a pedestrian.

Let’s have this conversation as a society and work together to decide what kind of future we all want.

Feb 13, 2024