Don Heider (@donheider) is the executive director of the Markkula Center for Applied Ethics at Santa Clara University. Views are his own.
News media in recent days have been filled with stories and opinions about generative artificial intelligence (AI), spurred primarily by the release of ChatGPT and several programs that create artistic images.
What is generative AI? The term describes algorithms that can be used to create text, images, audio, code, and videos. The biggest concerns raised over generative AI are about ChatGPT, a program that can write a news story or college essay in just a few seconds. There’s also considerable concern over programs such as DALL-E 2 and Midjourney, two of a number of image-creating programs now available.
Simply asking whether such programs are ethical or not is beside the point. As with any technology or tool, the answer depends on use and context. What may be more helpful is a set of questions that helps us determine for ourselves what the ethical issues are with each program.
When it comes to ChatGPT, the most obvious ethical question seems to be: Is it OK to have a computer program write an essay or article and then pass it off as your own work?
As CNET has learned recently, the answer for most of us is an emphatic no. But there are other issues as well. ChatGPT specifically, and generative AI in general, is designed to emulate human thinking. One of the traits it emulates is inference. ChatGPT takes in a lot of information and makes inferences based upon that information. What it does not do is check the accuracy of those inferences. Preliminary research has shown that most generative models are truthful only 25% of the time. ChatGPT says on its opening page that the program has “limited knowledge of world events after 2021.” This means nothing that has happened in the past 14 months would be reflected in whatever the program writes. So the first ethical questions would be: When using text-producing AI, are you disclosing the authorship of the text? And are you fact-checking the text being produced?
Another big issue with generative AI is bias. Algorithms are generally written by human beings, so even in writing code, the beliefs, values, and assumptions of that human are inscribed in the very way the code is written and structured. On top of that, AI as a learning machine is trained by first having large amounts of data fed into it, and that data, say thousands of articles and essays, is often biased. Thus, the AI picks up and replicates those biases. A third question, then, is: Are you considering the potential bias of any generative AI you are using? And how can you account for that bias?
For the image-creating AI programs, thousands and thousands of images are used to train the program, so there’s a question of bias from those images, but another crucial question is where did those images come from? In most cases, copyrighted images have been used to train the AI, thus even if the images the program produces are unique, they are derivative of other artists’ work (this would apply to text-producing programs as well). What credit and/or compensation is due the creators whose work was used to train the AI? Do such programs pose a threat to the future employment of artists and designers?
Generative AI has also been used to produce videos depicting political leaders or celebrities saying things they never really said. These videos raise multiple ethical concerns, such as: Have the politicians or celebrities given permission for their images to be used? Thus far, the people who have produced these deepfake videos have claimed the protection of parody. But when such videos are used to persuade or mislead, the potential for abuse is more apparent. The technology can also be used to defame a famous person by placing their face on other actors’ bodies in pornography. When is it ethical to use video of people without their consent?
Generative AI is here now, and already being used widely. It will improve and change over time, raising new and interesting questions. For the time being, I have tried to pose a few of the most basic ethical questions that apply to this new technology. As we move forward, we also offer up our framework for ethical decision making, which can help you analyze almost any situation, technology, or scenario.