
ChatGPT Has Revived Interest in Ethics


The Irony Is That We Haven’t Been Holding Humans to the Same Standard

Ann Skeet

Sansert/AdobeStock

Ann Skeet (@leaderethics) is the senior director of leadership ethics at the Markkula Center for Applied Ethics and co-author of the Center’s Institute for Technology, Ethics and Culture (ITEC) handbook, Ethics in the Age of Disruptive Technologies: An Operational Roadmap. Views are her own.

This article originally appeared on Fortune.com as "ChatGPT Has Revived Interest in Ethics. The Irony Is That We Haven’t Been Holding Humans to the Same Standard."

Five years ago, over lunch in Silicon Valley with a well-respected and established corporate board member who continues to serve on multiple boards today, we spoke about putting ethics on the board’s agenda. He told me he would be laughed out of the boardroom for doing so and scolded for wasting everyone’s time. But since the launch of ChatGPT, ethics have taken center stage in the debates around artificial intelligence (AI). What a difference a chatbot makes!

These days, our news feeds offer a steady stream of AI-related headlines, whether about the capabilities of this powerful, swiftly developing technology or the drama surrounding the companies building it. As with a bad traffic accident, we cannot look away. Ethicists observe that a great experiment is being run on society without its consent. Many concerns about AI’s harmful effects have been raised, including its significant negative impact on the environment. And there is plenty of reporting on its amazing upside potential.

I’m not sure we’ve appreciated enough how AI has brought ethics into the spotlight, and with it, leadership accountability.

The AI accountability gap

Ironically, people were not that interested in talking about human ethics for a long time, but they are certainly interested in discussing machine ethics now. This is not to say that the launch of ChatGPT alone put ethics on the AI agenda. Solid work in AI ethics has been happening for the past several years, inside companies and in the many civil society organizations that have taken up AI ethics or worked to advance it. But ChatGPT made the spotlight brighter and the push to create industry standards stronger.

Engineers and executives alike have been taken with the problem of alignment: creating artificial intelligence that not only responds to queries as a human would but also aligns with the moral values of its creators. A set of best practices began to emerge even before regulation kicked in, and the pace of regulatory development is accelerating.

Among those best practices is the idea that decisions made by AI should be explainable. In a corporate boardroom training session on AI ethics that I was recently part of, one member observed that people are now setting higher standards for machines than for human beings, many of whom never explain how, say, a hiring decision is made, nor are they even asked to.

This is because there is an accountability gap in AI that makes human beings uncomfortable. If a human does something awful, there are typically consequences and a rule of law to govern minimum acceptable behavior by people. But how do we hold machines to account?

The response, thus far, seems to be finding humans to hold accountable when the machines do something we find inherently repulsive.

Ethics are no longer a laughing matter

Amid the recent drama at OpenAI, which appears to have been linked to AI safety concerns, another visionary Silicon Valley leader, Kyle Vogt, stepped down from Cruise, the self-driving car company he founded 10 years ago. Vogt resigned less than a month after Cruise suspended all of its autonomous driving operations following a string of traffic mishaps.

After 12 cars were involved in traffic incidents within a short time frame, the company’s operations ground to a halt and its CEO resigned. That is a relatively low number of incidents to trigger such dramatic consequences, and it suggests that a very tight industry standard is emerging in the self-driving vehicle space, one far more stringent than that of the conventional automotive industry.

Corporate leaders need to settle in for a long stretch of increased accountability to offset the uncertainty that accompanies new technologies as powerful–and potentially lethal–as AI. We are now operating in an era where ethics are part of the conversation and certain AI-related errors will not be tolerated.

In Silicon Valley, a rift has emerged between those who want to develop and adopt AI quickly and those who want to move more judiciously. Some have tried to frame this as a binary choice: innovation or safety.

However, the consuming public seems to be asking for that which ethics has always promised: human flourishing. It is not unreasonable for people to want the advantages of a new technology delivered within a certain set of easily identifiable standards. For executives and board members, ethics are no longer a laughing matter.

Corporate executives and board members therefore need to be sure that the companies they guide and oversee are using ethics to inform decisions. Research has already identified the conditions that make it more likely that ethics will be used in companies. It is up to business leaders to make sure those conditions exist and, where they are lacking, to create them.

Jan 22, 2024