
Adding "E" to "AI"

abstract image for AI ethics

The expanding discussion of ethics and artificial intelligence

Irina Raicu

As part of an ongoing collaboration between the ethics center and The Tech Museum of Innovation, my colleague Brian Green, assistant director of our Campus Ethics program, recently wrote a brief introduction to the ethics of social robots and artificial intelligence. That document, along with links to other materials about ethics and AI, is collected on the web page for the museum’s “Social Robots” exhibit. Brian’s essay, subtitled “How to shape the future of AI for the good,” is meant as a broad overview of the issue, directed at a very diverse audience (including a variety of ages). It uses the center’s “Framework for Ethical Decision Making” to examine the ethical issues presented by AI through five ethical lenses: utilitarianism, rights, fairness (or justice), common good, and virtue ethics. In the process, it asks a number of key questions—among them, “How might robotics and AI promote or endanger the common good?”

At the other end of the spectrum of recent efforts to address the ethics of AI, aimed at an expert audience, is a report titled “Ethically Aligned Design”—released last month by the IEEE (the Institute of Electrical and Electronics Engineers) as part of its Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. As its authors note, the report “represents the collective input of over one hundred global thought leaders in the fields of Artificial Intelligence, law and ethics, philosophy, and policy from the realms of academia, science, and the government and corporate sectors” (among them, Santa Clara University professor and Ethics Center scholar Shannon Vallor). At 138 pages, the report includes sections such as “Embedding Values into Autonomous Intelligent Systems,” “Methodologies to Guide Ethical Research and Design,” and “Personal Data and Individual Access Control.” It invokes eudaimonia and speaks of prioritizing “the increase of human wellbeing as our metric for progress in the algorithmic age.” The report was released as “Version 1,” and its authors invite public input: the deadline for submitting public comments is March 6, and details are included in the submission guidelines.

New industry efforts are also tackling the issue of AI ethics: last September, for example, The New York Times reported on discussions among Alphabet, Amazon, Facebook, IBM, and Microsoft aimed at creating a new industry group. “The specifics of what the industry group will do or say—even its name—have yet to be hashed out,” wrote the Times. “But the basic intention is clear: to ensure that A.I. research is focused on benefiting people, not hurting them…” Later that month, an article in The Guardian added that the group would be called the “Partnership on Artificial Intelligence to Benefit People and Society,” but noted that Apple and OpenAI (a nonprofit funded in part by Tesla’s Elon Musk) had not joined the group. The Guardian article also pointed out that in 2014, when Google acquired the company DeepMind, “part of the acquisition deal saw Google promise to form an AI ethics board to ensure the new technology was not abused. Two-and-a-half years on, however, and it is unclear whether the board has ever met, or even who is on it. DeepMind has regularly declined to comment on it...” It remains to be seen whether the new industry partnership will be more inclined to share information about its efforts with the public, and even to invite public comments, as the IEEE’s Global Initiative has.

In the meantime, there are also efforts to increase the coverage of ethics in the training of the engineers who will create the next iterations of AI. At Santa Clara University, for example, the School of Engineering offers a number of ethics courses—some of them taught by Brian Green, the colleague mentioned at the beginning of this post. In addition, the ethics center’s website proudly hosts the “Introduction to Software Engineering Ethics” module written by Vallor, who also teaches in the School of Engineering; this resource, free and available to the public, has been used to date at more than 70 colleges and universities across the U.S. and in 16 other countries around the world.

Here’s hoping that some of the students who encounter the “Social Robots” exhibit at The Tech in San Jose will be inspired to become engineers, and that the engineering students who have tackled the case studies and other materials in the module will soon be involved in worldwide industry and professional-organization efforts to add the “E” to “AI.”

 

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.

Photo by readerwalker, used without modification under a Creative Commons license.

Jan 6, 2017
