
Addressing the Ethics of Artificial Intelligence

MCAE has joined the Partnership on AI.

Irina Raicu

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.

The Markkula Center for Applied Ethics has joined the Partnership on AI to Benefit People and Society. The announcement came last week, on the day the Center hosted a panel discussion titled “AI: Ethical Challenges and a Fast-Approaching Future.” In fact, Assistant Director of Campus Ethics Brian Green, who was one of the panelists, participated in that event via the internet, since he was in Berlin for the partnership’s new members’ meeting.

The members of the Partnership on AI now include corporations (like Microsoft, Google, Apple, Facebook, Amazon, and more), advocacy and civil society groups (the ACLU, Amnesty International, the Electronic Frontier Foundation, and others), professional groups such as the ACM (Association for Computing Machinery), and academic centers such as MCAE, the Oxford Internet Institute, and the Center for Information Technology Policy at Princeton. As Brian Green points out, “the driving force bringing all the very diverse partners together” is an ethical ideal: “the hope that AI can be used for great good, and that AI's evil uses can be avoided.”

There is a lot of work to be done toward those goals. To that end, Brian’s presentation at the recent panel addressed nine areas of ethical concern surrounding the development and implementation of AI:

  • Technical Safety (failure, hacking, etc.)
  • Transparency and Privacy
  • Malicious Use & Capacity for Evil
  • Beneficial Use & Capacity for Good
  • Bias (in data, training sets, etc.)
  • Unemployment / Lack of Purpose & Meaning
  • Growing Socio-Economic Inequality
  • Moral De-Skilling & Debility
  • AI Personhood / “Robot Rights”

When asked, during the panel’s Q&A session, which of those areas he would prioritize, Brian highlighted moral de-skilling, a subject also addressed at length by our colleague Shannon Vallor in her book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting.

Many questions from the audience also focused on the more immediate concerns of AI-driven unemployment and the related increase in socio-economic inequality. A recent article in The New Yorker notes the reluctance of many AI developers and businesses to engage with these issues. It quotes a researcher asking “What is something that people do now that robots might do?” and then points out that “[c]orporate executives want to know the answer to that question, but they seldom ask it publicly. Automation is a topic that gets treated with enormous diplomacy, both in Europe and in the United States.” The article also quotes the CEO of a company that automates warehouse jobs, noting that he “was reluctant to talk about [his company’s] customers, who aren’t keen to draw attention to their interest in nearly human-free warehouse systems. ‘There is some sensitivity, given our… political situation,’ he said. ‘It’s just a reality of the times that we live in.’”

The reality of the times we live in, however, is that we cannot keep treating AI-related concerns with “enormous diplomacy” and “sensitivity,” at least not if those terms are euphemisms for keeping people in the dark about the implications and consequences of automation, both good and bad.

Another new member of the Partnership on AI, the AI Now Institute, recently released its 2017 report, which leads with ten concrete recommendations aimed at ensuring “that the benefits of AI will be shared broadly, and that risk can be identified and mitigated.” As the authors of the report explain,

AI companies promise that the technologies they create can automate the toil of repetitive work, identify subtle behavioral patterns and much more. However, the analysis and understanding of Artificial Intelligence should not be limited to its technical capabilities. The design and implementation of this next generation of computational tools presents deep normative and ethical challenges for our existing social, economic, and political relationships and institutions, and these changes are already under way. … We must ask how broader phenomena like widening inequality, an intensification of concentrated geopolitical power and populist political movements will shape and be shaped by the development and application of AI technologies.

In China, a CEO proudly shows a New Yorker journalist a PowerPoint slide titled “The future: ‘Dark Factory’” and tells her, “You don’t need workers, you turn off the lights… Only when an American journalist comes in we turn on the light.” In Silicon Valley, a business professor tells his audience that AI will be pervasive, “like electricity.” In Puerto Rico, though, weeks after Hurricane Maria, most people are living in darkness because electricity has yet to be restored.

If it is to live up to its name, the Partnership on AI to Benefit People and Society needs to shine a light on all of the issues listed in Brian Green’s recent presentation at Santa Clara University. It needs to be a place where those issues are addressed freely, with “enormous” clarity and bluntness.

Illustration from pixabay.com, used under a Creative Commons license.

Nov 6, 2017