Should we trust algorithmic decision-making?
Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.
Santa Clara University professor Shannon Vallor, one of the Ethics Center’s Faculty Scholars, recently published an article titled “Artificial Intelligence and Public Trust.” Addressing the way in which our lives are increasingly suffused with—and determined by—algorithmic decision-making, Vallor writes:
[t]his creates an unprecedented ethical imperative for AI researchers, designers, users, and companies and institutions that employ them. Artificial intelligence is immensely powerful, but it is not magic. It does not run without human intelligence—including, even chiefly, our moral intelligence. The future of an AI-driven world depends less upon new breakthroughs in machine learning algorithms and big data than it does upon the choices that humans make in how AI gets integrated into our daily lives and institutions and how its risks and effects are managed.
She adds that “every AI-enabled decision process is still a human responsibility, all the way down to its deepest, darkest, most inscrutable layers.” The article concludes with suggestions for what can be done to earn the public’s trust in artificial intelligence.
Read it all here.
Illustration by DeeAshley, cropped, used under a Creative Commons license.