
Awake in Academia

close up of fountain with Mission in the background

On Academics Addressing the Impact of AI on Society

Irina Raicu

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.

On November 14, The New York Times published an op-ed by Cathy O’Neil (the author of a book titled Weapons of Math Destruction, which should be required reading for anyone living in a democratic society whose workings depend on an informed citizenry). The piece was titled “The Ivory Tower Can’t Keep Ignoring Tech.” The title was provocative, and somewhat bewildering, at least to some of us who read and deal with academics on a regular basis: Had academia been ignoring tech? Not in the circles we knew.

O’Neil argues that

lawmakers desperately need [the impact of AI on society] explained to them in an unbiased way so they can appropriately regulate, and tech companies need to be held accountable for their influence over all elements of our lives. But academics have been asleep at the wheel, leaving the responsibility for this education to well-paid lobbyists and employees who’ve abandoned the academy.

Later in the piece, she adds that

academics who do get close to the big companies in terms of technique get quickly plucked out of academia to work for them, with much higher salaries to boot. That means professors working in computer science and robotics departments--or law schools--often find themselves in situations in which positing any skeptical message about technology could present a professional conflict of interest.

Despite this claimed potential conflict of interest, many of us who read her article could immediately think of quite a few professors who had made it their work not only to research and posit “skeptical” messages about technology, but also to build tools that exposed problematic aspects of tech, train future engineers and data scientists in addressing ethical issues, speak out at public events, and discuss their concerns with, say, the journalists whom O’Neil credits with being “our main source of information on the downside of bad technology.” Journalists have been extremely important in this quest—and they often quote academics in their articles.

After claiming that “academics have been asleep at the wheel,” O’Neil argues for “one solution for the short term. We urgently need an academic institute focused on algorithmic accountability.”

Ironically, the day after the publication of her piece, the AI Now Institute at NYU had its official launch. But the institute had been around for some time—in fact, in October, it had issued its second annual report, with recommendations for regulators, corporations, and academics. Data & Society, another organization that addresses, in a multidisciplinary fashion, some of the issues mentioned by O’Neil, had been around even longer. More recently, the Partnership on AI had brought together corporations, civil rights groups, and—yes—academic centers to address the impact of AI on society. (The Ethics Center is a member of the Partnership.) Academics are also deeply involved in the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (and its publication “Ethically Aligned Design”).

In October, the IEEE’s TechEthics conference program included panels titled “The Social and Personal Impacts of AI” and “Influencing the Next Generation of Engineers through Ethics Education.” Academics from Virginia Tech, North Carolina State University, the University of Virginia, and U.C. Berkeley participated in those panels. As for regulators: back in August, an article on Nextgov.com was titled “Agencies Should Watch Out for Unethical AI, Report Warns.” The report cited was issued by Harvard’s Ash Center for Democratic Governance and Innovation. Academia, in other words.

This is not intended to be an exhaustive list of the efforts taking place in various institutions; it is meant simply to offer some examples of work that went unmentioned in the call for academia to step up.

And of course, for years (actually decades), various academics had taught and researched and written about engineering ethics and technology ethics more broadly. None of them made an appearance in the New York Times article. Were they “asleep at the wheel”?

On the day O’Neil’s piece was published, a number of them responded on Twitter and other social media. One example: Nick Seaver, an assistant professor at Tufts University, tweeted “I am late to the party on this off-base op-ed because I am literally traveling for a workshop on the critical study of algorithms, which is meeting for the third year in a row, having run a summer school last year.” A day later, academics who are part of the PERVADE team (“NSF-Funded Pervasive Data Ethics for Computational Research”) issued a collective response titled “We’re Awake—But We’re Not At the Wheel.” They agreed with some of the points and critiques raised by O’Neil, but added,

none of this means academics aren’t trying. Indeed, some of the very solutions O’Neil advocates, including comprehensive ethical education for future engineers and data scientists, are well underway in Information Schools, computer science programs, and statistics departments across the country. Undergraduate and graduate programs in each of our home institutions absolutely worry about “how the big data pie gets made,” to use O’Neil’s words.

And today’s students care, too. Partially because of the public acknowledgment brought up by O’Neil’s book…, students in our classrooms are eagerly discussing biased algorithms, big data surveillance, and tech ethics.

This is the case at Santa Clara University, too—in a variety of undergraduate classes, as well as in the graduate schools of engineering, law, and business.

Many (if not all?) academics who work on issues of algorithmic fairness and other AI-related problems respect and acknowledge O’Neil’s work. But O’Neil’s op-ed ended with these lines: “The good news is that a lot could be explained and clarified by professional and uncompromised thinkers who are protected within the walls of academia with freedom of academic inquiry and expression. If only they would scrutinize the big tech firms rather than stand by waiting to be hired.”

In fact, the good news is that, increasingly, and for some time now, many have been doing just that.

Some statements, like some algorithms, are applied too broadly—and unfairly.

 

Photo by Russ Morris, cropped, used under a Creative Commons license.

Nov 17, 2017
