How to Control AI
Brian Patrick Green
Brian Patrick Green is the director of Technology Ethics at the Markkula Center for Applied Ethics.
On Tuesday, February 13th, Santa Clara University hosted Wendell Wallach, scholar at Yale University’s Interdisciplinary Center for Bioethics, senior adviser at The Hastings Center, adviser to the United Nations and the World Economic Forum, and the 2018 Austin J. Fagothey, S.J., Visiting Chair in the SCU Department of Philosophy. The event, titled “Minimizing AI Risks: An Agile Ethical/Legal Model for the International and National Governance of AI,” was co-sponsored by the Department of Philosophy, the High Tech Law Institute, the AI Club at Santa Clara Law, and the Markkula Center for Applied Ethics.
Will artificial intelligence destroy us?
This concerning question has been on the minds of many public figures in the last few years, including Elon Musk and Stephen Hawking, while others such as Bill Gates and Vladimir Putin have been more ambivalent, and still others, like Mark Zuckerberg, are downright optimistic.
Wendell Wallach, world-renowned specialist in the ethics of technology and author of the books A Dangerous Master and Moral Machines, understands the risks raised by those concerned about AI. Many of those discussing AI seem to think it will bring Heaven or Hell, but Wallach does not believe that the future will happen on its own. Instead, what AI brings us will depend, of course, on what we ask it to bring us. Wallach argues that in order to control this technology, we should begin an international governance project that will help direct AI toward good uses and away from evil ones.
Wallach began his talk with a history of the discussion of AI and some of the naïveté with which early thinkers approached the subject. In 1955, some early researchers thought AI would be figured out within a decade. This obviously did not occur. Now, more than sixty years later, we can finally say that AI is experiencing rapid growth, but it still has very far to go before it can emulate human skills, even in such seemingly basic areas as speech and vision.
Along with the bold predictions of AI past, there have also been predictions of superintelligence – a computer intelligence so great that it reduces humanity to cognitive irrelevance. Wallach is skeptical of such predictions because contemporary robots, already so much more sophisticated than their forebears, are still very primitive. He stated that there is “a lot of room for improvement” and that researchers might not get past some barriers with current techniques.
Nevertheless, AI does not need superintelligence in order to cause humanity great problems. Lethal Autonomous Weapon Systems (LAWS or “Killer Robots”) are being developed which threaten to automate the most important life-and-death decisions, placing them beyond human control. Wallach argued that such weapons should be banned so that anyone who develops them would be in violation of international law.
Thinking philosophically, Wallach then pointed out that LAWS are really just a special case of autonomous systems in general; all autonomous systems undermine human agency. And while having human agency replaced in a factory or in a self-driving car might seem fine, in cases of greater importance human decision-makers should remain more active; otherwise we make ourselves irrelevant (among other things).
Unemployment is another foreseeable side-effect of automation, and while in the past unemployment due to technological advance has always been temporary, many people think this time will be different. Wallach sees this as a problem of political economy more than just one of technology, but in any case it will have to be addressed or mass unemployment may disrupt entire nations.
Transparency is another area in which contemporary AI technology presents problems. Deep neural networks are “black boxes”: inputs go in and outputs come out, but what happens in between is opaque. In order to have safe AI, we need forensic capability, so that we can figure out what went wrong after accidents, or prevent them in the first place. Biased data is another serious source of unintended consequences in AI: the algorithm might be fine, but if the data going in is garbage, garbage answers will come out.
All of these issues make clear the need for governance of AI. This governance should be agile, adaptive, credible, in good faith, participatory, comprehensive, coordinated and, importantly, international. The “piecemeal” approach is inadequate for the new threats of emerging technologies, which are global and potentially catastrophic. Wallach started a group called Building Agile Governance for AI & Robotics (BGI4AI), which he hopes can help promote the formation of organizations for the international governance of emerging technologies.
During the question-and-answer period, Wallach received a question concerning the robot Sophia, which was recently given Saudi Arabian citizenship. Wallach found this to be a bad publicity stunt. Asked about non-lethal autonomous weapons, Wallach replied that the boundary between lethal and non-lethal is hard to define, as is the boundary between autonomous and non-autonomous. The main thing, he said, is to ban the clear examples of lethal autonomous weapons, and to make them not just illegal, but morally unacceptable. Along those lines, in the next question Wallach was asked how we could expect robots to behave well when we can’t get the same good behavior from humans. Wallach’s reply was that we need to try, achieve all the good that we can, and not let the hard cases prevent us from addressing the easy ones.
For the last few questions, Wallach continued to emphasize the pragmatic approach to governance and ethics, arguing that any movement toward governance is better than none. He expressed fear that we may already be too late, but said even that fear should not stop us from acting to do the right thing. He argued that we need to address weapons technologies in particular, because warfare poses the worst danger from these new and exceedingly powerful technologies. When asked about poverty, Wallach said he hoped this technology will be shared widely (there are initiatives to promote this). Ultimately, however, technology cannot “solve” politics; it is politics that will ultimately promote the well-being of the poor.