
Robot Morality

Can a machine have a conscience?

Brian Green

As part of the Markkula Center's yearlong series of talks on conscience, George Lucas of the Naval Postgraduate School in Monterey, formerly of the US Naval Academy, and previously a professor of philosophy at Santa Clara University, came to campus to discuss the ethics of giving autonomous military robots the authority to kill. Here is a brief summary of what he said.

Lucas answered the title question of the talk "Can a machine have a conscience?" with a definite "no." But Lucas then asked what he thinks is a much better question: "Can a robot be designed so as to comply with the laws of war and the demands of morality?" To this more nuanced question Lucas answered "maybe."

Beginning with a discussion of the current use of military drones, which are piloted by humans and can fire only with human authorization, Lucas moved on to what he thought would be more likely scenarios for autonomous robots. He proposed a robotic submarine on an intelligence, surveillance, and reconnaissance (ISR) mission encountering warships that might attack it. What would it do?

Lucas emphasized that even human commanders in military situations often must contact leadership in order to know how to react or to get authorization to use force. Military robots in the same situations would likewise need to seek authorization before acting. It is highly unlikely that robots would be given any more authority than human commanders in the field have.

That simple situation, having to seek authorization before using force, is much like what one might encounter on the ground in Afghanistan. But Lucas emphasized that while philosophers love difficult and borderline cases, engineers and scientists usually start with what is easy and only later move toward what is hard. Autonomous robots will be no different: they cannot yet technically handle situations like ground fighting in Afghanistan, so they will not be used there. They will instead be used where they are appropriate, in situations they can handle, such as reconnaissance.

But could a robot someday reach the level of an Army private, equal to a human in judging the legality and morality of military actions? Perhaps. Humans already make mistakes in war. Robots are more harm-tolerant than humans, so they can afford the risk of not responding to aggression. Robots would also be well suited to patrolling or defending simple areas, such as off-limits zones where people are generally not allowed to go.

Finally, Lucas asked whether autonomous robots might someday actually be better than humans at judging the moral and legal aspects of wartime situations. If that ever proves to be the case, he answered, then it would seem we should follow that course. After all, robots are never angry, racist, or politically minded; they just follow rules, and if the rules are sophisticated enough, they might actually be more moral at war than humans.

Lucas tempered this technically optimistic viewpoint in one of his last responses to audience questions, reiterating that autonomous military robots with lethal capabilities do not yet exist, that they are not allowed under current Department of Defense guidelines (robots may be either autonomous or lethal, but not both), and that strong artificial intelligences are likely to be unpredictable. He certainly did not want to give his Roomba vacuum a pistol, and nobody else wants that either.

Brian Green is assistant director of Campus Ethics Programs at the Markkula Center for Applied Ethics.

Oct 1, 2013