Markkula Center for Applied Ethics

Social Robots, AI, and Ethics

The RP-7 remote presence robotic system at St. Joseph Mercy Hospital in Pontiac, Mich. makes the hospital's specialists available around the clock to any hospital in the state. (AP Photo/Paul Sancya)


How to shape the future of AI for the good

Brian Green

The world is rapidly developing robotic and artificial intelligence (AI) technologies. These technologies offer enormous potential benefits, yet they also bring drawbacks and dangers. Using the Ethics Center’s Framework for Ethical Decision Making, we can consider some of the ethical issues involved with robots and AI.

The Utilitarian Approach

Utilitarianism is a form of moral reasoning that emphasizes the consequences of actions. Typically it tries to maximize happiness and minimize suffering, though there are other ways to apply utilitarian evaluation, such as cost-benefit analysis.

On the benefit side of the equation, robots and AI excel at work that is boring, dirty, or dangerous, so employing them often improves the cost-benefit calculus. For example, many assembly lines around the world have replaced human workers with robots, which often enhances worker safety and helps avoid repetitive motion injuries. Over time these robot replacements may also save factories money, raising corporate profits or lowering the prices of goods produced. Other areas where robots and AI might improve the cost-benefit analysis include medical diagnostics, “big data” analytics, robots that help care for the elderly, and lethal autonomous weapons systems in war.

On the cost side of the equation, each of the above examples has a downside. Robots and AI threaten to take away jobs, separate us from meaningful work, separate us from understanding the data we analyze, leave the elderly isolated from human contact, and ultimately even threaten our lives, perhaps driving humanity extinct. These risks are significant and deserve serious consideration before these technologies are implemented.

Is it overall beneficial to use robots and AI in these ways? Or do the costs outweigh the benefits? Can we choose to promote some uses of robotics and AI technology while limiting others?

The Rights Approach

Another ethical approach considers rights and duties. How could robots and AI affect human rights? As one example, AI might be assigned to monitor activity related to human rights abuses: satellite photos, social media postings, news stories, purchases, and so on. If certain types of behavior correlate with human rights abuses in ways that humans haven’t previously noticed, or if certain patterns form before abuses happen, then AI might give better diagnostic or even predictive power to agencies wishing to protect human rights.

Conversely, robots might become abusers of human rights if they are not programmed to respect humans. Currently no nation fields lethal autonomous weapons systems (that we know of), though there are lethal drones (which are not autonomous) and autonomous drones (without weapons). If these capabilities were merged, however, such systems could violate human rights directly.

How can we best use robots and AI to protect human rights rather than harm them? In what ways can we promote a future in which these technologies are used for better ends, and not worse ones?

The Fairness or Justice Approach

Fairness and justice are concepts centered on giving people what is properly due to them. For example, someone who does something good ought to be rewarded, and someone who does something bad ought to be punished. If instead good is punished and evil rewarded, then very quickly society will experience serious problems.

How might we use robotics and AI to promote justice and fairness?

As one example, in a society which believes that all humans are properly due a certain standard of living (it is just and fair that all have it), robots could help people attain that standard. Robotic and AI technology might provide greater assistance for those who need extra help, for example, those with mental and physical challenges. For elderly people with memory loss, an artificially intelligent robot might be able to help them remember where they left their keys, or where they were going. For those with physical impairments, a robot might be able to help them get out of bed, or call for help if they fall down or need medical treatment.

Unfortunately, robots and AI could also violate justice and fairness. Algorithms for processing large data sets might produce reasonable responses in aggregate, but not for particular individuals. Or an algorithm might be based on erroneous assumptions that lead to unjust outcomes. For example, algorithms purporting to assess a criminal’s risk of re-offending currently inform sentencing and parole decisions for many prisoners, even though a study by ProPublica found that a popular version was “remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.” The tool was also wrong about African American defendants more frequently than white defendants.
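To make the fairness problem concrete, here is a minimal sketch, in Python, of the kind of audit ProPublica performed: comparing a tool’s false positive rate (how often people who did not re-offend were nevertheless flagged as high risk) across demographic groups. All of the records below are invented for illustration; this is not real data from any actual risk-assessment tool.

    # Hypothetical audit sketch: does a risk tool's false positive rate
    # differ across groups? Every record here is invented for illustration.

    def false_positive_rate(records):
        """Among people who did NOT re-offend, the share the tool
        still flagged as high risk (FP / (FP + TN))."""
        negatives = [r for r in records if not r["reoffended"]]
        if not negatives:
            return 0.0
        false_pos = [r for r in negatives if r["flagged_high_risk"]]
        return len(false_pos) / len(negatives)

    records = [
        {"group": "A", "flagged_high_risk": True,  "reoffended": False},
        {"group": "A", "flagged_high_risk": True,  "reoffended": False},
        {"group": "A", "flagged_high_risk": False, "reoffended": False},
        {"group": "A", "flagged_high_risk": True,  "reoffended": True},
        {"group": "B", "flagged_high_risk": True,  "reoffended": False},
        {"group": "B", "flagged_high_risk": False, "reoffended": False},
        {"group": "B", "flagged_high_risk": False, "reoffended": False},
        {"group": "B", "flagged_high_risk": False, "reoffended": True},
    ]

    for group in ("A", "B"):
        subset = [r for r in records if r["group"] == group]
        print(f"Group {group}: false positive rate = {false_positive_rate(subset):.0%}")
    # Group A: 67%, Group B: 33%. The same tool, applied the same way,
    # burdens one group with twice the rate of wrongful "high risk" labels.

Aggregate accuracy can look acceptable while the burden of the tool’s errors falls unevenly, which is exactly the kind of injustice this approach asks us to notice.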

The Common Good Approach

The common good approach seeks to promote the best outcome for all involved parties (sometimes even including the non-human natural world). How might robotics and AI promote or endanger the common good?

One example might be that artificial intelligence is useful for learning about systems that are too complex for humans to understand well, or too tedious for humans to study. For example, the internet contains innumerable links between websites. Knowing about these links helps us understand which things on the web are relatively more popular (because they have more inbound links) and thus enhances our ability to search the web for relevant information. Were humans tasked with directly finding all the links on the internet, the process would be not only incredibly boring but also inefficient and ineffective. Instead, humans can program computers to find and categorize these links, solving the problem. If the program works correctly, then everyone who uses it benefits and, hopefully, society benefits overall.
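As a concrete illustration of the inbound-link idea, here is a minimal sketch in Python over a made-up toy graph. Real search engines use far more sophisticated algorithms (PageRank and its successors), but the underlying principle of counting links is the same.

    from collections import Counter

    # Toy link graph, invented for illustration: each page maps to the
    # pages it links out to. A real crawler would discover these edges.
    links = {
        "site-a.example": ["site-c.example", "site-d.example"],
        "site-b.example": ["site-c.example"],
        "site-c.example": ["site-d.example"],
        "site-d.example": [],
    }

    # Count inbound links: pages that many others point to are treated
    # as relatively more popular, and so rank higher in search results.
    inbound = Counter(target for targets in links.values() for target in targets)

    for page, count in inbound.most_common():
        print(f"{page}: {count} inbound link(s)")
    # site-c.example and site-d.example each have 2 inbound links,
    # making them the relatively "popular" pages in this tiny web.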

Similarly, the automation of driving will be another huge benefit from AI. Currently, tens of thousands of people die each year in automobile accidents, the vast majority caused by human error. If autonomous vehicles were even slightly better on average than human drivers, then thousands of lives could be saved each year. Furthermore, drivers might not need to pay attention to the road, and so could work or relax while commuting. People might not even need to own cars anymore; if robotic cars were accessible enough to come pick you up whenever you needed them, immense time and resources could be saved.
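The potential scale is easy to estimate with back-of-the-envelope arithmetic. Both numbers below are rough, hypothetical assumptions chosen for illustration (U.S. traffic deaths were on the order of 35,000 per year around the time of writing), not a projection.

    # Back-of-the-envelope estimate; both inputs are rough assumptions.
    annual_traffic_deaths = 35_000  # approximate U.S. figure, mid-2010s
    assumed_improvement = 0.10      # suppose autonomous cars are 10% safer

    lives_saved_per_year = annual_traffic_deaths * assumed_improvement
    print(f"Roughly {lives_saved_per_year:,.0f} lives saved per year")  # ~3,500

Even a modest safety edge compounds into thousands of lives per year, which is why this single application looms so large in common-good arguments for AI.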

One danger that robotics and AI pose to the common good is technological unemployment. For example, if self-driving cars become so effective that human drivers are no longer necessary, then millions of people, from taxi drivers to delivery and long-haul truck drivers, would be out of work. Will these unemployed drivers become impoverished? Where will the money come from to provide for them or retrain them for new jobs?

The Virtue Approach

Virtue ethics seeks to promote good character. For example, courage and temperance are virtues; courage gives us the strength to endure hardship and fear for the sake of something good, while temperance gives us the self-control to resist “too much of a good thing.” Some other virtues include charity, kindness, justice, humility, diligence, honesty, integrity, generosity, gratitude, and wisdom.

When it comes to robots and AI, there is certainly virtue in enduring the hard work required to build such systems in the first place. We can all be thankful to the engineers, scientists, and technologists who have worked so diligently and with such great skill to produce the technological products that we rely on today, and we should also be thankful for those who are working so hard on the good technological products of tomorrow.

On the other hand, every successful technology opens humanity up to further technological dependency. If robots and AI do all the difficult work in the world while most humans are unemployed (or perhaps just relax), then what would happen if the machines started to make mistakes? People would be out of practice at working and achieving their goals without machine help. We would be habituated toward the wrong character traits: laziness, gullibility (trusting our computers too much), lack of skill, and diminished understanding and wisdom.

A problem related to dependency is a lack of transparency in understanding how algorithms (and, by extension, the world) work. If a robot or AI makes a mistake, will it be able to tell us what went wrong? Will it be able to explain why it made the mistake? And even if it could give us a very good explanation, if that explanation were very complicated, could we even understand it? In such situations we would come to know less about the world, and have less control over it, than before; our technology would act to disempower us.

Virtue ethics helps us see the upside of technology for humanity, while keeping us aware that a downside still exists.

Overall, we can see there are many facets to the ethics of robotics and artificial intelligence technologies, and we should think very carefully about how we develop and implement these powerful tools.

 

Brian Green, assistant director of Campus Ethics at the Center, teaches engineering ethics at Santa Clara University.

Dec 16, 2016