
Can We Program Self-Driving Cars to Make Ethical Decisions?

Robotic technology is becoming more and more prevalent in our daily lives. No longer isolated in manufacturing plants or relegated only to vacuuming floors, robotic systems are now helping you find products in stores and driving next to you on the highway. These fantastic capabilities are due to advances in sensing the world, performing advanced reasoning tasks, and interacting with the environment.
 
Reasoning includes decision making, and that means that choices are being selected and acted upon. These choices are encoded into the “brains” of these automated systems through software. Traditionally, these choices have applied to well-understood situations with clear conditions and broad consensus on appropriate results. For example, our cars use anti-lock brakes to avoid skidding and cruise control to maintain speed.
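To see how simple these traditional, well-understood choices are compared with what follows, here is a minimal sketch of cruise control as a proportional controller. The function name, gain, and speeds are invented for illustration; real automotive controllers are considerably more sophisticated.

```python
# A minimal sketch of cruise control as a proportional controller.
# All names and constants are hypothetical; real systems add integral and
# derivative terms, gear logic, and safety limits.

def cruise_control_throttle(current_speed_mps: float,
                            target_speed_mps: float,
                            gain: float = 0.1) -> float:
    """Return a throttle command in [0, 1] based on the speed error."""
    error = target_speed_mps - current_speed_mps
    throttle = gain * error
    # Clamp to the physically meaningful range.
    return max(0.0, min(1.0, throttle))

# Car at 27 m/s with a 31 m/s (~70 mph) set point -> 40% throttle.
print(cruise_control_throttle(27.0, 31.0))  # 0.4
```

The conditions are clear, the desired behavior is uncontroversial, and the rule fits in a dozen lines. The dilemmas below are nothing like this.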
However, as robotic technology advances, decision making becomes more sophisticated, with many subtle dimensions. Consider Google’s autonomous car and Tesla’s Autopilot system. These cars automatically change speed or turn to avoid collisions. That’s great, but what if the choice is to fatally collide with a wall or swerve onto a sidewalk full of pedestrians? Should the choice be to protect those in the car or minimize the number of fatalities overall?
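To make the tension concrete, here is a small sketch, with entirely invented risk numbers, of how two different decision criteria can select opposite actions from the same estimates. Nothing here reflects how Google or Tesla actually encode such choices.

```python
# Hypothetical sketch: two competing decision criteria for an unavoidable
# collision. All option names and risk numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    occupant_fatality_risk: float    # estimated risk to people in the car
    pedestrian_fatality_risk: float  # estimated risk to people outside

options = [
    Option("hit the wall",         occupant_fatality_risk=0.8,
                                   pedestrian_fatality_risk=0.0),
    Option("swerve onto sidewalk", occupant_fatality_risk=0.1,
                                   pedestrian_fatality_risk=0.9),
]

# Criterion A: protect those in the car.
protect_occupants = min(options, key=lambda o: o.occupant_fatality_risk)

# Criterion B: minimize total expected fatalities.
minimize_total = min(options, key=lambda o: o.occupant_fatality_risk
                                          + o.pedestrian_fatality_risk)

print(protect_occupants.name)  # "swerve onto sidewalk"
print(minimize_total.name)     # "hit the wall"
```

Same situation, same estimates, opposite behavior. The difference is not in the sensors or the math, but in which criterion we chose to encode.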
 
Even in normal driving conditions, there are compelling dilemmas. For example, as these cars drive within their lanes, should they center themselves within the lane? And when a large truck approaches, perhaps the car should move over to lower the risk of collision if something goes wrong; that's what human drivers often do. But what if you're also passing a car traveling in your direction? Should the car be positioned to minimize the overall probability of impact given both the oncoming truck and the vehicle being passed? That seems to make sense. Or perhaps the car should be positioned to minimize the overall damage that could occur, which depends on both the probability of impact and the severity of the impact if it occurs. Perhaps that makes more sense. Or maybe the car should minimize the financial liability of personal injury cases that might result from a crash. Wait...what?
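As a sketch of how the choice of objective changes the outcome, consider the following toy model of that lane-positioning decision. The probabilities, severities, and linear risk functions are all fabricated for illustration, not drawn from any real system.

```python
# Hypothetical sketch: the same lane-positioning decision under two
# different objectives. All probabilities and damage costs are made up.

# Candidate lateral positions (meters from lane center;
# negative = toward the oncoming truck, positive = toward the passed car).
positions = [-0.5, 0.0, 0.5]

def p_truck(x):   # probability of impact with the oncoming truck
    return 0.02 - 0.01 * x   # rises as we move toward the truck

def p_passed(x):  # probability of impact with the car being passed
    return 0.01 + 0.02 * x   # rises faster as we move toward it

TRUCK_SEVERITY, CAR_SEVERITY = 100.0, 10.0  # relative damage given an impact

# Objective 1: minimize overall probability of impact.
best_prob = min(positions, key=lambda x: p_truck(x) + p_passed(x))

# Objective 2: minimize expected damage (probability times severity).
best_damage = min(positions,
                  key=lambda x: p_truck(x) * TRUCK_SEVERITY
                              + p_passed(x) * CAR_SEVERITY)

print(best_prob, best_damage)  # -0.5 0.5
```

Under the probability objective, the car shades toward the truck, because the passed car is the likelier impact in this toy model; under the expected-damage objective, it shades away, because a truck impact is far more severe. Same lane, different rule, different behavior.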
 
You might say the above is far-fetched, but it is a viable scenario. The technology in these cars already considers the trajectories of vehicles and objects, road and weather conditions, and regional traffic conditions. It isn't a leap to think it could also determine and consider the year and model of nearby vehicles, the age and race of pedestrians, or even the socioeconomic status of the area.
When these cars “make decisions,” these decisions are based on the reasoning criteria we embed within them. Rather than shying away from this issue as one fraught with danger, I believe we have an opportunity to be thoughtful and explicit in creating these rules of behavior. I don’t claim to know now what the answers are or how to encode an ethical framework into software, but what a wonderful opportunity for us to apply our Santa Clara sensibility to an emerging technology that is changing the world.