Markkula Center for Applied Ethics

On Data Ethics: An Interview with Mark Nelson


"Some ethical questions have to be addressed by more structural means."

Jeff Kampfe

Welcome to "Cookies, Coffee, and Data Ethics"! My name is Jeff Kampfe and I am a senior at Santa Clara University, studying Economics and Philosophy. I am also a Hackworth Fellow at the Markkula Center for Applied Ethics. This article is the sixth in a series of interviews that involve hot coffee, tasty baked goods, and the complex issue of data ethics. The goal of these interviews is to see what individuals with unique and insightful viewpoints can teach us about the field of data ethics, where it is heading, and what challenges we may face along the way. Thanks for stopping by!

The following is an edited transcript of a conversation with researcher Mark Nelson.

Mark Nelson founded and co-directs the Peace Innovation Lab at Stanford University. He focuses on designing, catalyzing, incentivizing, and generating resources to scale up collective positive human behavior. He teaches the design of technology interventions that measurably increase positive, mutually beneficial engagement across difference boundaries. Nelson is also a member of the Stanford Behavior Design Lab (formerly the Persuasive Tech Lab), and of Stanford’s Kozmetsky Global Collaboratory.

Can you tell me a bit about how you got involved at the intersection of data technology and human behaviors?

The Peace Innovation Lab started as a project in BJ Fogg’s Persuasive Technology Lab. Back in the early ’90s, two of his advisors, Byron Reeves and Cliff Nass, had noticed that the moment you put sensors and actuators on technology to make it interactive, as opposed to “dumb” broadcast media, human beings start treating it as an independent social agent. We treat it as an entity instead of just a thing.

Humans have internal states that are incredibly variable: whether I just had the flu, or haven’t had my coffee yet, all of those things affect how I show up. So as human beings we have cognitive and emotional filters that let us discount the states of those around us, and we do this especially with the people we’re closest to. If your partner wakes up and gives you a hard time, you can realize they just didn’t sleep well. It’s not the end of your marriage or something; they just didn’t sleep well.

By contrast, when we’re interacting with technology, our experience is remarkably flatlined. The experience is really uniform because our filters never get triggered by technology. The result is that technology can influence us in ways other humans never can. We wouldn’t allow the people closest to us to influence us the way we allow technology to influence us every day, and so much of this influence is unconscious. So Byron Reeves and Cliff Nass realized that technology is now a persuasive actor, and we need to start studying it that way. What makes this technology so unique is that a 12-year-old kid in their bedroom can now code up an app over a weekend that changes the behavior of millions of people within a couple of weeks.

That’s a power our species has never had before. For me, a whole bunch of the ethical issues start there. We also know that both companies and governments are going to abuse data technologies in all sorts of horrible ways; that seems to be pretty much unavoidable. Their idea is, “If we don’t use it, our enemies will, so we must use it.” But can we at least try to also use data technology for good? So BJ Fogg started looking into how this could be applied to world peace.

That’s where I start to come into the picture. What we end up with, in persuasive tech generally, is personally identifiable behavior data as an industrial byproduct. Almost everything we do when we’re teaching people to design persuasive tech for positive social purposes involves working with human behavior data at a really high resolution that has never been possible before.

From a science and research perspective, it’s incredible. If you look out the window, you can probably see infrastructure on which our civilization depends: transport networks, power networks, communication networks. Civilization as we know it wouldn’t function without them. The interesting thing is that between you and me there’s this relationship that’s invisible. All that other infrastructure, and our ability to build all those other networks, depends on our relationships. Human relationships are the underlying invisible infrastructure that makes all other infrastructure possible. But because these relationships are invisible, it’s difficult to know if we are seeing the same things.

A large part of what we’re trying to do in our lab is make that underlying relational infrastructure visible: first, so the world can see it and the people who are building it get proper recognition for what they’re building; second, to make it investable. If we could look at our relationships the same way we look outside at our infrastructure and say, “Hey, that pothole needs to be fixed,” the investment decisions for human relationships, especially commercial relationships, could quickly become massively better.

The best way I know to model these relationships is through individual personal episodes of engagement behavior, because if we wanted to know something about the relationship between two groups of people, we could just aggregate the individual data of the people within each group. Then, if we could do comparative analysis, especially in real time, and we want good outcomes, we could try to get each group to do particular things for the other. All of these human interactions start to become empirical, which has never been possible within our species.
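To make that aggregation idea concrete, here is a minimal Python sketch, using a hypothetical event structure and valence scale rather than the lab’s actual data model: individual engagement episodes are rolled up into an average score for each directed pair of groups, which is the kind of quantity a real-time comparative analysis could track.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class EngagementEpisode:
    """One observed interaction between two individuals."""
    actor_group: str    # group of the person initiating the engagement
    target_group: str   # group of the person on the receiving end
    valence: float      # hypothetical scale: -1.0 (hostile) .. +1.0 (mutually beneficial)

def group_relationship_scores(episodes):
    """Aggregate individual episodes into an average valence per directed group pair."""
    totals = defaultdict(lambda: [0.0, 0])
    for ep in episodes:
        key = (ep.actor_group, ep.target_group)
        totals[key][0] += ep.valence
        totals[key][1] += 1
    return {pair: total / count for pair, (total, count) in totals.items()}

episodes = [
    EngagementEpisode("group_a", "group_b", 0.8),
    EngagementEpisode("group_a", "group_b", -0.2),
    EngagementEpisode("group_b", "group_a", 0.5),
]
print(group_relationship_scores(episodes))
# {('group_a', 'group_b'): ~0.3, ('group_b', 'group_a'): 0.5}
```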

What are some of the virtues or values that make a good data scientist? What do those look like in practice?

To some degree it is impossible to predict in advance what unintended consequences might come with the deployment of a technology. One of the key things you need is a way to systematically minimize harm. What we've been doing is applying the best practices from the last time our species got a new superpower from technology, which was aviation. To make this operational in the most ethical way possible, you need the equivalent of a pilot’s checklist for technology designers.

For example, we have wonderful colleagues in the law school and in the law departments of other universities who frustrate us to no end, because they come to us and write wonderful-sounding design principles for ethical peace technology that state, “Rule #1: Do no harm.” This never fails to make all the engineers and designers in the room look up incredulously: so what you’re saying, as a lawyer, is that you’re just going to sit there, wait for the engineers to make a mistake, and then punish us?

A rule like that is essentially the technology-design equivalent of the Ten Commandments. Historically, I would argue that a large chunk of the value of rule sets came from prohibition. If you think of things like religious code or legal code as basically social code for coordinating the behavior of large groups of people, those command sets have had to be prohibitory: “Thou shalt not …”

By contrast, in an airplane cockpit before takeoff a pilot can say, “Flaps down.” They can look to the co-pilot, and the co-pilot can say, “Confirmed.” Then they both flip a switch and have a visual memory of flipping it, so they’re on the same page. The trend in technology design is moving to the opposite of prohibitory design: very crisp behaviors that also have a representation everyone can see, just like the cockpit switch. There’s something structural about the ethics here when you reduce it to that level of specificity, around behaviors you must do and the order you must do them in.

The preflight checklist is also effective because people don’t require a slap on the wrist after something goes wrong. Problems can usually be caught in the act, and everyone can recognize them.

So the first structural solution is that you make things more ethical by making them more specific and concrete. The second idea is you make things more ethical by making them more prescriptive instead of prohibitive. The third thing is that you have all sorts of structural elements and memory aids that help you perform these actions in real time. The fourth thing is doing things in the right sequence. There is something profoundly ethical about just doing things in the right order and helping people see things in the right order.
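As a rough illustration of those four structural ideas (specific, prescriptive, supported by memory aids, and done in order), here is a minimal Python sketch of a “preflight” checklist for a technology launch. The step names and the confirm callback are hypothetical examples, not an established standard.

```python
# Hypothetical launch checklist: each step is concrete and prescriptive,
# and the steps must be confirmed in this order before anything ships.
CHECKLIST = [
    "Document the intended behavior change and who it affects",
    "Review what data is collected and how long it is retained",
    "Walk through harm scenarios with a second reviewer",
    "Confirm a rollback / kill switch is in place",
]

def run_checklist(steps, confirm):
    """Walk the steps in order; stop at the first step that is not confirmed."""
    for number, step in enumerate(steps, start=1):
        if not confirm(step):
            print(f"Step {number} not confirmed: {step} -- launch blocked")
            return False
        print(f"Step {number} confirmed: {step}")
    return True

if __name__ == "__main__":
    # In practice `confirm` would prompt a second person, like a co-pilot.
    run_checklist(CHECKLIST, confirm=lambda step: True)
```

The point of the sketch is the structure, not the content: the order is fixed, each step is visible to everyone, and a missed step blocks the launch rather than triggering a punishment afterward.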

These are all aspects of ethics that I never hear discussed. The assumption is that everyone has to do the right thing, but the problem is that we assume everyone already knows what the right thing to do actually is.

Related to the shift from prohibitive to prescriptive is the shift from punishment to reward. When you’re doing human computation of social code, the goal is to reduce the friction of many humans working together. If you look historically at how these codes developed, whenever you start having large populations living together, perhaps in a small city on a river in Central Asia, you start to have a coordination problem. The social code that gets created is run in human brains; this is essentially massively parallel human computation running on a large network.

Human brains as wetware are wired to be threat-aware. The human brain is very weak at detecting cause and effect over time, and when it can detect cause and effect, it is far better at it when the effect is negative than when it is positive. So the thing that really works is punishment-based systems, because human processing is slow and distributed: if I wanted you to remember this year what you shouldn’t have done last year, I could punish you this year for what you did last year.

However, now the hardware has changed. We are all carrying around what are essentially supercomputers in our pockets that can receive a signal almost anywhere in the world. We are seeing a fundamental ethical change, where the computation can now run on rewards. If you have sensors deployed in the environment that can detect when the behavior you want happens and you can immediately deploy a reward, rewards can be far more powerful than punishments.

Take Candy Crush, for example. The reward of playing the game literally has a half-life measured in seconds. The moment you touch the screen, it triggers a whole cascade of showering candies right in front of you, with sparkles and bright colors, to trigger the chemistry in your brain. So at the structural level we are seeing a shift toward tiny, tiny rewards.
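As a simple illustration of that sensor-plus-immediate-reward loop, here is a minimal Python sketch; the behavior names and the reward function are invented placeholders for whatever a real deployment would use.

```python
import time

# Hypothetical behaviors a sensor network is configured to encourage.
DESIRED_BEHAVIORS = {"recycled", "helped_neighbor"}

def deliver_reward(user_id, points=1):
    """Stand-in for an immediate reward: a notification, points, an animation."""
    print(f"{time.strftime('%H:%M:%S')}: +{points} point(s) for {user_id}")

def on_sensor_event(user_id, behavior):
    """Called whenever a sensor observes a behavior; reward desired behavior
    right away instead of punishing unwanted behavior long after the fact."""
    if behavior in DESIRED_BEHAVIORS:
        deliver_reward(user_id)

on_sensor_event("user-42", "recycled")   # rewarded immediately
on_sensor_event("user-42", "littered")   # no punishment, simply no reward
```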

How does this shift towards reward based systems affect the autonomy and decision making choices of individuals?

Many people hear this and think it makes for a much better world, because rewards take precedence over punishments. But first off, there’s all sorts of ethical chaos as we switch from one system to the other. Driving is an example: there is no “right” side of the road to drive on; what matters is that we all agree on some side of the road to drive on. And the vast majority of bad human behavior is unintentional. People don’t usually set out to run someone over with their car when they leave for work; it just happens.

A lot of ethical questions aren’t about making people better or doling out punishments. Some ethical questions have to be addressed by more structural means. The pilot's checklist is a far more ethical command set because it has a more precise structure. The goal is to help augment people’s ability to make decisions and help them make the correct decisions in the right context. For me at least, a whole lot of ethics is down at very structural levels.

What are your thoughts on data ownership? What are some of the largest issues that need to be resolved surrounding this topic?

If there are any perfect solutions, I haven’t been able to find them yet.

My focus is much more on how we make it safe for people to share and use the data they have, however they came by it. I think that’s a much more fruitful line of inquiry for society than questions about whose data it is. I am much more concerned about what you might do with data about people, and specifically how you might use it against them. In the long run, the only way we can address those concerns is to acknowledge that the nature of data is to be distributed, because it can only create value by being compared and exchanged.

The question “Whose data is it?” is profoundly flawed because data isn’t like material stuff; it’s intangible. In economics there are useful notions of durable wealth and enduring wealth, and data can be a textbook example of enduring wealth. But many of the laws we’re building around data have different ideas embedded in them, and I think time will show that those are unworkable.

However, with our new technologies we have to take an iterative approach. Nobody knows exactly how things ought to be structured, so we have to run massively parallel experiments. Each of those experiments has to be small and has to record all of its details (we should be particularly interested in recording the failures), so that any failure only harms a small number of people and we still learn from it.

What results is a set of professional practices, like the preflight checklist described earlier. With those practices, any pilot can fly anywhere in the world and arrive safely, in the safest transportation system our species has ever built. And a pilot’s religion, gender, ethnicity, and so on have almost no effect on their ability to operate within that system. I find that incredibly ethical.

For prior articles in this series, see "On Data Ethics: An Interview with Jacob Metcalf," "On Data Ethics: An Interview with Shannon Vallor," "On Data Ethics: An Interview with Michael Kevane," "On Data Ethics: An Interview with D.J. Patil," and "On Data Ethics: An Interview with Iman Saleh."

 

May 31, 2019