
The Relationship of Morality and Technology

Artificial Intelligence

Building technology that facilitates good actions

(AP Photo/Ng Han Guan)

Patricia Fachin, a journalist with the Instituto Humanitas Unisinos in Brazil, recently interviewed Brian Green, assistant director of Campus Ethics at the Markkula Center for Applied Ethics, about ethics and technology. The views expressed are his own. In several postings, we will share some highlights of the interview; the full interview can be found, in Portuguese, on the IHU website. In this second part of the interview, Fachin asks Green about the relationship of morality and technology.


In a recent interview you declared that the moral problems related to technology are associated with the use that humans will make of these technologies, i.e., whether they will be used for good or bad purposes. How would you define a good use of technology and a bad use of it?

A good use of technology is one which improves human physical, mental, spiritual, and moral well-being. It helps people become healthier, more educated, more loving of God and neighbor, and better at making moral decisions. A bad use of technology does the opposite: it makes us sicker, less educated, less loving of others, and worse at making moral decisions. Technology often simply makes actions easier - and we want good technologies that facilitate good actions, not bad technologies that facilitate bad actions. To quote Peter Maurin, a founder of the Catholic Worker Movement, we should want to "make the kind of society where people find it easier to be good." Technology can help do that, but so far it could be better directed.


What evidence leads you to claim that artificial intelligence might increase inequalities?

We can already see this happening, with a few tech companies and their investors making hundreds of billions of dollars through (at first) seemingly minor gains in efficiency, for example in sales or advertising. But these minor gains quickly compound across millions of people.

Only a few organizations are developing AI or making heavy use of it, at least at the start. And in that early period, within our worldwide economic system, AI is being used by the already rich to further enrich themselves. AI is being used to gain efficiencies in finance, markets, law, energy, transportation, communication, and so on. The many humans who used to perform those tasks may soon be out of work, while the few who employed them will see their labor costs suddenly drop and their revenues grow. And as revenues grow, they can invest further in technology, thus accelerating the inequality. This type of inequality is self-reinforcing, unless outside factors - moral factors - lead us to adjust the economic structure so that it benefits people more widely.

Of course AI will also likely lead to consumer products being cheaper, as the costs of production go down, and this will help consumers. But the net effect, given our current economic structures, will still likely be one which exacerbates inequality.


In that same recent interview you highlighted the distinction between thinking about artificial intelligence from the perspective of efficiency and from the perspective of morality. Do you think that in this discussion there has been more thinking in terms of efficiency or in terms of morality?

Most of the people I know are thinking only about monetary efficiency. Very few are thinking about the morality of the system overall. Human perspectives tend to become over-focused on small ideas and lose sight of the big picture. We need to see the big picture of the future we are making, and plan for it and govern it adequately, if we want that future to be better and not hellish. Making small things go right for a few people while big things go wrong for a lot of people will not lead to a better world.


What is it that actually distinguishes these two perspectives? What would it mean to view artificial intelligence on the basis of efficiency, on the one hand, and on the basis of morality, on the other?

In the first case, using AI to better advertise consumer products is a major industry right now. If AI gives you a 10% advantage in selling your product compared to conventional advertising, that is significant, and businesses will utilize it.

Using AI to advance morality, on the other hand, might mean choosing a completely different problem than advertising - for instance, optimizing healthcare, education, or energy efficiency. Please note that these moral solutions might also save money, but they might not. Perhaps in order to really improve our healthcare or educational systems we need to invest billions of dollars now, to reap benefits decades in the future. Or perhaps we will gain no monetary benefits at all; perhaps these investments will only bring further costs - after all, healthcare which extends life often extends the lives of the elderly and sick, thus costing much more money than if they had died instead. Monetary efficiency might say to let sick people die, or euthanize them, as is legal in some places. But a morality which respects human dignity cannot accept this. Killing people might save money, but it not only destroys the lives of those killed; it also damages the characters of those who live on - the perpetrator and those who permit evil by inaction - thus making further, and worse, evil actions easier.

If we embed this callousness and vice into AI computer code, it may quickly take us to inhuman places where we do not want to go.

Aug 25, 2017