Markkula Center for Applied Ethics

Artificial Intelligence, Decision-Making, and Moral Deskilling


Brian Patrick Green

Pieter Bruegel the Elder, “The Land of Cockayne,” 1567. Public Domain.

Brian Patrick Green is the director of Technology Ethics at the Markkula Center for Applied Ethics. Views are his own. [1]

Moral deskilling is the loss of skill at making moral decisions due to lack of experience and practice. As we develop artificial intelligence technologies that make decisions for us, we will delegate decision-making capacities to these technologies, and humans will become deskilled at making moral decisions unless we actively work against that outcome. Shannon Vallor at Santa Clara University developed this idea in a 2015 paper, “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character.” [2] Here I want to explore what deskilling is, the larger context around moral deskilling, its causes, why it is problematic, and some possible solutions.

How might we think of moral deskilling?

A comparable concern about deskilling can be found among airline pilots. With the advent of highly sophisticated autopilot systems, it is technically possible to automate every aspect of air travel from takeoff to landing. However, airlines and pilots have elected not to do this, instead reserving autopilot for the boring, uneventful parts of flight. Why? Because those are precisely the parts that require the least skill. Takeoff and landing – the parts that require the most skill – are exactly the parts at which pilots must not lose skill, because if they did, they would become dependent on the autopilot. Then, if the autopilot failed, they might not be able to take over from it with an adequate level of skill, especially in an emergency.

Morality is like this – we can autopilot a lot of it (it’s easy for most of us to not lie, steal, and commit violence) – but for some of it, the truly difficult ethical situations in life (resisting pressure to do evil, discerning right action, judging complex cases, etc.) we need to rely on our own skills, and hope that they are adequate to our needs. Other analogous cases of AI-induced deskilling can be found in navigation (blindly following AI directions), personal relationships (relying on apps to find dates), finance (large-scale fintech), news & information (can we discern real from fake?), and so on.

What is the larger context surrounding deskilling?

Moral deskilling can be thought of as one small and particular effect of a very large and long-term movement through human history and evolution: the drive towards organization, specialization, and complexity. This trend exists because specialization permits efficiency, and with efficiency, energy is freed up for further, more sophisticated actions, in a self-reinforcing cycle. This can be elucidated by looking at some of the differences between more-complex and less-complex societies. [3]

More-complex societies are: specialized, centralized, systematized, interdependent, organized, efficient – and yet brittle and fragile. We can do much more collectively if we do less individually, e.g., less farming, cooking, driving, etc. But the risk is that in a disaster – a systemic breakdown – many people will be thrown into situations they are unprepared for, without sufficient training or resources (because efficiency eliminates having “extra” for the sake of doing “more”).

Less-complex societies are: less-specialized, decentralized, less-systematized, independent, less-organized, inefficient – and yet tough and robust. We can live with less risk if we all know how to do everything, but the trade-off is a lack of coordination and cooperation, failure to do complex things, and immense wasted talent and production. The do-it-yourself (DIY) and “prepper” movements typify the romanticized desire to return to this less complex mode of life, or at least to be prepared should the intricate yet delicate edifices of technological society crumble.

Knowledge that is not practiced is lost. Take, for example, something as simple as growing food. How many people in the “developed” world can now manage a subsistence farm? Or hunt or fish well enough to survive? Not very many. This means that we are deskilled: the people of the past could do things that we are no longer capable of, at least not without significant training and preparation.

This is partly a good thing! It means that we are free to do other more-complex work. But as a side-effect, we are in many ways unskilled compared to humans at previous levels of technology.

How did we get into this state where skills grow and fade over time, where entire occupations appear and disappear as generations pass?

In the past it was much easier to specialize humans towards particular jobs than it was to specialize technologies towards particular jobs. Humans were the best intelligences and muscles around (except sometimes animals). But those days are slowly ending: specialization is now away from humans and towards technology.

  • First we specialized human knowledge and muscle into particular humans.
  • Next we specialized human knowledge and muscle into machines.
  • Now we are specializing intelligence itself into machines (perhaps thereby leaving humanity without a job?).

All of this specialization of human skills into machines allows for incredible efficiencies, never before seen in human history. We can achieve immense productivity with much less labor. While in the past almost all human labor went into producing food, now relatively little human labor is involved in producing food. Automation and machines have revolutionized society.

Continuing in this vein of automation, AI is an amplifier and an accelerant. It takes what we want and gets it faster and more effectively than ever before. If technology is nature, sharpened (like a stone knife or a sharpened stick, honed to a precise use), then AI is natural human intelligence, sharpened. Whatever we can imagine, we will soon be able to specialize in and do better than we can now do – both good and evil. Which brings us to ethics.

Might AI replace our moral decision-making capacities?

In many cases, yes, it may. We already modify human behavior through law, government, and culture. Specialized citizens make and enforce laws so as to promote or suppress certain behaviors. Therefore we typically have to think less about our moral choices than would, say, people living in anarchy. AI would merely extend this power to even more of life and with even more control. And as computers do more, humans will do less – humanity will turn over vast areas of decision-making to automated systems, and we will lose skill at those tasks.

Whether people turn over their capabilities to other people (through division of labor) or to automation, those skills become specialized and thereby less common. If we turn many complex capabilities over to AI, soon those specialized skills may become very rare. Then very few people may be able to perceive if such AIs are making mistakes and we will be living in a world that we have made, and yet be at the mercy of things we no longer understand or control.

So then, are we bound for a lazy, negligent, and unethical future? There are at least six specific ways that AI could lower our moral capacities, in three broad categories.

A) Attacking Truth & Attention

1. Poor education at all levels – for many students our educational system is not working well, but beyond that our media and other information systems are also being corrupted by misinformation and disinformation, much of it pushed by AI. Thus we also damage the education system necessary for an informed citizenry.

2. Distraction – AI powered games and apps are draining our attention towards trivialities and away from the important things in life such as caring relationships and thinking about solving larger-scale problems, personal and social.

B) Preventing human maturation & moral development

3. Tech as “parent” / human “infantilization” – in many ways technology seems to “parent” us, helpfully giving us things or telling us what to do, which thereby infantilizes us and decreases our ability to engage life in a mature, skillful, confident, and independent way. AI will accelerate this trend dramatically.

4. Stunted moral experience – moral development requires practice. If “practice makes perfect,” then “lack of practice makes imperfect.” The more time we spend attending to AI-driven manipulations of our psychology, the less time we spend attending to relationships, caring about others, and thinking about ethical problems, the worse we will be at ethics.

C) Normal and weaponized complexity

5. “Normal” complexity – AI is simply going to make much of normal life too complicated for most humans, even very intelligent humans, to understand. In an increasingly complex world, understanding will no longer be an expectation, and in the midst of this lack of understanding, many bad things could happen.

6. Weaponized complexity – If understanding is no longer an expectation, humans will become even easier to deceive and manipulate, and no doubt some people will use AI systems precisely for this purpose, as they already do.

These are a few of the threats that are apparent now; no doubt more will become apparent in the near future. Of note is that all of these trends towards technology harming us are not actually trends in tech, but trends among people. Humankind, in some sense (even if only subconsciously or due to our own vices), wants to be uneducated and misinformed, distracted, infantilized, stunted, and too simple to understand the world. And those of us producing and using technology, in effect if not in explicit intention, want us to be these ways as well.

As C.S. Lewis noted in 1943 in The Abolition of Man, “what we call Man’s power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument.” [4] Technology does not operate independently of human choices, at least not yet. So now the question becomes: How can we respond? Here are six antidotes to the above threats.

A) Education

1. Education – There is great potential to use AI for enhancing and personalizing education (including through VR) and for fighting against misinformation and disinformation. Education should teach and reward not just knowledge & understanding, but practical wisdom and moral leadership. Education should work harder to inculcate good moral habits and teach moral attention. Additionally, just as humans should not weight falsehoods equally to truths, automated systems should not either, whether for their own decision-making or in the materials they promote. AIs can help protect the information ecosystem as well as help humans become more discerning in their assessment of facts.

2. Attention – rather than harming our attention, AIs could help us train our moral attention by filtering out distractions and highlighting ethical issues. It may seem like a minor issue, but the fact that attention is a multi-billion-dollar industry ought to clue us in: attention is worth money because it is the very foundation for any further thinking on any issue. If we never notice an ethical issue, we can never solve it – therefore we need to notice it. Only then can we move on to a more sophisticated analysis.

B) Human maturation & moral development

3. Become adults – we should strongly resist technology that seeks to act like our parent or to infantilize us. Instead, AI might help us develop moral maturity and discernment, training us in virtues such as restraint, practical wisdom, and courage. But AI must not make our decisions for us, which would foster dependency; the key is to promote these skills in humanity, helping us to become independent moral decision-makers.

4. Interact with other humans – rather than stunting our interpersonal growth through screens, AI could encourage us to spend more time with others face to face and thereby build stronger interpersonal relationships. Most of the moral life happens just through our everyday interactions with others, and if, instead of having those interactions, we are spending time on other activities (even if those are good activities) we will not gain practice and moral expertise.

C) Ethics facilitation

5. Dealing with complexity – As the world grows more complex we will likely need AI to deal with that complexity for us, unfortunately simultaneously leading us to depend on those simplifying AIs even more. But can AI help us with complexity? If AI can help us solve the easier problems in life, could we instead concentrate on solving the biggest ethical problems such as world peace, hunger, healthcare, and so on? Peter Maurin once described his life’s work as trying to “make the kind of society where people find it easier to be good.” [5] How might AI help us create that society, while respecting our autonomy and moral development?

6. Stopping weaponized complexity – AI can help us expose when bad actors are using complexity as a weapon to deceive and manipulate us, but the task is difficult and constantly changing. In the future it will become even more important to develop AI systems to fulfill this function, as this is something of an arms race, with weaponized complexity so far seemingly having the upper hand (e.g., with disinformation and misinformation, and various other intentionally complex problems).

While these solutions are difficult, they are worth the challenge. Practical wisdom will always be a defining trait of a good human being. Morality and ethics will never go out of style, and being a good person will always be a worthy goal. Despite the difficulty, humanity is up to the task; but we need to apply ourselves diligently. In the words of Wendell Berry: “The only thing we can do for the future is to do the right thing now.” [6]


[1] Previous versions of this text were presented as “AI, Decision-Making and Moral Deskilling” at the AI and Social Good: Challenges, Opportunities and Partnerships conference, Business Ethics and AI Symposium, The Institute for Business Ethics and Sustainability at Loyola Marymount University, Playa Vista Campus, Los Angeles, October 29, 2018, and as “AI, Decision-Making and Moral Deskilling,” Ignite Talk at the Partnership on AI All-Partners Meeting, San Francisco, California, November 14, 2018. I have also written briefly on deskilling in “Ethical Reflections on Artificial Intelligence,” Scientia et Fides 6(2)/2018: 1–23.

[2] Vallor, Shannon. 2015. “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character.” Philosophy & Technology 28: 107–124.

[3] Green, Brian Patrick. “Emerging Technologies, Food, and Ethics,” Helen LeBaron Hilton Endowed Chair Lecture Series (2017-2018), Iowa State University, Ames, Iowa, April 24, 2018.

[4] Lewis, Clive Staples. 1944. The Abolition of Man. New York: HarperCollins, p. 55.

[5] Day, Dorothy. “Letter To Our Readers at the Beginning of Our Fifteenth Year,” The Catholic Worker, May 1947, 1, 3; and Day, Dorothy. “Peter’s Program,” The Catholic Worker, May 1955, 2.

[6] Berry, Wendell. “Marty Forum: Wendell Berry,” AAR 2013 Annual Meeting, November 24, 2013, Baltimore, Maryland.

Mar 15, 2019