Markkula Center for Applied Ethics

The Unanticipated Consequences of Technology

Tim Healy

1. Introduction

When the successful cloning of a lamb called Dolly was announced in February of this year by Scottish researchers, it set off a spate of anxious questions. Many of them concerned the ethics of cloning, but another set asked about the unanticipated consequences. If we go down the cloning road, where will it lead? The answer is that we don't know. All of our technological roads twist and turn, and we can never see around the bend or through the fog.

The purpose of this paper is to investigate the ubiquitous phenomenon of unanticipated consequences. We begin with a look at some definitions which shed light on the matter, and then consider the nature of change. This leads to a broadening of the definition of the word 'technology', and a look at what was one of our earliest examples of unanticipated consequences. We then address the crucial question of why we have such consequences. Some additional examples follow, and we then look at what society does in the face of unanticipated consequences. The paper concludes with a discussion of some of the ethical implications of acting when we know that there can be unanticipated consequences to our actions.

2. Some Definitions

It is important here to distinguish between unanticipated and undesired consequences. The former are consequences which are not foreseen and dealt with in advance of their appearance. Undesired consequences are those which are harmful, but which we are willing to accept, or whose risk of occurring we are willing to accept. Anticipated consequences may be:


  • intended and desired

  • not desired but common or probable

  • not desired and improbable

Unanticipated consequences may be:

  • desirable

  • undesirable

As an example, consider the development of a nuclear power plant at an ocean site. The anticipated and intended goal or consequence is the production of electric power. The undesired but common and expected consequence is the heating of the ocean water near the plant. An undesired and improbable consequence would be a major explosion, and we would associate the term 'risk' with this outcome, but not with the heating of the water.

An unanticipated and desirable consequence might be the discovery of new operating procedures which would make nuclear power safer. An unforeseen and undesirable consequence might be the evolution, in the warmed ocean water, of a new species of predator fish which destroys existing desired species.

This paper concentrates on unanticipated consequences of our technologies. Anticipated negative consequences have been dealt with extensively in the literature on risk. See, for example, Margolis (1996), and Bernstein (1996). The latter emphasizes the role of mathematics in risk assessment.

Two brief points should be made before we proceed. The first is that change is always with us. Even without the intervention of human beings, nature changes constantly. Continents move, weather changes, species evolve, new worlds are born and old ones die. The second point is that all change seems to involve unanticipated consequences. Hence, the unanticipated is a part of life. There is no absolute security. Unanticipated consequences can be mitigated, largely through the gaining of additional information or knowledge, but not eliminated. That's the nature of our life, natural and human.

3. A Broader Definition of Technology

Although we focus here on the term 'technology' as it is usually taken, it is worth pointing out that human beings do much that has unanticipated consequences, in all areas of life, certainly including, for example: medicine, business, law, politics, religion, education, and many more. Because of the parallels among these fields it is useful to think of a broader definition of technology, such as "...that which can be done, excluding only those capabilities that occur naturally in living systems." (Beniger) This matter is dealt with in some detail by Postman in Technopoly: The Surrender of Culture to Technology.

Seen in the light of this broader definition, writing is one of our first technologies. Postman recalls the story of Thamus and the god Theuth, from Plato's Phaedrus, as an example of unanticipated consequences. Theuth had invented many things, including: number, calculation, geometry, astronomy and writing. Theuth claimed that writing would improve both the memory and the wisdom of humans. Thamus thought otherwise.

Theuth, my paragon of inventors, the discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it. So it is in this: you, who are the father of writing, have out of fondness for your off-spring attributed to it quite the opposite of its real function. Those who acquire it will cease to exercise their memory and become forgetful; they will rely on writing to bring things to their remembrance by external signs instead of by their internal resources. What you have discovered is a receipt for recollection, not for memory. And as for wisdom, your pupils will have the reputation for it without the reality: they will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part ignorant. And because they are filled with the conceit of wisdom instead of real wisdom they will be a burden to society.

It is true of all of our technologies that the discoverer of an art, or the designer of a new system, is not usually the best judge of the good or harm which will accrue to those who practice it. And yet, paradoxically, it is often the designer to whom we must go to ask for the likely outcomes of her work. This is a problem with which society must wrestle, and we shall discuss later how it does so.

4. Why Do We Have Unintended Consequences?

Dietrich Dorner has recently analyzed systems in a way that can help us see why they can be so difficult to understand, and hence why consequences are unanticipated. Dorner has identified four features of systems which make a full understanding of any real system impossible. These are:

  • complexity

  • dynamics

  • intransparence

  • ignorance and mistaken hypotheses

Complexity reflects the many different components which a real system has and the interconnections or interrelations among these components. Our system models necessarily neglect many of these components or features, and even more so their interrelations, but there is always a danger in doing so, because it is from such interrelations that the unanticipated may arise. Our economic system is an example of a highly complex system. Not only are there many players, but the players are also interrelated in many ways which are difficult to identify and define. If Player A sets this price, how will Player B respond, and what will Player C think and do when she observes the actions of A and B?

Many devices and systems exhibit dynamics, that is, the property of changing their state spontaneously, independent of control by a central agent in charge of the system. One of the most fascinating examples of our time is the Internet, an extraordinarily dynamic system, with no one in charge. There is no way to model the Internet system in a way which will predict its future and the future of the people and things which will be impacted by the Internet. Many of our complex technological systems have this property. Examples might include: a new freeway system, nuclear power, high definition television, genetic engineering. For example, a freeway system is dynamic because a large number of players initiate actions beyond any central control. Driver A slows down to observe an accident, Driver B responds in an unpredictable way, depending on his skills, state of mind, sobriety perhaps, and other factors. The system, though structured to some degree, is in many ways on its own.

Intransparence means that some of the elements of a system cannot be seen, but can nevertheless affect the operation of the system. More complex systems can have many contributors to intransparence. In the Internet, for example, the list would include almost all of the users at a particular time, equipment failures at user sites, and local phenomena, such as weather, which affect use of the Internet at other locations. We need to understand that what we cannot see might still hurt us.

Finally, ignorance and mistaken hypotheses are always a possibility. Perhaps our model is simply wrong, faulty, misleading. This last problem is particularly interesting and important, because it is the one we can do something about. We can take steps to reduce our ignorance, to increase our understanding, as we shall discuss in Section 6. And in Section 7 we argue that we are obliged to do so.

Let's look next at some other perspectives on this problem. Peter Bernstein has addressed the matter from the viewpoint of probabilities and economics. He points out that economists have sometimes believed that deterministic forces drive our societies and their enterprises. More contemporary economists have seen less order. Bernstein puts it this way.

The optimism of the Victorians was snuffed out by the senseless destruction of human life on the battlefields (of the First World War), the uneasy peace that followed, and goblins let loose by the Russian Revolution. Never again would people accept Robert Browning's assurance that "God's in his heaven:/All's right with the world." Never again would economists insist that fluctuations in the economy were a theoretical impossibility. Never again would science appear so unreservedly benign, nor would religion and family institutions be so unthinkingly accepted in the western world. ...

Up to this point, the classical economists had defined economics as a riskless system that always produced optimal results....

Such convictions died hard, even in the face of the economic problems that emerged in the wake of the war. But a few voices were raised proclaiming that the world was no longer what once it had seemed. Writing in 1921, the University of Chicago economist Frank Knight uttered strange words for a man in his profession: 'There is much question as to how far the world is intelligible at all...It is only in the very special and crucial cases that anything like a mathematical study can be made.'

Edward Tenner takes still another perspective on the phenomenon of unanticipated and unintended consequences. He sees in some of our technologies a "revenge effect" in which our perverse technologies turn against us with consequences which exceed the good which had been planned.

Security is another window on revenge effects. Power door locks, now standard on most cars, increase the sense of safety. But they have helped triple or quadruple the number of drivers locked out over the last two decades - costing $400 million a year and exposing drivers to the very criminals the locks were supposed to defeat.

We shall return to this issue of perversity in Section 6 when we see how society attempts to deal with unintended consequences.

For Dorner on engineering, for Bernstein on economics, and for Tenner on perverse technologies, the message is the same. The world is not knowable and predictable. Its complexities are too great, its uncertainties beyond our understanding. Some unanticipated consequences are a necessary feature of all of our enterprises. But this does not mean that we should give up the effort to reduce uncertainty. We shall return to this in Section 7 on ethical implications.

In the next section we turn to some examples of such consequences. Then in Section 6 we consider how society responds to the problem of unanticipated consequences.

5. Some Examples

In this section we consider some anecdotes, some cases, of consequences which were not anticipated. We follow a historical path in this effort. We have already reached back to a time before the dawn of human history for a story of the invention of writing. Now we jump forward to the last two hundred years, touching on some of the effects of the Industrial Revolution, and moving on to questions which are being asked today about newly proposed technologies.

James Beniger's The Control Revolution: Technological and Economic Origins of the Information Society traces in some detail the evolution of technological development over the past two centuries, particularly in the United States. While Beniger stresses the role of and need for control in technology, he does not pay a great deal of explicit attention to consequences. But the implications of the changes in speed brought on by the Industrial Revolution are clear.

Speeding up the entire societal processing system...put unprecedented strain on...all of the technological and economic means by which a society controls throughputs to its material economy. Never before in history had it been necessary to control processes and movements at speeds faster than those of wind, water, and animal power - rarely more than a few miles an hour. Almost overnight, with the application of steam, economies confronted growing crises of control throughout the society. The continuing resolution of these crises, which began in America in the 1840s and reached a climax in the 1870s and 1880s, constituted nothing less than a revolution in control technology. Today the Control Revolution continues, engine of the emerging Information Society.

The Twentieth Century was to bring still another quantum leap in speed with the development of aviation. Perhaps no American is a better metaphor for the growth of technology in this century than Charles A. Lindbergh. Lindbergh's fascination with emerging technologies in the first decades of this century mirrored that of the nation as a whole, though in Lindbergh's case it was tempered by a love of nature.

I loved the farm, with its wooded river and creek banks, its tillage and crops, and its cattle and horses. I was fascinated by the laboratory's magic: the intangible power found in electrified wires, through which one could see the unseeable. Instinctively I was drawn to the farm, intellectually to the laboratory. Here began a conflict between values of instinct and intellect that was carried through my entire life, and that I eventually recognized as inherent in my civilization.

In 1927 Lindbergh symbolized the triumph of technology when he flew alone across the Atlantic Ocean, and electrified the world. But the euphoria did not last, as it never does. Two years later the Great Depression hit, and the decade to follow saw the rise of Hitler, and the terrible destruction brought on by the Second World War, with its technologies so dependent on aviation. Lindbergh began to question the idea of progress.

Sometimes the world above seems too beautiful, too wonderful, too distant for human eyes to see, like a vision at the end of life forming a bridge to death. Can that be why so many pilots lose their lives? Is man encroaching on a forbidden realm?...Will men fly through the sky in the future without seeing what I have seen, without feeling what I have felt? Is that true of all things we call human progress - do the gods retire as commerce and science advance?

As a college youth, I thought civilization could never be destroyed again, that in this respect our civilization was different from all others of the past. It had spread completely around the world; it was too powerful, too universal. A quarter-century later, after I had seen the destruction of high-explosive bombs and flown over the atomic-bombed cities of Hiroshima and Nagasaki, I realized how vulnerable my profession - aviation - had made all peoples. The centers of civilization were the centers of targets.

In the end Lindbergh found a reconciliation between the world of nature and spirit, and the world of technology, a balance between what Eliade has called the Sacred and the Profane. He came to see that a balance was essential, and that technology is good when it helps to preserve that balance.

Decades spent in contact with science and its vehicles have directed my mind and senses to areas beyond their reach. I now see scientific accomplishments as a path, not an end; a path leading to and disappearing in mystery...Rather than nullifying religion and proving that "God is dead", science enhances spiritual values by revealing the magnitudes and the minitudes - from cosmos to atom - through which man extends and of which he is composed.

As the undesired consequences of much of Twentieth Century technology became evident to Lindbergh, his response was not a rejection of technology, but rather a turning to the fundamental questions of why we are here. After his death Susan Gray put it this way.

Of all the man's accomplishments - and they were impressive - the most significant is that he spent most of his life considering and weighing the values by which he should live.

By the second half of the Twentieth Century we had become painfully aware that our technologies are not unmixed blessings, that they can have fundamental effects on the way we live. Let's look at some more prosaic examples from the past couple of decades. We'll start with one from Charles Handy's The Age of Unreason.

Microwave ovens were a clever idea, but their inventor could hardly have realized that their effect would ultimately be to take the preparation of food out of the home and into the, increasingly automated, factory; to make cooking as it used to be into a matter of choice, not of necessity; to alter the habits of our homes, making the dining table outmoded for many, as each member of the family individually heats up his or her own meal as and when they require it.

Tenner raises an example which is particularly interesting for two reasons. First, it is not clear which of a number of technologies is causing the unanticipated effects. Second, the issue is intensely political and interpersonal, partially because of the first reason. This is the matter of the effect which various erosion control technologies have on the condition of coastal beaches. In Tenner's words:

People concerned about the coasts are likely to dispute when and where environmental revenge effects are happening. Whatever more rigorous research may show, it is clear that the shoreline is a zone of chronic technological difficulty. Just as logging and fire suppression alter the forest's composition and fire ecology, compelling more and more vigilance, so beach protection feeds on itself by establishing a new order that needs constant and ever more costly maintenance.

Now let's turn very briefly to two examples of emerging technologies whose major unanticipated consequences we have yet to experience. We shall look at the Internet, and at cloning.

Actually, the Internet has already had a very significant impact on human life, involving the ways in which we meet each other, the ways we transact business, the ways we share information, and many more. Still, all of this is surely only the small tip of a huge iceberg, which seems very likely to change our lives in ways which we cannot today imagine. We cannot begin to anticipate the consequences of this technology.

The other technology which has stirred the public imagination in the waning years of this century is the cloning of animals, and the possibility that we may eventually be able to clone human beings. There has been no dearth of questions about the future raised by this subject. Here are just a few.

  1. Will cloning of human beings change what it means to be human?

  2. What good might come from the development of cloning?

  3. Should we halt research on cloning animals because it might lead to human cloning?

  4. If the government takes no action to control cloning could that decision be worse than a decision to take some specific action?

  5. Is it possible to control cloning effectively?

Each of these questions speaks to the uncertainty inherent in the actions which we might take in this field.

We have surveyed here just a few examples which make concrete the concerns which we may have about the unintended and unanticipated consequences of our actions and our enterprises. In the next section we ask what society as a whole does in the face of uncertainty.

But before we get to society, we really must say something about the individual. It is clear, but nonetheless worth stating, that each of us often has the opportunity and the right to reject the unanticipated consequences of a technology, by refusing to use the technology in ways which have undesirable consequences. Whether a technology is for good or for ill must be our choice. We can use it to enrich our lives or to let our lives lose all meaning. Sometimes a technology is so pervasive that we cannot escape it, but often we have the freedom to choose.

The microwave oven is a good example of a technology whose unanticipated consequences can be rejected if we so choose. We don't always have to accept the fast food approach if we choose not to. The violence of television and the pornography of the Internet are not forced on us. The contribution which the automobile makes to a sedentary life can often be rejected. If we become a slave to our telephone or other like media, it is not the telephone which should accept the blame. Discipline is still a virtue, for ourselves and for our children.

6. Society's Response

In this section we turn to the question of what individuals and societies do in light of the fact that their actions will have unanticipated consequences. We begin with an expansion of the discussion begun in Section 4 on why we fail to anticipate consequences fully. Then we ask what specific steps can be taken to reduce uncertainty. Finally, we ask how people respond to proposals for new actions, and how this helps set the course of our actions.

The first part of this section is based largely on an outstanding study by Robert K. Merton titled "The Unanticipated Consequences of Purposive Social Action". Merton chose his title carefully. His use of the word 'unanticipated' rather than 'unintended' helped motivate the brief discussion on terminology in Section 2 above. The word 'purposive' is meant to stress that the action under study involves the motives of a human actor and consequently a choice among various alternatives.

Merton begins by stating that there had been, up to the time of the article (1936), no systematic, scientific analysis of the subject. He surmises that this may be because for most of human history we attributed the unexpected to 'the gods', or 'fate', or divine interference. With the dawning of the Age of Reason, we began to believe that life could be understood. We didn't have to leave it to 'the gods'. It is curious that in this century that optimism, that faith in human understanding of the complexities of life, began to fade. Later we shall see this view espoused by the economist Frank H. Knight, in the first half of this century, and by another economist, Kenneth Arrow, writing in the second half of the century. In a sense we have come full circle, though today our uncertainty is not generally attributed to 'the gods' as such.

Merton cautions us that there are two pitfalls to be aware of in considering actions and consequences. The first is the problem of causal imputation, that is, the matter of determining to what extent particular consequences ought to be attributed to particular actions. The problem is exacerbated by the fact that consequences can have a number of causes. Let's consider an example.

Periodically the Federal Reserve Board changes the short-term interest rate in an attempt to maintain a balance between the health of the economy and the inflation rate. What makes the problem difficult is that inflation rates over a given period of time are dependent on many factors, including, for example: the short-term interest rate, consumer confidence, employment rates, technological productivity, and even the weather. So, if the inflation rate follows a certain path over a given one-year period, to what extent should we attribute the path to the actions taken by the Federal Reserve?

The second pitfall is that of determining the actual purposes of a given action. Suppose that in a given year the unemployment rate drops from 8% to 6%, and the President claims that this drop was due to a series of social programs pushed by the Administration. How do we know whether this was in fact the cause, or at least a major cause, of the result? This is an important question because it helps us decide whether this same action in the future may be desirable. It is also a very difficult question to answer. Merton suggests a test: "Does the juxtaposition of the overt action, our general knowledge of the actor(s) and the specific situation and the inferred or avowed purpose 'make sense'?"

It is clear that the major limitation to the correct anticipation of consequences is our state of knowledge. Our lack of adequate knowledge can be expressed in a number of classes of factors. The first class derives from the type of knowledge which is obtained in the sciences of human behavior. The problem is that such knowledge tends to be stochastic or conjectural in the sense that the consequences of a repeated act are not constant, but rather there is a range of consequences, any one of which may arise. Consider again the case of the Federal Reserve increasing the short-term interest rate by, say, 0.25%. This action may be repeated a number of different times over the years, with a number of different consequences to the inflation rate. In this sense the consequence is stochastic or random, though probably within a fairly small range. The reason that we do not get a dependable result is that many factors influence the inflation rate, as we saw earlier. We do not know exactly how these other factors will influence the rate, nor do we know how the various factors will interrelate with each other, with secondary effects on inflation.
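The idea that a repeated act has a range of consequences rather than a constant one can be sketched in a few lines of code. The model below is purely illustrative: the baseline inflation rate, the assumed effect of a rate hike, and the spread of the background factors are invented numbers with no economic authority, chosen only to show how the "same" action yields different outcomes when unseen factors vary.

```python
import random

def inflation_after_rate_hike(seed):
    """Toy model: the same 0.25% rate hike, repeated under randomly
    varying background conditions (confidence, employment, weather...),
    yields a range of inflation outcomes rather than one fixed result.
    All numbers are invented for illustration."""
    rng = random.Random(seed)
    baseline = 3.0                   # starting inflation rate, percent (invented)
    hike_effect = -0.3               # assumed average damping effect of the hike
    background = rng.gauss(0, 0.4)   # net effect of all the other, unseen factors
    return baseline + hike_effect + background

# Repeat the "same" action many times under different background conditions.
outcomes = [inflation_after_rate_hike(s) for s in range(1000)]
mean = sum(outcomes) / len(outcomes)
print(f"mean {mean:.2f}%, range {min(outcomes):.2f}% to {max(outcomes):.2f}%")
```

Run repeatedly, the toy model produces outcomes clustered around the average effect but scattered across a band, which is exactly the sense in which Merton calls such knowledge stochastic: any one of a range of consequences may arise from the same act.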

Another class of factors is error. We may, for example, err in our appraisal of the situation. We may err by applying an action, which has succeeded in the past, to a new situation. This is a particularly common mistake. It has been said that we are creatures of habit, and with good cause. Much of our lives is spent repeating the same or very similar actions (eating, driving, walking, etc.), and it is absolutely necessary that we have in place habitual ways of carrying out these actions to accomplish a desired end. It is natural, then, that we extend these habits to areas where situations have changed, and this is a problem of which we must be constantly aware. We may also err in the selection of a course of action. We may not choose the correct thing to do, or we may not carry out well what we have chosen. And finally, we may err by paying attention to only one factor affecting the consequences.

Another of Merton's factors which limits our ability to anticipate consequences is what he calls the "imperious immediacy of interest," which refers to situations where the actor's concern with the immediate foreseen consequences excludes consideration of longer-term consequences. One's actions may be rational in terms of immediate results, but irrational in terms of one's long-term interests or goals.

A related phenomenon has to do with one's basic values, and a strange twist of fate that sometimes arises. Suppose that one's immediate values call for frugality and hard work, a "Protestant ethic". Such an individual may well end up accumulating a significant amount of wealth and possessions. On the other side of the coin one whose values call for spending and conspicuous consumption may well end up with little wealth in the long run. This phenomenon is explored in some detail by Stanley and Danko.

The final point which Merton makes is that the very prediction of a consequence becomes a new factor in determining what will ensue as a result of some action. Prediction is a new variable in the complex of factors which lead to consequences. Consider again, for example, the case of the actions of the Federal Reserve Board. Suppose that the short-term interest rate is increased by 1/2% and that a major financial leader predicts that this will cause the stock market to drop by 10%. This prediction will almost certainly affect, in one way or another, the stock market.

In a beautiful brief biographical essay the economist Kenneth Arrow adds a further caution for the anticipator of consequences. Arrow believes that "most individuals underestimate the uncertainty of the world." The result is that we believe too easily in the clarity of our own interpretations. Arrow calls for greater humility in the face of uncertainty, and finds in the matter a moral obligation as well.

The sense of uncertainty is active; it actively recognizes the possibility of alternative views and seeks them out. I consider it essential to honesty to look for the best arguments against a position that one is holding. Commitments should always have a tentative quality.

A related idea has been expressed recently by Stephen Carter who believes that 'integrity' is more than acting out one's convictions. For Carter integrity has three parts:

  • discerning what is right and wrong

  • acting on what you have discerned

  • saying openly that you are acting on what you have discerned

The process of discerning requires an active search for the truth. We are not free just to act on our beliefs. We are obliged as well to actively challenge our beliefs, search for more appropriate beliefs, and adopt them (tentatively, of course) as we find them.

Next we turn to an analysis of ways in which we can reduce the uncertainties which are a part of our complex lives. This discussion is based largely on the pioneering work of Frank Knight in the first decades of this century. Knight argues that we can decrease uncertainty in four ways.

  • Increase knowledge

  • Combine uncertainties through large scale organization

  • Increase control of the situation

  • Slow the march of progress

Clearly an increase in knowledge can help us reduce uncertainty. We can carry out additional studies, analyses, experiments. The major problem with this approach is the cost in money and time. It also requires of course, as we have just seen, the recognition that we do not already have complete knowledge of the situation.

The typical way in which we combine uncertainties through large scale organizations is with one form or another of insurance. A group of people come together to protect each other against serious loss in case of a catastrophic consequence. The cost is in money and perhaps freedom.

It is often possible to reduce uncertainty through control. For example, the government might attempt to reduce inflation as a consequence of Federal Reserve actions through price controls. The use of controls generally has a monetary cost and a reduction of freedom.

Finally, we can reduce the level of uncertainty by slowing the march of progress. This might seem draconian at first, but in fact we do this all the time, when we take time to study a problem, do some more tests, write an environmental report, take the matter to the planning commission. Sometimes it is even more dramatic. Two days after the news was released about the successful cloning of a sheep, the President of the United States announced a moratorium on federal funding of any cloning work until the matter could be studied further.

There is a final point to be made, which has been noted by a number of people. As much as we would like to reduce much of the uncertainty in life, we would not choose to eliminate all uncertainty. A life in which everything was predictable, was known before the fact, would be a boring life indeed. We have been given a world to live in which is inherently unpredictable. That's the bad news and the good news, all at once.

In the final part of this section we consider the question of how we respond or react to proposals for change made by others, in light of the uncertainties and unanticipated consequences of all of our actions. Such proposals might be projects for massive social change - let's eliminate welfare programs - or they might be very personal individual decisions - let's buy that house by the lake. Whether it be in the halls of Congress or at the dining room table, when one person makes a proposal, another reacts.

Albert Hirschman has written a fascinating analysis of negative reactions to proposals, titled The Rhetoric of Reaction: Perversity, Futility, Jeopardy. The last three terms name types of reaction. Let's look at each in turn.

An argument from perversity says that the opposite will happen from that which you claim. Suppose you suggest that welfare be eliminated in order to save taxpayers' money. An argument from perversity might be that such an action will in fact have just the opposite consequence. If you eliminate welfare, crime will increase, more prisons will have to be built, and the cost to taxpayers will increase, not decrease. The argument which Thamus makes to Theuth in Section 3 is another example of an argument from perversity.

An argument from futility suggests that your action will have no effect on the situation which you are trying to change. You propose a law to regulate pornography on the Internet. An argument from futility would say that your law will not change anything, because it will be ignored.

A jeopardy argument claims that your proposal will place in jeopardy some valuable resource. You propose to put in a new freeway along the creek heading up toward Central City. Your opponent claims that such a freeway will ruin beloved old Riverside Park because of the noise and pollution.

7. Ethical Implications

Ethics is about what we ought to do. But how do you decide what you ought to do when the outcomes of your actions are uncertain? In this section we consider this problem. There are no simple answers to this very important question, but there are some general principles which we can set forth, and which may or may not be applicable in a particular situation. It is almost certainly not the case that all of them would apply in a given situation, since there will in general be conflicts between them. The purpose of this section is to state these principles and say a little about their application.

One should take advantage of the opportunities to reduce uncertainty which are discussed near the end of Section 6, to the extent that the costs permit. We are not obliged to exhaust all of our resources, whether of money, freedom, or anything else, to reduce uncertainty, because this could easily outweigh the good to be gained from the action. But to the extent that reductions can take place at reasonable cost, it seems morally prudent to make them. The judgment of what is an acceptable cost to bear may be very difficult to establish. In some cases it may be practical to use formal mathematical techniques, such as decision theory, to help arrive at an appropriate cost. In other cases, it may be so difficult to quantify goods and costs as to render such an approach useless. Of course, the effort to reduce uncertainty is not always an individual task, but is often a community effort, with the individual recognizing the richer and more diverse views brought by other individuals, as well as by the community in a collective sense.
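
Where goods and costs can be quantified, the decision-theoretic comparison is simple expected-value arithmetic. The numbers below are hypothetical assumptions chosen only to illustrate the form of the calculation: a project yields a benefit of 500 if all goes well, carries a 10% chance of an unanticipated consequence costing 2000, and a preliminary study costing 50 would cut that chance to 2%.

```python
# A minimal expected-value sketch of the decision-theory idea mentioned
# above. All figures are hypothetical, for illustration only.

def expected_net(benefit: float, harm: float, p_harm: float,
                 study_cost: float = 0.0) -> float:
    """Expected net value: benefit, minus expected harm, minus any
    money spent up front on reducing uncertainty."""
    return benefit - p_harm * harm - study_cost

act_now     = expected_net(benefit=500, harm=2000, p_harm=0.10)
study_first = expected_net(benefit=500, harm=2000, p_harm=0.02,
                           study_cost=50)

print(f"act immediately: {act_now:.0f}")      # 500 - 200      = 300
print(f"study first:     {study_first:.0f}")  # 500 - 40 - 50  = 410
```

Under these assumed numbers the study more than pays for itself; with a cheaper harm or a dearer study the conclusion reverses, which is exactly the cost judgment the paragraph above describes.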

All persons should share equally in the benefits of an action or a project, and they should also share equally in the risks due to unanticipated consequences. This is of course an ideal, since we cannot in general ensure that such equal distribution of benefits and risks is possible. In such cases other principles must be applied.

Persons who do not share in benefits of an action should not as a rule be subject to costs and risks. Justice suggests that burdens should not be borne by those who cannot benefit. While this is a good ideal, it is often difficult to apply in a particular case. If taken to the extreme it could make it difficult or impossible to take many of our actions. For example, it would forbid the building of a coal-burning power plant on the grounds that the emissions from the plant could affect the environment of the entire globe, including that of some individuals who could not expect to benefit from the electric power. There is a corollary to this principle.

Persons who gain some benefit from an action should be able to choose their level of cost and risk. There are many situations in which some stand to gain a great amount from a project, and others to gain relatively little. To the extent possible, each person should be able to choose his or her level of cost. It is in this way that one's rights may be preserved. Of course joint projects do not always allow this. Communities almost always form themselves in ways that allow the majority to tyrannize the minority. For example, a community may decide to initiate a flood control project. A given individual may be concerned that the project carries possible undesired consequences for him. A rights approach should support his concerns, but a common good approach might not.

Projects affecting more than one individual should provide the greatest balance of benefits over harms for all involved. This is the utilitarian principle, and it provides us with a way to approach the flood control project. On the basis of anticipated consequences we might well decide to go ahead with the project on utilitarian grounds. But it seems particularly important that we try to anticipate as many consequences as possible, because of the potential threat to the rights of some.

In the assessment of value to others, we ought to recognize that a resource has greater value to one who is poor than to one who is rich. If you give ten dollars to a poor man you improve his life much more than if you give ten dollars to a rich man. The nonlinearity of wealth should be considered in decisions on actions. And in fact the principle of justice suggests that our first thoughts should be for those who have the least in our society.
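
The "nonlinearity of wealth" can be illustrated with a logarithmic utility of wealth, a common textbook assumption rather than anything the paper itself specifies. Under that assumption, the same ten-dollar gift raises the utility of someone holding $100 by orders of magnitude more than that of someone holding $100,000:

```python
import math

# A hedged illustration of diminishing marginal utility of wealth,
# assuming (purely for illustration) a logarithmic utility function.

def utility_gain(wealth: float, gift: float) -> float:
    """Change in log-utility from receiving `gift` on top of `wealth`."""
    return math.log(wealth + gift) - math.log(wealth)

poor_gain = utility_gain(100.0, 10.0)      # $10 on top of $100
rich_gain = utility_gain(100_000.0, 10.0)  # $10 on top of $100,000

print(f"gain for the poor recipient: {poor_gain:.5f}")
print(f"gain for the rich recipient: {rich_gain:.5f}")
print(f"ratio: {poor_gain / rich_gain:.0f}x")
```

Any concave utility function gives the same qualitative result; the logarithm is chosen here only because it is simple and familiar.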

We ought to recognize that consequences of an action may extend over the long-term, and that the effects of such consequences on the actor or on others must not be ignored. It is not right to assume that our actions and their consequences are necessarily limited to "here and now". In fact their effects may extend over great distances, perhaps over the entire earth, and may extend in time for years, decades or perhaps even centuries. We are obliged to take these factors into consideration to the extent that we can.

Persons should face the truism that life is extremely complex and that all positions should be tentative. It is inherently dishonest to assert that one knows what is best when this is not the case, and it almost never is the case, again because of the complexity of life. Implicit in this principle is the requirement on all of us to seek to refine and improve our positions, to speak with humility about the consequences of our actions, and to act, when necessary, with a clear understanding that we do not 'own the truth'. Each of us has an inherent right to his or her world view or mental model of life. I do not have the right to assume that my view of the world is correct and your view is incorrect, to assume that 'I am right, and you are wrong.'

8. Conclusions

We have attempted in this paper to outline some of the salient features of the problem of unanticipated consequences. Because the problem is so common to all of what we do in life, it should not come as a surprise that the study has led to some very general results or positions which we might apply to all of our lives. We close here with a brief list of some of these results.

  • Life is very complex, more so than we admit.

  • All of our actions have unanticipated consequences.

  • We bear a moral obligation to take our positions tentatively, with humility in the light of our ignorance.

  • Short-term and long-term values are often different, often contradictory.

  • Uncertainty can be reduced but there is always a cost.

  • It is desirable to reduce uncertainty - but not to eliminate it.

In the end we are left with a dilemma - which is hardly surprising. We act with uncertainty about the consequences of our acts, and yet we have to act, for even to do nothing is to act, and there will be consequences. Change is an inherent part of life. Part of that change is natural, part is within our control. We have a right to act, but we also have an obligation to accept some level of responsibility for the unanticipated consequences of our actions. That level of responsibility is as hard to define as the unanticipated consequences themselves, but it is there nonetheless.


Kenneth Arrow, I Know a Hawk from a Handsaw, in Eminent Economists: Their Life Philosophies, Cambridge University Press, Cambridge, 1992

James Beniger, The Control Revolution: Technological and Economic Origins of the Information Society, Harvard University Press, Cambridge, 1986

Peter Bernstein, Against the Gods: The Remarkable Story of Risk, John Wiley and Sons, New York, 1996

Stephen Carter, Integrity, Basic Books, New York, 1996

Dietrich Dorner, The Logic of Failure: Why Things Go Wrong and What We Can Do To Make Them Right, Metropolitan Books, New York, 1989 (English translation, 1996)

Susan Gray, Charles A. Lindbergh and the American Dilemma: The Conflict of Technology and Human Values, Bowling Green State University Popular Press, 1988

Charles Handy, The Age of Unreason, Harvard Business School Press, Boston, 1990

Albert Hirschman, The Rhetoric of Reaction: Perversity, Futility, Jeopardy, Harvard University Press, Cambridge, 1991

Frank Knight, Risk, Uncertainty, and Profit, Houghton Mifflin Company, Boston, 1921

Howard Margolis, Dealing with Risk: Why the Public and the Experts Disagree on Environmental Issues, University of Chicago Press, Chicago, 1996

Robert Merton, The Unanticipated Consequences of Purposive Social Action, American Sociological Review, Vol. 1, Dec., 1936, pp. 894-904

Neil Postman, Technopoly: The Surrender of Culture to Technology, Vintage Books, New York, 1993

Thomas Stanley and William Danko, The Millionaire Next Door: The Surprising Secrets of America's Wealthy, Longstreet Press, Atlanta, GA, 1996

Edward Tenner, Why Things Bite Back: Technology and the Revenge of Unintended Consequences, Knopf, New York, 1996

Apr 6, 2005