Markkula Center for Applied Ethics

Technology Ethics

In Technology Ethics, the Markkula Center for Applied Ethics addresses issues arising from transhumanism and human enhancement ethics, catastrophic risk and ethics, religion and technology ethics, and space ethics.

Resources and training on AI ethics and corporate tech ethics are researched, created, and delivered in collaboration with Internet Ethics.


Overview of Technology Ethics

Brian Patrick Green, director of Technology Ethics, discusses the ethical issues that have arisen from technological advances in areas like human enhancements, artificial intelligence, and synthetic biology.

Commentary on Technology Ethics

What is Technology Ethics?

By Brian Patrick Green, director of Technology Ethics

Technology ethics is the application of ethical thinking to the practical concerns of technology. The reason technology ethics is growing in prominence is that new technologies give us more power to act, which means that we have to make choices we didn't have to make before. While in the past our actions were involuntarily constrained by our weakness, now, with so much technological power, we have to learn how to be voluntarily constrained by our judgment: our ethics.

For example, in the past few decades many new ethical questions have appeared because of innovations in medical, communications, and weapons technologies. There used to be no need for brain death criteria, because we did not have the technological power to even ask whether someone was dead once their brain stopped functioning; they would soon have died in any case. But with the development of artificial means of maintaining circulation and respiration, this became a serious question. Similarly, with communications technologies like social media, we are still figuring out how to behave when we have access to so many people and so much information, and the recent problems with fake news show how quickly things can go wrong on social media if bad actors have access to the public. Likewise, with nuclear weapons, we never needed to ask how we should avoid a civilization-destroying nuclear war because such a war simply wasn't possible; once those weapons were invented, we did need to ask that question, and answer it, because we were, and still are, at risk of global disaster.

These changes obviously present some powerful risks, and we should ask ourselves whether we think such changes are worthwhile – because we do have choices in the technologies we make and live by. We can govern our technologies by laws, regulations, and other agreements. Some fundamentally ethical questions that we should be asking of new technologies include: What should we be doing with these powers now that we have developed them? What are we trying to achieve? How can this technology help or harm people? What does a good, fully human life look like? As we try to navigate this new space, we have to evaluate what is right and what is wrong, what is good and what is evil.

As an example, artificial intelligence is a field of technological endeavor that people are exploring in order to make better sense of the world. Because we want to make sense out of the world in order to make better choices, in a way, AI has a fundamentally ethical aspect. But here we need to not mistake efficiency for morality – just because something is more efficient does not mean that it is morally better, though often efficiency is a dramatic benefit to humanity. For example, people can make more efficient weapons – more efficient at killing people and destroying things – but that does not mean they are good or will be used for good. Weapons always reflect, in an ultimate sense, a form of damage to the common good, whether the weapon is ever used or not (because its cost could have been spent on something better).

Returning to AI, many of the organizations exploring AI have goals in mind that are not necessarily the best goals for everyone. They are looking for something good, whether it is making sense of large datasets or improving advertising. But is that ultimately the best use for the technology? Could we perhaps apply it instead to social issues such as the best way to structure an economy or the best way to promote human flourishing? There are lots of good uses of AI, but are we really aiming towards those good uses, or are we aiming towards lower goods?

Additionally, we’ve become so powerful now that we not only have the power to destroy ourselves, but we also have the ability to change ourselves. With CRISPR and synthetic biology, we can choose to genetically modify people, and by implanting biomedical devices into our bodies and brains we can change how we function and think. Right now, most medical interventions are done for therapy, but in the future, we'll have to consider enhancement, as well. At some point we could potentially even change human nature.

That’s a tremendous power, one that must be matched with serious reflection on ethical principles such as dignity, fairness, and the common good. The temptation to power without ethics is something we need to avoid now more than ever. If one is powerful without goodness, one becomes dangerous and capable of very evil actions. In fact, such dangerous power may well destroy itself and perhaps take many innocent lives with it.

As long as there is technological progress, technology ethics is not going to go away; in fact, questions surrounding technology and ethics will only grow in importance. As we travel this path into the future together, we will choose the kind of future we create. Given our growing technological power, we need to put more and more attention towards ethics if we want to live in a better future and not a worse one.

This article is adapted from the video What Is Technology Ethics?

Ethics in Tech Practice

Ethics in Technology Practice aims to provide free materials to encourage and support ethics training workshops in technology companies.

Technology and the Ethical Imagination

The Markkula Center for Applied Ethics and The Tech Museum of Innovation collaborate on integrating ethics into the museum's exhibits, events, and educational programs.

IT, Ethics, and Law Lecture Series

Since 2005, the Ethics Center has collaborated with the SCU High Tech Law Institute to sponsor "IT, Ethics, and Law"—a series of presentations on topics in information technology. Speakers have included Jonathan Zittrain, co-founder of the Berkman Center for Internet and Society; Craig Newmark, founder of craigslist; and Kara Swisher, journalist.

Partnership on AI

The Partnership focuses on artificial intelligence, bringing together companies such as Amazon, Facebook, Google, and Apple, with academic and research organizations and nonprofits, to collaborate on addressing common concerns.
