Ethical Lenses to Look Through
Shannon Vallor, Brian Green, Irina Raicu
Conceptual frameworks drawn from ethical theories that have shaped the study of ethics over centuries can help you recognize ethical issues when you encounter them, and can also help you to describe them. In this way, a basic grasp of ethical theory can work like a field guide for practical ethical concerns.
Ethical Lenses to Practice Looking Through in Engineering/Design Work:
There are many ethical theories available to frame our thinking; here we focus on just a few broad types that are widely used by both academic and professional ethicists, including tech ethicists in particular: deontological, consequentialist, and virtue ethics. However, these are ethical theories developed largely in the context of Western philosophical thought; our selection must not be taken as a license to ignore the rich range of ethical theories and frameworks available globally and in other cultural contexts. A brief discussion of such frameworks is included on pp. 10-12 of this guide; see also Vallor (2016).
Each section below includes a brief overview of the ethical theory and its best known defenders, examples of its relevance to technologists, and a list of helpful questions that technologists/design teams can ask to provoke ethical reflection through that lens.
Deontological ethical frameworks focus on moral rules, rights, principles and duties. They tend to be more universalist than other kinds of ethical theories (that is, the rules/principles are usually intended to apply to all possible cases). This creates a special kind of challenge when we encounter rules, rights, principles, or duties that conflict, that is, where it is impossible to follow one moral rule without breaking another, or to fulfill our moral duties to one stakeholder without failing to fulfill a moral duty to another. In such cases we must determine which rules, rights, principles, and duties carry the most ethical force in that situation, and/or, when ‘no-win’ scenarios (often called ‘wicked’ moral situations) occur, find a solution that minimizes the ethical violation. Thus deontological systems still require careful ethical reflection and judgment to be used well; they are not a simple moral checklist that can be mindlessly implemented.
Deontological systems of ethics range from the most general, such as the ‘golden rule,’ to theories such as that of W.D. Ross (1877-1971), which offers a longer list of seven pro tanto moral duties (duties that can be overridden only by other, morally weightier duties). Ross’s seven duties are fidelity; reparation; gratitude; justice; beneficence; non-injury; and self-improvement.
Even more widely used by ethicists is the categorical imperative, a single deontological rule presented by Immanuel Kant (1724-1804) in three formulations. The two most commonly used in practical ethics are the formula of the universal law of nature, and the formula of humanity. The first formula tells us that we should only act upon principles/maxims that we would be willing to impose upon every rational agent, as if the rule behind our practice were to become a universal law of nature that everyone, everywhere, had to follow. If we would reject such a universal law (e.g., ‘every rational person shall seize whatever property attracts them’), then our principle is morally corrupt and we must not act upon it ourselves (‘I should not grab this man’s wallet’). The second formula, the formula of humanity, states that I should always treat other persons as ends in themselves (dignified beings to be respected), never merely as means to an end (i.e., never as mere tools/objects to be manipulated for my purposes). So, benefiting from my action toward another person is only permissible if their own autonomy and dignity are not violated in the process, and if the person being treated as a means would consent to such treatment as part of their own autonomously chosen ends (a student may thus benefit from their teacher’s instruction and mentoring without it being immoral, as the teacher freely chooses to provide these benefits as part of their own life goals, whereas the teacher presumably would not consent to the student benefiting by cheating on the exam).
The ethical issues and concerns frequently identifiable from within this framework, that is, by looking through this kind of ethical ‘lens,’ include, but are not limited to:
Autonomy (the extent to which people can freely choose for themselves)
Dignity (the extent to which people are valued in themselves, not as objects with a price)
Rights (people’s entitlements to natural, human, civil, economic, & moral protection)
Duties (our obligations to care for, respect, preserve, or aid the interests of others)
Justice (the social fulfillment of moral rights and duties and protection of rights/dignity)
Fairness (a morally justifiable distribution of benefits, goods, harms, and risks)
Transparency (honest, open, and informed conditions of social treatment/distribution)
Universality/Consistency (holding all persons and actions to the same moral standards)
Examples of Deontological Ethical Issues in Tech Practice:
In what way does a virtual banking assistant that is deliberately designed to deceive users, for example by actively representing itself as a human, violate a deontological moral rule or principle, such as the Kantian imperative to never treat a person as a mere means to an end? Would people be justified in feeling wronged by the bank upon discovering the deceit, even if they had not been financially harmed by the software? Does a participant in a commercial financial transaction have a moral right not to be lied to, even if a legal loophole means there is no legal right violated here?
Or, how does a digital advertising app that allows people to place custom housing or job ads that target only people under 40, or only people in specific zip codes, impact fairness and justice?
Looking through the deontological lens means anticipating contexts in which violations of autonomy, dignity, fairness, or trust might show up, regardless of whether there was malign intent or whether material harms were done to people’s interests. Many violations of such duties in the tech sector can be avoided by seeking meaningful and ongoing consent from those who are likely to be impacted (not necessarily just the end user) and offering thorough transparency about the design, terms, and intentions of the technology.
However, it is important to remember that deontological concerns often need to be balanced with other kinds of concerns. For example, autonomy is not an unconditional good (you don’t want to empower your users to do anything they want). When user autonomy poses unacceptable moral risks, you need to balance this value with appropriately limited moral paternalism (which is also unethical in excess). An excellent example of this is the increasingly standard design requirement for strong passwords.
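The password example can be made concrete with a small sketch. The specific rules below are invented purely for illustration (not a vetted security policy); the point is the design pattern: block only a narrow band of clearly self-harming choices, and leave the rest of the user’s option space intact.

```python
# Hypothetical password policy illustrating limited design paternalism.
# The thresholds and blocklist here are illustrative only, not a
# recommended security standard.
def password_issues(password: str) -> list[str]:
    """Return the reasons a password is rejected; an empty list means accepted."""
    issues = []
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if password.lower() in {"password", "123456789012", "qwertyuiop12"}:
        issues.append("on a common-password blocklist")
    if password.isalpha() or password.isdigit():
        issues.append("uses only one character class")
    return issues

# The user remains free to choose among a vast space of passwords
# (autonomy preserved), while a handful of predictably self-harming
# choices are ruled out (paternalism kept appropriately limited).
print(password_issues("password"))                        # rejected, with reasons
print(password_issues("correct-horse-battery-staple-9"))  # accepted: []
```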
Deontological Questions for Technologists that Illuminate the Ethical Landscape:
What rights of others & duties to others must we respect in this context?
How might the dignity & autonomy of each stakeholder be impacted by this project?
What considerations of trust and of justice are relevant to this design/project?
Does our project treat people in ways that are transparent & to which they would consent?
Are our choices/conduct of the sort that I/we could find universally acceptable?
Does this project involve any conflicting moral duties to others, or conflicting stakeholder rights? How can we prioritize these?
Which moral rights/duties involved in this project may be justifiably overridden by higher ethical duties, or more fundamental rights?
Consequentialist ethics can seem fairly straightforward—if you want to know if an action is ethical, this framework tells us, then just look at its consequences. It can be difficult to know what kinds of consequences will morally justify an act, and even harder to know what the consequences of our actions will actually be. But in many cases we do know, or have a pretty good idea, and in many such cases the consequences can be readily seen as morally choiceworthy (‘this is the best thing for us to do’), morally permissible (‘it’s not wrong for us to do this’), or morally impermissible (‘we shouldn’t do this, it’s wrong’). When the moral consequences of a technological choice are sufficiently foreseeable, we have an ethical responsibility to consider them.
Utilitarians are the best known type of consequentialist. Utilitarian ethics, in its most complete formulation by John Stuart Mill (1806-1873), asks us to weigh the overall happiness or welfare that our action is likely to bring about, for all those affected and over the long term. Happiness is measured by Mill in terms of aggregate pleasure and the absence of pain. Physical pleasure and pain are not the most significant metrics, although they count; but Mill argues that at least for human beings, intellectual and psychological happiness are of an even higher moral quality and significance.
Utilitarianism is attractive to many engineers because in theory, it implies the ability to quantify the ethical analysis and select for the optimal outcome (generating the greatest overall happiness with the least suffering). In practice, however, this is often an intractable or ‘wicked’ calculation, since the effects of a technology tend to spread out indefinitely in time (should we never have invented the gasoline engine, or plastic, given the now devastating consequences of these technologies for the planetary environment and its inhabitants?); and across populations (will the invention of social media platforms turn out to be a net positive or negative for humanity, once we take into account all future generations and all the users around the globe yet to experience its consequences?)
The requirements to consider equally the welfare of all affected stakeholders, including those quite distant from us, and to consider both long-term and unintended effects (where foreseeable), make utilitarian ethics a morally demanding standard. In this way, utilitarian ethics does NOT equate to or even closely resemble common forms of cost-benefit analysis in business, where only physical and/or economic benefits are considered, and often only in the short term, or for a narrow range of stakeholders. Thus many people who think they are just being ‘good utilitarians’ when making narrow cost-benefit analyses of a business practice are deeply mistaken. Moral consequences go far beyond economic good and harm. They include not only physical but also psychological, emotional, cognitive, moral, institutional, environmental, and political forms of well-being, injury, or degradation.
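The gap between a narrow cost-benefit tally and a fuller utilitarian accounting can be seen in a deliberately toy model. Every stakeholder, impact category, and score below is invented for illustration; real moral weighing cannot be reduced to a spreadsheet, but even this caricature shows how narrowing the scope of stakeholders and impact types inflates the apparent benefit.

```python
# Toy model contrasting a narrow cost-benefit analysis with a broader
# utilitarian accounting. All stakeholders, impact categories, and
# numeric scores are hypothetical placeholders.
impacts = {
    # stakeholder: {impact_type: score}; positive = benefit, negative = harm
    "shareholders":  {"economic": +8},
    "primary_users": {"economic": +2, "psychological": -3},
    "non_users":     {"environmental": -4},
    "future_users":  {"cognitive": -2, "economic": +1},
}

def narrow_cost_benefit(impacts):
    """Count only economic effects on shareholders and primary users."""
    return sum(
        score
        for who in ("shareholders", "primary_users")
        for kind, score in impacts[who].items()
        if kind == "economic"
    )

def broad_utilitarian(impacts):
    """Count every impact type, for every affected stakeholder."""
    return sum(score for who in impacts for score in impacts[who].values())

print(narrow_cost_benefit(impacts))  # 10 -> looks clearly worthwhile
print(broad_utilitarian(impacts))    # 2  -> far more marginal
```

The same project looks very different once distant stakeholders and non-economic harms enter the sum; and this sketch still omits the hardest parts of the real analysis (uncertainty, long time horizons, and incommensurable goods).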
Another way to frame a consequentialist analysis is to focus on the common good, instead of the aggregate welfare/happiness of individuals. The distinction is subtle but important. Utilitarians consider likely injuries or benefits to discrete individuals, then sum those up to measure aggregate social impact. But common good consequentialists look at the impact of a practice on the health and welfare of communities or humanity as functional wholes. Welfare as measured here goes beyond personal happiness to include things like political and public health, security, liberty, sustainability, education, or other values deemed critical to flourishing community life. Thus a technology that might seem to satisfy a utilitarian (by making most individuals personally happy, say through neurochemical intervention) might fail the common good test if the result was a loss of community life and health (for example, if those people spent their lives detached from others--like addicts drifting in a technologically-induced state of personal euphoria).
Common good ethicists will also look at the impact of a practice on morally significant institutions that are critical to the life of communities, for example, on government, the justice system, or education, or on supporting ecosystems and ecologies. Common good frameworks help us avoid notorious tragedies of the commons, where rationally seeking to maximize good consequences for every individual leads to damage or destruction of a system that those individuals depend upon to thrive. Common good frameworks also have more in common with cultural perspectives, such as those widespread in East Asia, in which promoting social harmony and stable functioning may be seen as more ethically important than maximizing the autonomy and welfare of isolated individuals.
Reconciling the utilitarian and common good approaches is complex; for practical purposes we can view them as complementary lenses that provoke us to consider both individual and communal welfare, even when these are in tension. Since there is often no ‘easy’ consequentialist analysis, we must muddle through and do our best to think about the consequences that are most ethically significant, severe, and/or likely to occur. Using the consequentialist lens while doing tech ethics is like watching birds while using special glasses that help us zoom out from an individual bird to survey a dynamic network (the various members of a moving flock), and then try to project the overall direction of the flock’s travel as best we can (is this project overall going to make people’s lives better?), while still noticing any particular members that are in special peril (are some people going to suffer greatly for a trivial gain for the rest?).
Even though, in some sense, consequentialism is actually the hardest to employ (because of the uncertainties in the calculation, and because we are supposed to choose the most benevolent or beneficial choice available), it is one that the public will expect technologists to consider in any design context. Outraged public responses such as: “How did you not know that weak default login passwords/dash-mounted video players/facial-recognition algorithms trained mostly on white faces, etc., would result in people suffering harm?” reflect the same moral criticism as the question “How did you not know that this highly toxic chemical you included in a popular cosmetic would damage public health?” People rightly expect designers to think about the moral consequences of their design choices for all those affected, downstream as well as in the short-term primary use case.
Example of Consequentialist Ethical Issues in Tech Practice:
When we designed apps to be ‘sticky,’ to keep users coming back to our apps or devices, we told ourselves that this can’t possibly be doing our users moral harm. After all, they wouldn’t come back to the app unless it was giving them some benefit they wanted, right? And they wouldn’t do it if that benefit weren’t worth more to them than whatever they were taking their time and attention away from, correct?
By now, most technologists have had to abandon this level of ethical naivete about issues like technology addiction. While deontological issues are also in play here (since addiction compromises people’s cognitive autonomy), even a consequentialist reading tells us we went wrong. Leading device manufacturers now increasingly admit this by building tools to fight tech addiction. We have graduated to a more sophisticated consequentialist perspective. From this perspective, the rising tide of depression, isolation, and anxiety among those at risk from technological addiction shows us where we went wrong.
We falsely assumed that people’s technological choices were reliably correlated with their increasing happiness and welfare. People make themselves unhappy with their choices all the time, and we are subject to any number of mental compulsions that drive us to choose actions that promise happiness but will not deliver, or will deliver only a very short-term, shallow pleasure while depriving us of a more lasting, substantive kind. We also now know that technology addiction can harm the common good, through damaging family and civic ties and institutions that are essential for healthy communities. Addiction-by-design is morally wrong, inexcusably so--and had we used a richer, more careful consequentialist framework for thinking about design, we might have seen that sooner.
Consequentialist Questions for Technologists that Illuminate the Ethical Landscape:
Who will be directly affected by this project? How? Who will be indirectly affected?
Will the effects in aggregate likely create more good than harm, and what types of good and harm? What are we counting as well-being, and what are we counting as harm/suffering?
Is our view of these concepts too narrow, or are we thinking about all relevant types of harm/benefit (psychological, political, environmental, moral, cognitive, emotional, institutional, cultural, etc.)?
How might future generations be affected by this project?
What are the most morally significant harms and benefits that this project involves?
Does this project benefit many individuals, but only at the expense of the common good?
Does it do the opposite, by sacrificing the welfare or key interests of individuals for the common good? Have we considered these tradeoffs, and which are ethically justifiable?
Do the risks of harm from this project fall disproportionately on the least well-off or least powerful in society? Will the benefits of this project go disproportionately to those who already enjoy more than their share of social advantages and privileges?
Have we adequately considered ‘dual-use’ and downstream effects other than those we intend?
Have we fallen victim to any false dilemmas or imagined constraints? Or have we considered the full range of actions/resources/opportunities available to us that might boost this project’s potential benefits and minimize its risks?
Are we settling too easily for an ethically ‘acceptable’ design or goal (‘do no harm’), or are there missed opportunities to set a higher ethical standard and generate even greater benefits?
Virtue ethics is more difficult to encapsulate than either deontological or consequentialist frameworks, which can be a hindrance but also a source of practical richness and power. Virtue ethics essentially recognizes the necessary incompleteness of any set of moral rules or principles, and the need for people with well-habituated virtues of moral character and well-cultivated, practically wise moral judgment to fill the gap.
Aristotle (384-322 B.C.) was correct when he stated that ethics cannot be approached like mathematics; there is no algorithm for ethics, and moral life is not a well-defined, closed problem for which one could design a single, optimal solution. It is an endless task of skillfully navigating a messy, open-ended, constantly shifting social landscape in which we must find ways to maintain and support human flourishing with others, and in which novel circumstances and contexts are always emerging that call upon us to adapt our existing ethical heuristics, or invent new, bespoke ones on the spot. Virtue ethics is benefiting from a recent renewal of popularity, especially among technology ethicists, precisely for its power to navigate unprecedented ethical landscapes for which our existing moral rules and principles may not have adequately prepared us.
Virtue ethics does offer some guidance to structure the ethical landscape. It asks us to identify those specific virtues—stable traits of character or dispositions—that morally excellent exemplars in our context of action consistently display, and then identify and promote the habits of action that produce and strengthen those virtues (and/or suppress or weaken the opposite vices). So, for example, if honesty is a virtue in designers and engineers (and a tendency to falsify data/results a vice), then we should think about what habits of design practice tend to promote honesty, and encourage those. As Aristotle says, ‘we are what we repeatedly do.’ We are not born honest or dishonest, but we become one or the other only by forming virtuous or vicious habits with respect to the truth.
Virtue ethics is also highly context-sensitive; each moral response is unique and even our firmest moral habits must be adaptable to individual situations. For example, a soldier who has a highly-developed virtue of courage will in one context run headlong into a field of open fire while others hang back in fear; but there are other contexts in which a soldier who did that would not be courageous, but rash and stupid, endangering the whole unit. The virtuous soldier reliably sees the difference, and acts accordingly—that is, wisely, finding the appropriate ‘mean’ between foolish extremes (in this case, the vices of cowardice and rashness), where those are always relative to the context (an act that is rash in one context may be courageous in another).
While deontological and consequentialist frameworks focus our moral attention outward, onto our future technological choices and/or their consequences, virtue ethics reminds us also to reflect inward—on who we are, who we want to become as morally expert technologists, and how we can get there.
It also asks us to describe the model of moral excellence in our field that we are striving to emulate, or even surpass. What are the habits, skills, values, and character traits of an exemplary engineer, or an exemplary designer? What happens when we discover that an accomplished, highly regarded engineer possessed of great technical prowess and intelligence has habitually been falsifying documentation out of laziness or spite? Or using subpar materials to meet the customer specs or delivery date? Or ignoring, dismissing, and covering up serious risks to public safety or welfare that junior members of the team brought to her attention? Even if no one was (yet) harmed by these habitual failings, discovering them would greatly diminish the professional and moral excellence of that person, and we would be right to hesitate to put them in charge of the next big project.
Virtue ethics also incorporates a unique element of moral intelligence, called practical wisdom, that unites several faculties: moral perception (awareness of salient moral facts and events), moral emotion (feeling the appropriate moral responses to the situation), and moral imagination (envisioning and skillfully inventing appropriate, well-calibrated moral responses to new situations and contexts).
Finally, virtue ethics is a type of moral framework that cuts across many cultural traditions. While each cultural tradition has its own vision of the good life, and of the ‘ideal moral character’ and its virtues, the structures of different cultural instantiations of virtue ethics (from ancient Greek to classical Confucian, Buddhist, or Judeo-Christian and modern European) share many family resemblances, especially the emphasis on moral habituation, self-cultivation, and practical wisdom (see Vallor 2016).
Examples of Virtue-Ethical Issues in Tech Practice:
Did giving people access to news sources that were subsidized only by online advertising, elevated to mass visibility by popularity and page views rather than by professional journalists, and vulnerable to being gamed by armies of bots, trolls, and foreign adversaries, help to make us wiser, more honest, more compassionate, and more responsible citizens? Or did it have very different effects on our intellectual and civic virtues?
There is probably no better conceptual lens than virtue ethics for illuminating the problematic effects of the attention economy and digital media. It helps to explain why we have seen so many pernicious moral effects of this situation even though the individual acts of social media companies appeared morally benign; no individual person was wronged by having access to news articles on various social media platforms, and even the individual consequences didn’t seem so destructive at the time. What happened, however, is that our habits were gradually altered in such a way that the civic virtues that our old media habits (very imperfectly) sustained were left to degrade when those habits were pushed out by new media habits not designed to sustain the same civic function.
Not all technological changes must degrade our virtues, of course. Consider the ethical prospects of virtual-reality (VR) technology, which are still quite open. As VR environments become commonplace and easy to access, might people develop stronger virtues of empathy, civic care, and moral perspective, by experiencing others’ circumstances in a more immersive, realistic way? Or will they instead become even more numb and detached, walking through others’ lives like players in a video game? Most important is this question: what VR design choices would make the first, ethically desirable outcome more likely than the second, ethically undesirable one?
Virtue-Driven Questions for Technologists that Illuminate the Ethical Landscape:
What design habits are we regularly embodying, and are they the habits of excellent designers?
Would we want future generations of designers to use our practice as the example to follow?
What habits of character will this design/project foster in users and other affected stakeholders? Will this design/project weaken or disincentivize any important human habits, skills, or virtues that are central to human excellence (moral, political, or intellectual)? Will it strengthen any?
Will this design/project incentivize any vicious habits or traits in users or other stakeholders?
Are our choices and practices generally embodying the appropriate ‘mean’ of design conduct (relative to the context)? Or are they extreme (excessive or deficient) in some ways?
What are the relevant social contexts that this project/design will enter into/impact? Has our thinking about its impact been too focused on one context, to the exclusion of other contexts where its impact may be very different in morally significant ways?
Is there anything unusual about the context of this project that requires us to reconsider or modify the normal ‘script’ of good design practice? Are we qualified and in a position to safely and ethically make such modifications to normal design practice, and if not, who is?
What will this design/project say about us as people in the eyes of those who receive it? How confident are we that we will individually, and as a team/organization, be proud to have our names associated with this project one day?
Has our anticipated pride in this work (which is a good thing) blinded us to, or caused us to heavily discount, any ethically significant risks we are taking? Or are we seeing clearly?
Global Ethical Perspectives
There is no way to offer an ‘overview’ of the full range of ethical perspectives and frameworks that the human family has developed over the millennia since our species became capable of explicit ethical reflection. What matters is that technologists remain vigilant and humble enough to remember that whatever ethical frameworks may be most familiar or ‘natural’ to them and their colleagues, they amount to a tiny fraction of the ways of seeing the ethical landscape that their potential users and impacted communities may adopt. This does not mean that practical ethics is impossible; on the contrary, it is a fundamental design responsibility that we cannot make go away. We turn our back on it or we attempt to fulfill it: those are the only choices, and only the latter can be justified.
But it is helpful to remember that the moral perspectives in the conference room/lab/board meeting are never exhaustive, and that they are likely to be biased in favor of the moral perspectives most familiar to whoever happens to occupy the dominant cultural majority in the room, company, or society. Yet the technologies we build don’t stay in our room, our company, our community, or our nation.
New technologies seep outward into the world and spread their effects to peoples and groups who, all too often, don’t get a fair say in the moral values that those technologies are designed to reinforce or undermine in their communities. And yet, we cannot design in a value-neutral way—that is impossible, and the illusion that we can do so is even more dangerous than knowingly imposing values on others without their consent, because it does the same thing, just without the accountability.
You will design with ethical values in mind; the only question is whether you will do so in ways that are careful, reflective, explicit, humble, transparent, and responsive to stakeholder feedback, or in ways that are arrogant, opaque, and irresponsible.
While moral and intellectual humility requires us to admit that our ethical perspective is always incomplete and subject to cognitive and cultural blind spots, the processes of ethical feedback and iteration described in part 7 of this project’s Ethical Toolkit can be calibrated to invite a more diverse/pluralistic range of ethical feedback, as our designs spread to new regions, cultures, and communities.
Examples of Global Ethical Issues in Tech Practice:
Facial-recognition, social media, AI and other digital technologies are being used in a grand project of social engineering in China to produce a universal ‘social-credit’ system in which the social status and privileges of individuals will be powerfully enhanced or curtailed depending on the ‘score’ that the government system assigns them as a measure of their social virtue--their ability to promote a system of ‘social harmony.’
Many Western ethicists view this system as profoundly dystopic and morally dangerous, but within China many will embrace it, within a cultural framework that values social harmony as the highest moral good. How should technologists respond to invitations to assist China in this project, or to assist other nations who might want to follow China’s lead? What ethical values should guide them? Should they simply accede to the local value-system, or be guided by their own personal values, or the values of the nation in which they reside, or the ethical principles set out by their company, if there are any?
This example illustrates the depth of the ethical challenge presented by global conflicts of ethical vision. But notice that there is no way to evade the challenge. A decision must be made, and it will not be ethically neutral no matter how it gets made. A decision to ‘follow the profits’ and ‘put ethics aside’ is not a morally neutral decision, it is one that assigns profit as the highest or sole value. That in itself is a morally-laden choice for which one is responsible, especially if it leads to harm. Every designer, engineer, manager, and leader of a company needs to begin to think about what values and principles they want to be defined and remembered by, for better or for worse.
And it may be helpful, where possible, to seek ethical dialogue across cultural boundaries and begin to seek common ground with technologists in other cultural spaces. Such dialogues will not always produce ethical consensus, but they can help give shape to the conversation we must begin to have about the future of global human flourishing in a technological age, one in which technology increasingly links our fortunes together.
Questions for Technologists that Illuminate the Global Ethical Landscape:
Have we invited and considered the ethical perspectives of users and communities other than our own, including those quite culturally or physically remote from us? Or have we fallen into the trap of “designing for ourselves”?
How might the impacts and perceptions of this design/project differ for users and communities with very different value-systems and social norms than those local or familiar to us? If we don’t know, how can we learn the answer?
The vision of the ‘good life’ dominant in tech-centric cultures of the West is far from universal. Have we considered the global reach of technology and the fact that ethical traditions beyond the West often emphasize values such as social harmony and care, hierarchical respect, honor, personal sacrifice, or social ritual far more than we might?
In what cases should we refuse, for compelling ethical reasons, to honor the social norms of another tradition, and in what cases should we incorporate and uphold others’ norms? How will we decide, and by what standard or process?
References and Further Reading
Aristotle [350 B.C.E.] (2014). Nicomachean Ethics: Revised Edition. Trans. Roger Crisp. Cambridge: Cambridge University Press.
Ess, Charles (2014). Digital Media Ethics: Second Edition. Cambridge: Polity Press.
Jasanoff, Sheila (2016). The Ethics of Invention: Technology and the Human Future. New York: W.W. Norton.
Kant, Immanuel (1997). Groundwork of the Metaphysics of Morals. Trans. Mary Gregor. Cambridge: Cambridge University Press.
Lin, Patrick, Abney, Keith and Bekey, George, eds. (2012). Robot Ethics. Cambridge, MA: MIT Press.
Lin, Patrick, Abney, Keith and Jenkins, Ryan, eds. (2017). Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. New York: Oxford University Press.
Mill, John Stuart (2001). Utilitarianism. Indianapolis: Hackett.
Robison, Wade L. (2017). Ethics Within Engineering: An Introduction. London: Bloomsbury.
Sandler, Ronald (2014). Ethics and Emerging Technologies. New York: Palgrave Macmillan.
Selinger, Evan and Frischmann, Brett (2017). Re-Engineering Humanity. Cambridge: Cambridge University Press.
Shariat, Jonathan and Saucier, Cynthia Savard (2017). Tragic Design: The True Impact of Bad Design and How to Fix It. Sebastopol: O’Reilly Media.
Tavani, Herman (2016). Ethics and Technology: Controversies, Questions, and Strategies for Ethical Computing, Fifth Edition. Hoboken: Wiley.
Vallor, Shannon (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press.
Van de Poel, Ibo, and Royakkers, Lamber (2011). Ethics, Technology, and Engineering: An Introduction. Hoboken: Wiley-Blackwell.
The Ethics of Innovation (2014 post from Chris Fabian and Robert Fabricant in Stanford Social Innovation Review, includes 9 principles of ethical innovation) https://ssir.org/articles/entry/the_ethics_of_innovation
Ethics for Designers (Toolkit from Delft University researcher) https://www.ethicsfordesigners.com/
The Ultimate Guide to Engineering Ethics (Ohio University) https://onlinemasters.ohio.edu/ultimate-guide-to-engineering-ethics/
Code of Ethics, National Society of Professional Engineers
Markkula Center for Applied Ethics, Technology Ethics Teaching Modules (introductions to software engineering ethics, data ethics, cybersecurity ethics, privacy)
Markkula Center for Applied Ethics, Resources for Ethical Decision-Making