
Creating Trustworthy AI for the Environment: Transparency, Bias, and Beneficial Use

open laptop computer on a lawn of green grass

Angelus McNally

The public view of artificial intelligence (AI) can be quite fantastical -- many imagine a computer with all the powers of a human brain, able to rationalize decisions, interact with awareness, and potentially rebel against its creator. In reality, AI applications are much more pedestrian. Commonplace AI includes Google Translate, email spam filters, and stock market trading algorithms. AI can also classify images in many different ways (as Google Image Search does), break human language into pieces a computer can work with (as Siri or Alexa does), or analyze varied data and draw a conclusion or suggestion from it. In short, AI models take information from users and produce some desired result, depending on the purpose of the model. Any given model learns the patterns in the data it is shown, then extends those patterns to new cases [1]. But can we understand how AI finds these patterns? Are the patterns biased? And can these powers of pattern recognition be used to benefit the environment? In this article, I will examine three ethically significant aspects of AI and how they relate to the environment. First, I will look at the issue of transparency; second, the issue of bias; and third, the issue of beneficial use of AI for the environment.
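To make the "learn a pattern, then continue it" idea concrete, the snippet below is a minimal sketch using a simple linear model on invented toy data; the library choice and all numbers are illustrative assumptions, not drawn from any particular AI product discussed in this article.

```python
# A minimal sketch (not from the article) of "learn a pattern, then continue it,"
# using scikit-learn's LinearRegression on invented toy data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy training data that roughly follows y = 2x + 1.
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([3.1, 4.9, 7.2, 8.8])

model = LinearRegression()
model.fit(X_train, y_train)  # the model "learns" the pattern in the data

# Ask the model to continue the pattern for an input it has never seen.
print(model.predict(np.array([[10.0]])))  # roughly 21, extending the learned trend
```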

Regarding transparency, AI models can be categorized primarily in two ways -- as white box models or black box models [2]. Whether a model counts as a white box or a black box depends on how transparent its functioning is. Questions that help make this distinction include: Can an outside observer clearly see how a model took input data and arrived at a conclusion? Or is it difficult or impossible to tell what assumptions or decisions the model made when forming an output? White box models often closely resemble traditional statistics and are based on specific functions created to model trends in known data [3]. Black box models, on the other hand, are more difficult to examine. They take inputs and create outputs much like white box models do, but the actual “reasoning” the model follows when making decisions is unknown. As a result, it is often difficult to understand why a black box model produced certain results, and if those results are somehow incorrect, it may be difficult to debug and fix the model. White box models can be layered and combined to improve the accuracy of an AI program, but doing so makes the resulting model less transparent and more like a black box [4]. In applications where transparency and trust in a system are prioritized, white box models are preferred. It is easy to assume a strict trade-off between a model’s explainability and its power. In reality, models can be examined in various ways, but methods for visualizing what is actually occurring within a model are still being developed and improved [5]. Generally, the most concrete way to affect an AI model’s decisions is to manipulate its training data set.
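As one illustration of the distinction (a sketch assuming scikit-learn and an invented dataset), a linear regression exposes coefficients that directly explain how each input affects the output, while a small neural network fit to the same data offers no comparably direct explanation:

```python
# An illustrative contrast between a "white box" and a "black box" model,
# using scikit-learn on an invented dataset (both choices are assumptions).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, size=200)

# White box: the fitted coefficients directly explain how each input affects the output.
white = LinearRegression().fit(X, y)
print("coefficients:", white.coef_, "intercept:", white.intercept_)

# Black box: a small neural network fits the same data, but its thousands of
# internal weights offer no comparably direct explanation of its decisions.
black = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)
print("internal weight count:", sum(w.size for w in black.coefs_))
```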

The second issue is bias. When an AI model is trained, it begins to recognize and reproduce relationships between causes and effects. However, just as with traditional statistics, the reasons for these relationships aren’t always apparent. As outlined by Marc Botha in “The limits of artificial intelligence,” studies and corresponding data show a clear correlation between the amount of champagne one drinks and one’s longevity [6]. However, drinking champagne doesn’t cause one to live longer; those who can afford champagne regularly are more likely to also be able to afford robust healthcare. Clearly, a number of ethical concerns arise from systems that look only at initial states and resulting outcomes to create new predictions. These issues may arise from bias within the data, bias within the model, or a combination of both.
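A toy simulation can make the champagne example concrete. In the hypothetical sketch below, a hidden "wealth" variable drives both champagne consumption and lifespan, so the two correlate strongly even though neither causes the other; every number is invented purely for illustration.

```python
# A toy simulation of the champagne example: a hidden "wealth" variable drives
# both champagne consumption and lifespan, so the two correlate even though
# neither causes the other. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
wealth = rng.normal(50, 15, size=5000)                       # hidden confounder
champagne = 0.1 * wealth + rng.normal(0, 1, size=5000)       # wealthier people buy more champagne
lifespan = 60 + 0.3 * wealth + rng.normal(0, 5, size=5000)   # wealthier people also live longer

# A model shown only (champagne, lifespan) pairs sees a strong correlation...
print("raw correlation:", np.corrcoef(champagne, lifespan)[0, 1])

# ...but after removing the part of each variable explained by wealth,
# the remaining correlation is close to zero: champagne itself is not the cause.
resid_c = champagne - np.polyval(np.polyfit(wealth, champagne, 1), wealth)
resid_l = lifespan - np.polyval(np.polyfit(wealth, lifespan, 1), wealth)
print("correlation controlling for wealth:", np.corrcoef(resid_c, resid_l)[0, 1])
```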

There are a number of notable examples of AI models producing incredibly biased or outright incorrect results when certain ethical concerns aren’t factored into the creation of the model or its dataset. One such example is the Google Photos app labeling photos of Black people as “gorillas” [7]. Google’s models are often pushed to the public, where they then learn from mistakes like these to become more accurate. However, the model clearly didn’t have the capacity to understand that this mistake wasn’t as simple and insignificant as most others, but was instead highly offensive. In this scenario, the AI behind Google’s image tagging mistook a human for another species. The ramifications of this decision were limited to Google Photos, but if the image tagging model were used in a situation where it had more power, its errors could seriously perpetuate bias. Furthermore, the model enters the ethical realm when its biases concern human dignity and historically oppressed groups. While this example concerns AI’s relative ignorance of the nuances of human society, bias-fueled errors like this one manifest themselves in different ways across different applications of AI. This is because the creators of a model, and the datasets they use for training, are inherently biased in a variety of ways. These biases significantly affect the functionality of AI models intended to make fact-based conclusions. Examples like these are a reminder that AI is created by humans, and despite the power of various models, they still inherently contain the flaws of human society and thinking. Societal struggles are reflected in the model’s priorities. Descartes argued that an object can contain only as much perfection as its creators possess. With AI, we attempt to make something infinitely more perfect than we are, with abilities in certain tasks that considerably exceed our own [8]. Labeling is one major way in which AI can reveal its flaws. However, it is not the only way AI can be used, nor the only way AI can be erroneous. Different applications of AI have different issues.

Third, one of the biggest and most vitally important fields to which AI can be applied is the environment. AI applications for the environment can be incredibly beneficial to the world. As both AI use and environmental conservationism rise in popularity, awareness of the link between the two will likely rise as well. This is already evident in the efforts of various data centers to lower energy usage, simultaneously lowering carbon emissions and operating costs [9]. However, AI and environmental issues collide in many other ways; dozens of case studies show how AI can be applied to data related to an environmental concern in order to analyze trends and determine how best to support the ecosystem. As the ways in which AI can be applied to the environment expand, ethical considerations become ever more involved.

Most conceptions of AI are built on comparisons to human knowledge and abilities. Among humans, intelligence is valued and quantified [10]. The widely known Turing Test evaluates a machine on how well it can emulate a human -- the “gold standard” -- which is an inherently anthropocentric standard that ignores the environment. Unlike humans, AI systems tend to be niche -- tailored to a particular use and difficult to adapt predictably to new situations. A model skilled at complex math will not be able to understand a picture book whatsoever. Context is often what allows human intelligence to be so much more comprehensive than that of a machine. Thus, creators of AI strive to make their models “think” more like they do by supplying them with more data and additional rules. While something resembling data analysis can be seen in nature, there are very few obvious ties between the functionality of AI and the natural world. So, how can this seemingly anthropocentric technology strike a balance between prioritizing humanity and prioritizing the environment?

Before answering this question, one should ask how value is assigned to humanity and the environment. Nature can be viewed as morally neutral, as it may be difficult to apply human notions of “good” and “bad” to it. However, this view is heavily debated as part of the question of nature’s intrinsic value. Instead of asking whether value is inherent to nature or ascribed to it by humans, we can consider the priorities an AI system sets in environmental applications. These priorities involve larger deliberations within philosophy and science [11], such as: What is our responsibility to the environment? And how much power should governments have over it? These questions are outside the scope of this paper, but nevertheless warrant discussion, especially considering their relationships to AI.

One question stands out as particularly important, considering previous remarks on the human-centered nature of AI: where should AI systems draw the line when making decisions that prioritize either humanity or the environment? This question is encompassed by the broader study of environmental ethics, namely the consideration of how anthropocentric a perspective is [12]. There is no single answer to what a system should prioritize in all cases, so creators of AI systems must be careful to consider every decision their system makes as an ethical one. That way, potential adverse, unintended effects might be anticipated. With this approach, bias in the system, either against humans or against the environment, may become more noticeable.

One way to consider the ethical concerns regarding AI use for the environment is through the Markkula Center’s Framework for Ethical Decision Making [13]. The Framework consists of the following steps: Recognize an Ethical Issue, Get the Facts, Evaluate Alternative Options, Make a Decision and Test It, and Act and Reflect on the Outcome. 

The first of these steps requires a sense of morality, as an ethical issue cannot be identified without a sense of “right” and “wrong” accompanying particular actions. So, a system that makes moral decisions, whether a simple algorithm or an Artificial Moral Agent (AMA), needs to be able either to predict which actions will carry moral weight or to weigh the morality of every action and its possible repercussions.

Acquiring facts about the situation is where a machine may excel, given that AI models are built on copious amounts of data. This is also where transparency, along with recognition of bias, is so important, so that one can understand the facts and limitations of the machine learning model. Likewise, when supplied with the guidelines of certain ethical approaches, a model may be able to generate various actions that fit each moral view. These ethical approaches include the Utilitarian, Rights, Justice, Common Good, and Virtue approaches. They should all be used together; as stated in the framework itself, “each approach gives us important information with which to determine what is ethical in a particular circumstance” [13]. While each approach has its own tradeoffs, each considers different facets of a situation and may draw on decisions made in similar situations in the past. These past experiences are often what cause individuals to come up with distinct solutions to the same scenarios. Similarly, the training data for an AI model will affect the solutions it devises.

Any method of choice evaluation for the model will need to assign numerical weights to actions. This process of assignment will likely be started by humans, whose own ethical biases may come into play. It is hard to think of a cold, emotionless machine as something more easily biased than a human, but in reality, the machine doesn’t know the nuances of society that would allow it to recognize its own partiality [14]. In a race for market share and customers, the culture of algorithm creation also encourages power and speed over morality.
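What such weight assignment might look like is sketched below in a purely hypothetical form: each candidate action is scored under several ethical approaches, and human-chosen weights combine the scores. The approach names come from the framework, but every weight, action name, and score is an invented illustration, not part of the Markkula framework itself; the human-set weights are exactly the point at which designers' biases can enter.

```python
# A purely hypothetical sketch of the "assign numerical weights" step: each
# candidate action is scored under several ethical approaches, and human-chosen
# weights combine the scores. Every weight, action name, and score here is an
# invented illustration, not part of the Markkula framework itself.
from typing import Dict

APPROACHES = ["utilitarian", "rights", "justice", "common_good", "virtue"]

# Human-assigned importance of each approach -- exactly the point at which the
# designers' own ethical biases can enter the system.
weights: Dict[str, float] = {
    "utilitarian": 0.3, "rights": 0.2, "justice": 0.2,
    "common_good": 0.2, "virtue": 0.1,
}

# Hypothetical per-approach scores (0 to 1) for two candidate actions.
actions: Dict[str, Dict[str, float]] = {
    "expand_data_center": {"utilitarian": 0.8, "rights": 0.6, "justice": 0.5,
                           "common_good": 0.4, "virtue": 0.5},
    "reduce_cooling_load": {"utilitarian": 0.6, "rights": 0.7, "justice": 0.7,
                            "common_good": 0.9, "virtue": 0.8},
}

def combined_score(scores: Dict[str, float]) -> float:
    """Weighted sum of the per-approach scores for one action."""
    return sum(weights[a] * scores[a] for a in APPROACHES)

for name, scores in actions.items():
    print(name, round(combined_score(scores), 3))
```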

The last steps of the framework concern taking action and evaluating it. In some situations, the AI model may not be able to test a scenario before executing it. Researchers need to have faith in a model’s evaluative abilities before giving it the power to make decisions. However, when it comes to decision-making, a major difference between humans and machines is that humans are sometimes guided by illogical emotions that can lead to immoral behaviors. Though less familiar to most of us than other humans are, well-trained AI models can be remarkably dependable and tend not to produce surprises. Machines cannot emotionally betray you, be jealous, or feel guilty. If its ethical decisions are guided by rigorous guidelines, an AI model could be a more consistent decision maker than a human.

In conclusion, returning first to transparency: to use AI effectively for ethical decision making, especially in situations with potential environmental impact, a model should be first and foremost trustworthy. Measures of trustworthiness may come from a variety of sources depending on what a model is used for, but one starting point is ensuring that a model functions mostly as a white box. If a moral decision was made, then each possible option must have been considered, and it should be evident why one choice was selected over the others. Second, on the issue of bias, AI systems should be as free of unjust bias as possible. This includes biases against the natural environment, which may be even harder to recognize and remove from one’s thinking than other, previously identified forms of bias. Third, on the issue of appropriate use, every situation should be assessed for its concrete relevance to ethical decision making, using a method such as the Markkula Center for Applied Ethics’ Framework for Ethical Decision Making.

The incredible power of AI systems is evident both in current applications and in theoretical plans. As models become increasingly powerful and broadly useful, they will become responsible for more aspects of human life. The decisions models make will expand from merely solving issues of efficiency to deciding what is right and wrong. Inevitably, significant issues will crop up. Already heavily impacted, the environment could be greatly helped or hurt by AI that protects or abuses its resources. Whether one creates AI systems or merely uses them as a tool, one should be sure to understand the power one lends them and whether markers of trustworthiness can be observed, just as one would when working with a human.

[1] Botha, M. (2019, February 11). The limits of artificial intelligence. Retrieved from https://towardsdatascience.com/the-limits-of-artificial-intelligence-fdcc78bf263b

[2] Hulstaert, L. (2019, March 14). Machine learning interpretability techniques. Retrieved from https://towardsdatascience.com/machine-learning-interpretability-techniques-662c723454f3

[3] Darwiche, A. (2018, September 1). Human-level intelligence or animal-like abilities? Communications of the ACM. Retrieved from https://dl.acm.org/doi/10.1145/3271625

[4] Lavin, A. (2019, June 18). Interpreting AI Is More Than Black And White. Retrieved from https://www.forbes.com/sites/alexanderlavin/2019/06/17/beyond-black-box-ai/#13064cc849c4

[5] Olah, C., et al. (2018, March 6). The Building Blocks of Interpretability. Distill. Retrieved from https://distill.pub/2018/building-blocks/

[6] Botha, M. (2019, February 11). The limits of artificial intelligence. Retrieved from https://towardsdatascience.com/the-limits-of-artificial-intelligence-fdcc78bf263b

[7] Barr, A. (2015, July 2). Google Mistakenly Tags Black People as 'Gorillas,' Showing Limits of Algorithms. Retrieved from https://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/

[8] Nolan, L. (2020, February 14). Descartes' Ontological Argument. Stanford Encyclopedia of Philosophy, Stanford University. Retrieved from https://plato.stanford.edu/entries/descartes-ontological/

[9] Evans, R., & Gao, J. (2016, July 29). DeepMind AI Reduces Google Data Centre Cooling Bill by 40%. DeepMind. Retrieved from https://deepmind.com/blog/article/deepmind-ai-reduces-google-data-centre-cooling-bill-40

[10] Davion, V. (2002). Anthropocentrism, Artificial Intelligence, and Moral Network Theory: An Ecofeminist Perspective. Environmental Values, 11(2), 163-176. Retrieved from https://www.jstor.org/stable/30301879

[11] Gibson, R. W. (1923). The Morality of Nature. United Kingdom: Putnam.

[12] Dewing, S. (2018). The Necessity of an Ecocentric Environmental Ethic for Artificial Intelligence. Gonzaga University. Retrieved from https://zeelblog.files.wordpress.com/2018/05/the-necessity-of-an-ecocentric-environmental-ethic.pdf

[13] Santa Clara University. (2009). A Framework for Ethical Decision Making. Markkula Center for Applied Ethics. Retrieved from https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/a-framework-for-ethical-decision-making/

[14] Miller, F., Katz, J., & Gans, R. (2018). AI x I = AI2: The OD imperative to add inclusion to the algorithms of artificial intelligence. Retrieved from https://www.researchgate.net/publication/323830092_AI_x_I_AI2_The_OD_imperative_to_add_inclusion_to_the_algorithms_of_artificial_intelligence


Angelus McNally was a 2019-20 Environmental Ethics Fellow at the Markkula Center for Applied Ethics.

May 26, 2020