Should we demand more control over the integration of AI features into products that we use?
Image: Catherine Breslin & Tania Duarte / Better Images of AI (https://betterimagesofai.org) / CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/), cropped
Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.
A few years ago, when companies introduced new features into existing services and devices without letting people choose whether they wanted them, the language that accompanied the forced integration often included the phrase “surprise and delight.”
If you now use Google to search for “surprise and delight,” an AI overview tells you that the phrase “refers to the customer service or marketing strategy of exceeding customer expectations by providing unexpected, positive experiences.” That, of course, assumes that providers will have a pretty good feel for what customers perceive as positive.
The phrase (if not the practice) seems to be less prevalent these days; it turns out that many people are looking for products that will meet their needs without changing in surprising ways. (It also turns out that the effort to understand customers well enough to determine what might delight them left some of them unpleasantly surprised to realize how much online privacy and control they had lost.)
But we have now moved from the explicit “surprise and delight” to an uncommunicated startle-and-dismay phase (as in the case of an event-specific app randomly booking AI-generated meetings for attendees), in which companies insert generative-AI features into familiar products (see Microsoft announcing that AI will soon be offered to fill in cells in Excel spreadsheets), often without clear explanations of what those features do, and especially of their limitations (see the many articles about Meta’s chatbots), and often without easy ways of turning them off.
For example, the introduction of AI overviews in Google search itself came as a surprise to many users (a lot has been written about the unexpected integration of AI in that context).
If you scroll down to the bottom of the extensive AI overview about “surprise and delight” marketing, you will find a line in small gray font that reads “AI responses may include mistakes,” with a link to “learn more.”
If you click on that link, you will reach a post titled “Find information in faster and easier ways with AI Overviews in Google Search”; partway down that page, you are invited to click on another link to “Learn about generative AI and its limitations.” What may be more surprising is a different line: “AI Overviews are a core Google Search feature... Features cannot be turned off. However, you can select the Web filter after you perform a search. This filter displays only text-based links without features like AI Overviews.” (Note that you might have to click on “More” in the list of filters at the top of the page in order to find the “Web” one.)
Mistakes aside, the widespread (sometimes unintentional) adoption of generative AI is having a massive and growing environmental impact, undermining efforts to combat climate change. This does not come as a surprise, at least to those who are training and deploying generative AI at scale. In 2020, for example, an article titled “Deep Learning’s Carbon Emissions Problem” noted that “AI has a meaningful carbon footprint today, and if industry trends continue it will soon become much worse. Unless we are willing to reassess and reform today’s AI research agenda, the field of artificial intelligence could become an antagonist in the fight against climate change in the years ahead.”
And so the “surprise and delight” turns to “startle and dismay” when people learn more about the energy and water demands of data centers optimized for AI, about the public health impact of such data centers, and about statements like a recent one from OpenAI CEO Sam Altman, who told an interviewer, “I do guess that a lot of the world gets covered in data centers over time… But I don’t know, because maybe we put them in space.”
Googling that first sentence and Altman’s name brings up an AI overview. Within it, a “Context and implications” section notes, among other things, that “AI data centers have considerable energy needs” and that some “sources predict they could account for 21% of global energy consumption by 2030.”
Delighted?