An AI-free zone?
Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.
Last year, at the Consumer Electronics Show, Mattel proudly presented a product that was yet to be delivered: Aristotle. It would be an AI-powered device that would reside in a child’s room; it would start out as a baby monitor, but one also capable of playing soothing sounds and songs for babies; later, it would play games with and read stories to the toddlers; as the children grew, its functionality would change to answering questions and helping them with their homework in other ways… Consumer reaction was swift—and negative.
By October, Mattel announced that it had cancelled plans for Aristotle. (“Aristoddle,” suggested journalist Will Oremus in response to a tweet complaining about the device name.) “My main concerns about this technology,” explained Jennifer Radesky, a pediatrician, “is the idea that a piece of technology becomes the most responsive household member to a crying child, a child who wants to learn, or a child’s play ideas.”
Many children are already using internet-connected AI-powered digital assistants, though, and even if those assistants are not designed specifically for kids, some of their features are. Earlier this month, Wired magazine reported that, in response to parents worrying that their children are learning to order “bots” around and might start to do that to humans, too,
Amazon and Google both announced this week that their voice assistants can now encourage kids to punctuate their requests with ‘please.’ The version of Alexa that inhabits the new Echo Dot Kids Edition will thank children for ‘asking so nicely.’ Google Assistant's forthcoming Pretty Please feature will remind kids to ‘say the magic word’ before complying with their wishes.
Nothing wrong with politeness, of course (though enforced politeness that turns “please” into another “OK, Google” might not be the most educational thing). But are we slipping back toward Mattel’s Aristotle? Wired quotes John Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: "I think it's reasonable to ask if parenting will become a skill that, like Go or chess, is better performed by a machine.”
Is it? And by that I mean not “is it reasonable to ask,” but “is parenting such a skill”?
I don’t think so. Certainly not parenting at its best. But what about other circumstances? Would an AI “parent” be better than no parent at all, for kids in that position? Would it be different if the technology were there not to allow frazzled, busy parents to ignore their kids’ needs while they do non-parenting-related things, but to fill a void? The mind boggles. Would the benefits outweigh the harms?
Do we just need to designate some areas of human life as AI-free zones?
Last October, David O’Hara, who teaches philosophy of religion, wrote a blog post in response to ethicist Evan Selinger, who had asked whether there are some jobs that it would be unethical to automate. The post is titled “The Ethics of Automation: Poetry and Robot Priests.” In it, O’Hara mentions a company that has created a robot that cites biblical verses and offers blessings in multiple languages. He then riffs on the theme:
… can a meaningful confession be heard by someone who cannot sin…? Can a machine be a member of a church, or does it have something more like the status of a chalice or a chasuble – something the community uses liturgically but that does not have standing in the deliberations and practices of the community? Another important question: can a machine act as a vicar? That is, can a machine stand in as a representative of God and proclaim the forgiveness of God as we believe those who have been ordained may do?
And what about the writing of poetry? O’Hara again:
My concerns here are twofold: one has to do with the danger of persuasion: not much moves us as powerfully as poetry does. My second concern is about the importance of having our arts be the expressions of the heart of our communities. But I could be wrong: maybe robots should be writing poetry – their own poetry, from one machine to another.
O’Hara concludes that “We should use the technologies we have to serve those in need. … But we should not pretend that in so doing we have done all that we must do.”
So is the answer, then, that we need not “AI-free zones” but careful consideration, within each area in which human beings interact with each other, about which aspects of those interactions could be usefully automated without losing too much, and which aspects should not, ethically, be automated?
Shakespeare wrote of lunatics, lovers, and poets. Whether love might some day be conveyed through AI-powered machines is very much open for debate, but it is not directly relevant to the question at hand: what should we not hand over to AI, now? The singing of lullabies does more than soothe a crying baby; it does something to and for the singer, too, and for the relationship that builds between them. What are the relationships that skills like Go or chess cannot begin to approximate, and that we should be very careful not to damage or delete?
Photo by tua ulamac, cropped, used under a Creative Commons license.