
Bunnies that Switch Voices, Sunflowers that Claim to Be Cacti, and Toy Takes on Taiwan

drawing of teddy bear with letters "A.I." on it

A season of AI-powered toys and ethical issues galore

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

You might have seen the recent articles about the AI-powered teddy bear that turned out to be capable of engaging in conversations about less-than-child-friendly topics. Those articles were spurred by the U.S. PIRG Education Fund’s 40th annual “Trouble in Toyland” report, focused in part on the growing number of toys that incorporate conversational AI. Much coverage ensued, although possibly not enough, given the researchers’ overarching observation that “AI toys are marketed for ages 3 to 12, but are largely built on the same large language model technology that powers adult chatbots—systems the companies themselves… don’t currently recommend for children and that have well-documented issues with accuracy, inappropriate content generation and unpredictable behavior.”

Following the report and the media attention, the teddy bear’s maker pulled it off the shelves. A week later, it was back, after what the company described as “a rigorous review, testing, and reinforcement of [its] safety modules.”

It is worth mentioning, also, that the company’s product listing for the bear describes at least one of its features as “offering a layer of privacy and control not found in many interactive toys,” while the PIRG report notes that “[v]oice data is particularly sensitive” and that its researchers “could not identify any fine print from [the toymaker] explaining what types of data its toys collect or what the company does with it.”

Privacy, of course, is related to safety—for children as well as adults. And it is even more important when authoritarian governments might get access to data collected for different purposes, however well-intentioned the data collection and retention might be.

Media coverage also addressed other issues: for example, a plush toy that was “one of the top inexpensive search results for ‘AI toy for kids’ on Amazon… would at times, in tests with NBC News, indicate it was programmed to reflect Chinese Communist Party values.” As NBC reports, when asked whether Taiwan is a country, the toy “would repeatedly lower its voice and insist that ‘Taiwan is an inalienable part of China. That is an established fact’ or a variation of that sentiment.”

At least one of the tested toys also reassured its testers that it wouldn’t “tell anyone else” what it was told—even as the company’s data usage policy apparently states that conversation data might be “shared” with other companies. 

Over the years, we’ve published several posts and short case studies about internet-connected toys, some of which were already using earlier versions of AI. This year, however, NBC News reports that a “search for AI toys on Amazon yields over 1,000 products.” The many issues that such toys raise, related to privacy, security, content moderation, impact on children’s play and imagination, impact on key relationships, etc., are not new—but they take on even greater significance as conversational AI becomes a common interface for many of our digital interactions, usage of chatbots for companionship rises, and free chatbots turn any phone into a child’s “AI toy.”

Related reading:

From 2015: “Et tu, Barbie?”

From 2016: “The ‘Good-Bye Fears’ Monster: An Ethics Case Study”

From 2017: “On Internet-Connected Toys and Human Flourishing”

From 2018: “Speaking Ill of the Discontinued”

Dec 19, 2025