
On AI and the Need for Consumer (Consumed?) Protection

[Image: phone on bed with a cat in the background]

We need more innovation and disruption—from different constituencies.

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.


Note: On March 21, California State Senator Bill Dodd led a town hall discussion focused on the rapid expansion of AI deployment, its implications for privacy and consumer protection, and guardrails that might be needed in response. I was one of the panelists invited to participate. The following are my opening comments, lightly edited, followed by some related in-depth resources. A video of the full town hall conversation is now available.

I want to start by stressing that the topic of AI is very broad. Different AI models, trained on very different data sets and deployed in very different contexts, for different types of decision-making, raise a variety of ethical and legal issues, and pose very different risks and benefits. Using AI to try to anticipate earthquakes is quite different from using it to try to determine whether a particular human being will commit a crime, or to create an image of Winnie the Pooh in the style of Dalí (the painter, not the AI model).

Having said that, I’m very glad that we are focusing today on privacy and consumer protection. So many AI models have been trained on personal data, amassed by companies or academic researchers for purposes that the initial data generators never imagined, let alone consented to.

Remember when the talk was all about data being the new oil? Back before we had image-generative AI, I once asked a friend to make me a drawing of a fluffy sheep in a field, with an oil well behind it. That's because, when it comes to personal data, I think we're more like sheep being shorn, over and over again, than we are like owners of our own individual oil wells. When we talk about AI and consumer protection, who are the consumers? Is it the companies that purchase AI solutions? Are those of us who are subjected to decisions made by those AI tools less like consumers and more like the consumed?

By the way, every time you hear the word “data,” you need to mentally append the term “cybersecurity.” Cybersecurity is now one of the conditions required for the common good, and the fact that it’s still insufficiently addressed and funded is a grave danger to all of us.

When it comes to guardrails for AI, I would like to point the audience to the Blueprint for an AI Bill of Rights, issued in October of last year by the White House Office of Science and Technology Policy. There is a lot in it, including suggestions for translating key principles into regulation as well as into business practices. But I want to just highlight the five key principles that it lists:

    • You should be protected from unsafe or ineffective [AI] systems.
    • You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
    • You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
    • You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
    • You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

Those sound extremely aspirational today. Compare that list with the current reality, in which consumers lack power and recourse, but in which AI systems already play a role in screening job and loan applications, grading student essays, and suggesting who might, or might not, be released on bail.

In California and a few other states, residents have gained some power under recent privacy laws. So far, though, the push for a federal privacy law has included pressure to preempt those state laws.

There is a lot of work to be done, by legislators, technologists, advocates, academics, and all of us who are impacted by AI tools. In a time when at least one new chatbot seems to be released every day, we need more innovation and disruption: regulatory innovation, and the disruption, by civil society, of the status quo in AI development and deployment.

Some additional related reading, with far more details and insights about AI and the push for consumer/citizen protection in a variety of contexts:

Blueprint for an AI Bill of Rights: Issued by the White House Office of Science and Technology Policy in October 2022: https://www.whitehouse.gov/ostp/ai-bill-of-rights/

How California and Other States are Tackling AI Legislation: Commentary via the Brookings Institution, published on March 22, 2023: https://www.brookings.edu/blog/techtank/2023/03/22/how-california-and-other-states-are-tackling-ai-legislation/

“Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale”: FTC blog post, published on March 20, 2023: https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale

“Understanding Social Media Recommendation Algorithms”: Article by Professor Arvind Narayanan, published in March 2023: https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms

“States’ Automated Systems Are Trapping Citizens in Bureaucratic Nightmares with Their Lives on the Line”: Article published in Time in 2020: https://time.com/5840609/algorithm-unemployment/

Image: "data security privacy" by Book Catalog (cropped) is licensed under CC BY 2.0.

Mar 29, 2023