Markkula Center for Applied Ethics

Media Mentions

A selection of articles, op-eds, TV segments, and other media featuring Ethics Center staff and programs.

The Markkula Center for Applied Ethics does not advocate for any product, company, or organization. Our engagements are intended to provide training, customized materials, and other resources. The Markkula Center does not offer certifications or seals of approval.

Ethics in the Age of Disruptive Technologies: An Operational Roadmap
What to Ask About AI

While you don’t want to get too far into the weeds, you can ask for the sources of data that the system is being trained on, says Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and coauthor of Ethics in the Age of Disruptive Technologies: An Operational Roadmap. “[Directors] can also advise proactively choosing an AI system that has an identifiable training data set.”

Ann Skeet, senior director, leadership ethics, quoted by Corporate Board Member Magazine.

AI and Ethics: Navigating the New Frontier

There are many key steps to ensuring the ethical application of AI in marketing. One of these steps involves developing and implementing ethical AI guidelines to protect consumers. 

"[B]eing unethical is a great way to lose consumer trust and ruin your business. At the more practical level, issues like safety, security, reliability, privacy, trustworthy data use, being unbiased, fair, inclusive, transparent, and accountable—these are the principles that you will find in various corporate AI ethics principles, and they are a good start," said Green.

Green spoke on the importance of upholding specific values in each field that utilizes AI. “These key values should shape the construction of AI systems from the ground up. In healthcare, AI needs to focus on patient health; in finance, protecting the honest flow of money; in marketing, the honest sharing of ideas, including honestly sharing products.”

Brian Green, director, technology ethics, quoted by CMSWire.

Google Wants to Use Machine Learning to Keep AI Data Unbiased

With the increasing capabilities of AI models in collecting and analyzing data, there need to be safeguards against bias. Google plans to patent and use a "clustering model" that groups data and then balances data types, with the aim of creating a model that mitigates bias.

“Every model is going to be limited by its dataset, and every dataset is going to be limited by its sampling,” said Green. 

Creating unbiased datasets can also create overcorrections that lead to inaccuracies, Green noted.

“Ultimately, it’s a really complex problem, and it’s going to require a really complex solution.”

Brian Green, director, technology ethics, quoted by The Daily Upside.

Hollywood's AI Disclosure Dilemma

"People crave authenticity," says Subramaniam Vincent, director of media and journalism ethics at Santa Clara University's Markkula Center for Applied Ethics. He told Axios there's a "creeping fear" that the images and media we see every day are not real.

Subramaniam Vincent, director, media and journalism ethics, quoted by Axios.

A Brief History of Automatons That Were Actually People

Astra Taylor calls human labor hidden under the veneer of a robot or AI tool "fauxtomation."

The phenomenon earned that nickname because it "hides the human work and also falsely inflates the value of the 'automated' solution," says Irina Raicu, director of the Internet Ethics program.

“This is not just a question of marketing appeal,” Raicu says. “It’s also a reflection of the current push to bring things to market before they actually work as intended or advertised. Some companies seem to view the ‘humans inside the machine’ as an interim step while the automation solution improves.”

Irina Raicu, director, internet ethics, quoted by Scientific American.



How Apple Could Help Small Businesses--and the Environment--by Making its Devices Easier to Repair

"If we are going to be creating the kind of culture where everyone is completely dependent on large tech companies to create products like this, people will be fundamentally prevented from being able to exercise their own skill, their own talent, their own abilities to work on the technology that they're using," Green says. "I would argue, ultimately, it's bad for the companies, too, because they end up harming the people that they might want to hire at some point."  

Brian Green, director, technology ethics, quoted by Inc.


Beyond AI Doomerism: Navigating Hype vs. Reality in AI Risk

As AI becomes increasingly widespread, viewpoints featuring both sensationalism and real concern are shaping discussions about the technology and its implications for the future.

"We're all pursuing the same thing, which is that we want AI to be used for good and we want it to benefit people," said Brian Green, director of technology ethics.

Brian Green, director, technology ethics, quoted by TechTarget.


Intel Filing Could Diversify Deepfake Detection Models

This technology includes a system that labels images by race, Green noted. However, race and ethnicity aren’t always easily detectable just by looking at an image. “It’s a simplification of human diversity that could be ethically problematic.”

“If AI in general gets this bad name because of deep fakes or other unethical behavior, then that could perhaps cause a backlash that would go all the way back to the chip industry,” Green said.

Brian Green, director, technology ethics, quoted by The Daily Upside.


This Always-Recording AI Microphone Will Make Your Coworkers Hate You

From the same company that brought you Rewind, which records everything on your computer, comes the Limitless AI microphone, which will record all the audio you hear and process it using AI.

"The privacy concerns raised by any non-obvious recording device might not be limitless, but they're pretty vast. In this case, the fact that there's a feature called 'consent mode' for new voices that would be recorded, but that mode (according to media reports) is off by default, is a troubling signal about respect for privacy. We already live in a world in which people distrust so much of the technology around them; in order to build trust, privacy, at least, needs to be the default in design."


Irina Raicu, director, internet ethics, quoted by Lifewire.



Remember the Fight Over Net Neutrality? Biden’s FCC Chair Wants to Bring it Back

Chase DiFeliciantonio, reporting for the San Francisco Chronicle, addresses net neutrality, the long-debated policy, solidified under President Barack Obama and reversed under President Donald Trump, that required Internet service providers to treat all communications on their networks the same regardless of content. The reversal of net neutrality led broadband companies to a model that provides more robust service to those willing to pay more for it.

"Industry usually prefers to have one set of rules and uniformity and opposes patchwork laws," said Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. But internet providers "opposed federal regulation and ended up with a patchwork of laws."

Irina Raicu, director, Internet Ethics, quoted by the San Francisco Chronicle, and republished by Government Technology.
