Collaborative AI Governance Arrives at the White House

The White House in Washington DC. Image by Pexels from Pixabay.


Ann Skeet


Ann Skeet is senior director of leadership ethics at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.


The White House announced a deal with seven technology companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—on voluntary guardrails for artificial intelligence.[i] Though criticism of the agreement was swift and far-reaching, it offers promise for future developments and provides an example of a high-impact ethical leadership practice.

First, let’s look at the specifics. The companies committed[ii] to:

  1. Internal and external security testing of their AI systems before their release
  2. Sharing information across the industry with governments, civil society, and academia on managing AI risks
  3. Investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights
  4. Facilitating third-party discovery and reporting of vulnerabilities in their AI systems and implementing a robust reporting mechanism
  5. Developing robust technical mechanisms to ensure users know when content is AI-generated, such as a watermarking system
  6. Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use
  7. Prioritizing research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy
  8. Developing and deploying advanced AI systems to help address society’s greatest challenges

Early reactions seem to welcome this first attempt at reining in previously unchecked AI development, but express doubt about the effectiveness of the list, in large part because it arrives without any means of enforcement. Others are critical of having early discussions about long-called-for AI guardrails led by for-profit companies that have competitive advantages due to their size, as well as conflicts of interest.[iii] [iv] Some have called for including other voices who “don’t have a profit motive.”[v] Consulting stakeholders is a critical part of ethical decision making, so it is prudent to include other voices, particularly those from marginalized communities.

There are also concerns about data-related omissions in the agreement. Some felt it was a significant oversight that the commitments do not require companies to disclose the data used to train their AI systems.[vi] The agreement is also silent on the practices used to prepare training data. Because developers of AI systems do not want their systems to produce violent and inappropriate content, human workers must strip such content from the data sets used, typically at considerable psychological risk to the people who view the material as they do the work.

Nor does the agreement address practices around data harvesting. Concerns have been raised about the rights of those whose copyrighted data is being used without permission; Sarah Silverman and two other authors recently filed a high-profile lawsuit against Meta and OpenAI over this issue. Concerns have also been raised about privacy, as evidenced by another lawsuit against OpenAI.[vii]

The announced safeguards do highlight three principles, according to both the White House and some of the companies’ individual statements: safety, security, and trust.[viii] [ix] The commitments are grouped by principle, with the first two addressing safety, the next two security, and the final four earning the public’s trust.[x]

In citing such principles, the agreement echoes efforts of companies around the globe to capture their ethical commitments to responsible AI through principlism, a practical means of grappling with moral questions by stating principles to be applied to real-world ethical dilemmas. The agreement shows that such principles can be translated into more practical actions that companies can take to ensure they are developing technology responsibly and ethically. Getting from principles to practical applications is something we advocate for and provide guidance on in the Markkula Center’s book, “Ethics in the Age of Disruptive Technologies: An Operational Roadmap.” (Disclosure: I am one of the book’s co-authors.)

Demonstrating how to make such transitions is helpful not only to people working inside companies, but also to legislators and regulators faced with creating laws and regulations to guide the development of artificial intelligence technologies. Importantly, the agreement covers all artificial intelligence, not just systems like ChatGPT, Claude, and Bard that are built on large language models. It can be viewed as a starting point for legal and regulatory change.

The final commitment calls for the companies to address society’s greatest needs. It would be prudent for more public-private partnerships to form, dedicating more AI resources to meeting some of humanity’s greatest challenges.

Additionally, some of the specific agreements point to learning from past mistakes. We know from the Boeing Max tragedies what can happen when companies oversee the safety of their own automated systems, so third-party testing is a welcome commitment. It’s also wise to highlight security concerns around things like model weights, the learned parameters that encode an AI model’s capabilities. Meta’s weights for LLaMA were leaked just days after the model’s release, so this is an area of legitimate concern. Committing to tighter security is a win for the administration and the companies, and the best negotiations are win-win.

Encouraging companies to share information with each other and across sectors about managing AI risks is not without its own risk of information falling into the wrong hands.[xi] Yet it renews a spirit of collaboration that was a hallmark of Silicon Valley’s early years. “’We have this sort of strange term in Silicon Valley: co-opetition,’ said Bruce Sewell, Apple’s general counsel from 2009 to 2017. ‘You have brutal competition, but at the same time, you have necessary cooperation.’”[xii]

It is healthy to see the re-emergence of this collaborative spirit as a critical, risky technology comes online, and it’s good to see companies building on one another’s practices and willing to share information critical to safe and secure technology development. We know from our work with companies that many people working with AI have wanted to be more transparent about their processes and decisions, but have been hemmed in by corporate guidelines and liability concerns. Now that a public commitment has been made to share more information, this transparency should be easier to come by.

One of the highest-impact acts of ethical leadership is stepping beyond one’s primary role as leader of an organization to contribute to more systemic solutions, as the leaders of these seven companies have done with this agreement. Leaders with ethical leadership competencies contribute to design principles and standards in their industry and other ecosystems.[xiii] Though just a starting point, this agreement does just that, and those involved have demonstrated the kind of leadership needed in the realm of AI development.

Though imperfect and incomplete, this agreement should nevertheless be celebrated for providing leadership that elected officials, industry leaders, and private citizens have been asking for. It does not replace the need for regulation and legislation; hopefully it helps with the development of each.

A long list of questions has accompanied AI’s arrival in the public consciousness. Just because leaders don’t have all the answers doesn’t mean they shouldn’t get to work, and do so in an integrated, systemic, and systematic way. If companies can collaborate with one another on AI and partner with the government, imagine the positive impact they can have on human health, society, and the earth.



[i] Siddiqui, Sabrina and Seetharaman, Deepa, “White House Says Amazon, Google, Meta, Microsoft Agree to AI Safeguards,” The Wall Street Journal, July 21, 2023.

[ii] The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” July 21, 2023.

[iii] Roose, Kevin, “How Do the White House’s A.I. Commitments Stack Up?” The New York Times, July 22, 2023.

[iv] “The White House and big tech companies release commitments on managing AI,” heard on NPR’s Morning Edition, July 21, 2023.

[v] “The White House and big tech companies release commitments on managing AI,” heard on NPR’s Morning Edition, July 21, 2023.

[vi] Clark, Adam, “White House AI Deal: What Big Tech Pledged—and the Biggest Omissions,” Barron’s, July 21, 2023.

[vii] Goldman, Sharon, “What Sarah Silverman’s lawsuit against OpenAI and Meta really means,” VentureBeat, July 10, 2023.

[viii] Chatterjee, Mohar, “White House notches AI agreement with top tech firms,” Politico, July 21, 2023.

[ix] Siddiqui, Sabrina and Seetharaman, Deepa, “White House Says Amazon, Google, Meta, Microsoft Agree to AI Safeguards,” The Wall Street Journal, July 21, 2023.

[x] The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” July 21, 2023.

[xi] Roose, Kevin, “How Do the White House’s A.I. Commitments Stack Up?” The New York Times, July 22, 2023.

[xii] Wakabayashi, Daisuke and Nicas, Jack, “Apple, Google and a Deal That Controls the Internet,” The New York Times, October 25, 2020.

[xiii] Skeet, Ann, “The Practice of Ethical Leadership,” Markkula Center for Applied Ethics, April 12, 2017.  

Jul 27, 2023