
How to Responsibly and Reasonably Regulate AI


Ann Skeet


Ann Skeet (@leaderethics) is the senior director of leadership ethics at the Markkula Center for Applied Ethics. Views are her own. This article was first published in The Messenger and is republished with permission.


Taken together, the recent artificial intelligence (AI) executive order issued by President Joe Biden and the new U.S. initiatives announced just days later by Vice President Kamala Harris to “advance the safe and responsible use of artificial intelligence” position the United States as a leader in AI governance, when in reality the country is playing catch-up. That said, the executive order and the associated initiatives are so comprehensive that they have the potential to be the most far-reaching AI regulation developed globally.

Nationally, the moves demonstrate how well Biden understands how government works, and they suggest he has engaged an effective set of experts in developing this multi-pronged approach to AI regulation. By engaging cross-sector stakeholders in both the planning and execution of his goals, the president is planning for the worst and perhaps hoping for the best when it comes to outcomes from early AI system investment and development. His integrative, inclusive, and collaborative approach, including actions taken under the Defense Production Act, balances short-term, real-world problems with long-term, real-world concerns, while still using a variety of levers to encourage innovation and support the United States’ technological development.

The order and initiatives distribute responsibility to a number of cabinet-level departments and other existing agencies, like the Federal Trade Commission (FTC), but also create new oversight institutes, boards, and other tools intended to spur research and innovation. This mix indicates that while the president is aware of the need to monitor the development of AI systems more tightly, he also appreciates the need to continuously innovate in AI to protect the country’s economic leadership and national security interests.

By charging existing government departments and agencies with overseeing AI, and by developing new tools and organizations to aid in that effort, the administration is addressing the question of whether AI regulation will be distributed among a number of entities or concentrated in a single, new agency. The answer, apparently, is both.

The president’s executive order draws on the strength of existing departments by distributing responsibility to a number of cabinet-level departments, like Homeland Security, Energy, and Commerce, as well as other agencies. With a nod toward promoting innovation and continued research, the executive order provides funding to advance AI breakthroughs, giving AI researchers and students access to AI resources and data while expanding grants for AI research. A new organization, the United States AI Safety Institute (US AISI), housed within the National Institute of Standards and Technology (NIST), was announced by Harris as part of the U.S. AI initiatives at the UK’s AI Safety Summit.

The creation of the US AISI concentrates more responsibility in NIST, under the umbrella of the Department of Commerce, and gives more weight to its AI Risk Management Framework, announced in January 2023. In discussing the department’s future work, Secretary of Commerce Gina Raimondo noted that government will need the assistance of the private sector and academia to meet the country’s goals for safe and secure AI. This approach combines the strengths of centralized government work, with its responsibilities to the common good, including national security, and the decentralized resources of academia and industry, with their emphasis on research and innovation.

Completing the cross-sector collaboration picture is the inclusion of philanthropic support to “advance AI that is designed and used in the best interests of workers, consumers, communities, and historically marginalized people in the United States and across the globe.” Ten foundations committed more than $200 million in funding toward these ends. This funding network identified five pillars: ensuring AI protects democracy and rights, driving AI innovation in the public interest, empowering workers to thrive amid AI-driven changes, improving the transparency and accountability of AI, and supporting international rules and norms on AI. The supporting foundations are now directly linked to this new technology and engaged in harnessing its impact for positive outcomes.

The inclusion of this philanthropic network can help address two concerns raised by those who have been watching the U.S. regulatory picture unfold. First, there has been criticism that civil society voices are absent from the conversation about the future of AI. Second, some have felt the AI safety agenda has trumped the AI fairness agenda, that is, concerns that biases built into AI systems will further discrimination. These philanthropic efforts are aimed squarely at both.

This administration will also lead by example: the announcements include draft policy guidance on the government’s own use of AI, now open for public comment. And the United States is now leading globally: 31 nations have joined the U.S. in its Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy.

These moves, announced in a matter of days, were obviously many months in development. They will be enacted in the months to come, most with an aggressive, perhaps unachievable, rollout timeline of six to 12 months. They provide leadership to move the country beyond the binary argument between the so-called techno-optimists, interested only in the rapid acceleration of a new technology in spite of its inherent risks, and those who counsel more prudence and a slower pace of AI implementation to consider and mitigate those risks. This regulation suggests that there is a middle ground where innovation can occur but reasonable risks can still be addressed. It also suggests that AI regulation will be a full-participation sport: actors from all sectors and from all levels within organizations are needed to responsibly and reasonably regulate AI.

There has been some tension between those who feel regulators should focus on the existential threats to humanity that some fear AI poses and those who are more focused on matters already present in public consciousness and daily reality. This administration is choosing to act on the concerns about AI that people are expressing and experiencing now. In fact, 58% of U.S. adults polled think that AI tools will increase the spread of false and misleading information in the coming year’s elections. We are already seeing the damage AI can cause to young people through harmful uses such as fake nudes, echoes of the harms caused by social media.

At the AI Safety Summit hosted by the UK, 28 nations, including the United States, signed the Bletchley Declaration, which warned of the potential harms of AI and called for international cooperation to ensure its safe deployment. In this way, the administration is also acknowledging and planning for some of AI’s worst-case scenarios.

Clearly, this administration has decided it is time to meet a comprehensive, game-changing technology with comprehensive, game-changing regulation. After much hand-wringing about the lack of American leadership in AI, these actions should be welcomed for the balance they strike between short- and long-term concerns and between safety and innovation, and for the breadth of stakeholders engaged in their development and execution. Indeed, these moves have been welcomed: there has been relatively little pushback from industry following their announcement and few claims of overreach. If anything, the early criticism of Biden’s order was that it did not go far enough, but within days Harris introduced more initiatives that blunted some of those reactions.

We now have regulation of the people, by the people, and for the people. It is now up to corporations and Congress to take the strong cues provided by these guidelines and requirements. Congress should act to secure Americans' privacy, and companies should act with a more complete understanding of their obligations to develop AI that is safe and fair, now and in the future.


Nov 20, 2023