
OpenAI’s Board Might Have Been Dysfunctional–but They Made the Right Choice

Open AI and Microsoft logos (Rokas - stock.adobe.com)

Ann Skeet


Ann Skeet (@leaderethics) is the senior director of leadership ethics at the Markkula Center for Applied Ethics, and co-author of the Center’s Institute for Technology, Ethics and Culture (ITEC) handbook, Ethics in the Age of Disruptive Technologies: An Operational Roadmap. Views are her own.

This article originally appeared on Fortune.com as “OpenAI’s Board Might Have Been Dysfunctional–but They Made the Right Choice. Their Defeat Shows That in the Battle Between AI Profits and Ethics, It’s No Contest.”


The drama around OpenAI, its board, and Sam Altman has been a fascinating story that raises a number of ethical leadership issues. What are the responsibilities that OpenAI’s board, Sam Altman, and Microsoft held during these quickly moving events? Whose interests should have held priority during this saga and why? 

Let’s start with the board. We still don’t know in what way Altman was not straightforward with his board. We do know nonprofit boards–and OpenAI is, by design, governed by a nonprofit board–have a special duty to ensure the organization is meeting its mission. If they feel the CEO is not fulfilling that mission, they have cause to act.

According to OpenAI’s website, its mission is “to ensure that artificial general intelligence benefits all of humanity.” That’s a tall order–and words are important. The distinction between artificial general intelligence and artificial intelligence may be part of the story: if the company was close to meeting its own definition of artificial general intelligence, the board may have felt it was about to do so in a way that did not benefit humanity. In an interview with the podcast Hard Fork days before he was fired, Altman, when asked to define artificial general intelligence, called it a “ridiculous and meaningless term” and redefined it as “really smart AI.” Perhaps his board felt the term and its definition were more important.

One issue may be that OpenAI’s mission statement reads more like a vision statement: more aspirational and forward-looking than a typical corporate mission statement, which usually captures the organization’s purpose. The real issue here, however, is not whether it is a vision or mission statement. The ethical issue is that the board is obligated to take actions that ensure it is fulfilled. Moving slowly and not accelerating AI progress may not be a compelling pitch to investors–but perhaps there are investors who want to invest in precisely that. If a cautious approach is what OpenAI’s mission implies, then it’s a worthy goal to pursue, even if it goes against the traditional approach of a more typically structured startup.

The board also has a duty to actively participate in oversight of the organization’s activities and manage its assets prudently. Nonprofit boards hold their institutions in trust for the community they serve (in this case, all of humanity). OpenAI’s website also declares it to be a research and deployment company. Neither of those things is possible if most of the staff quits the organization or if funding for the organization is not adequate.

We also know more now about the board’s dysfunction, including the fact that tension had existed for much of the past year and that a disagreement broke out over a paper a board member wrote that seemed critical of the company’s approach to AI safety and complimentary of a competitor. While the board member defended her paper as an act of academic freedom, writing papers about the company while sitting on its board can be considered a conflict of interest because it violates the duty of loyalty. If she felt strongly about writing the paper, that was the moment to resign from the board.

As the sitting CEO of OpenAI, the interests Altman needed to keep front and center were those of OpenAI. Given what’s been reported about the additional business interests he was pursuing in the form of starting two other companies, there is some evidence he did not make OpenAI his absolute priority. Whether this is at the heart of the communication issues he had with the board remains to be seen, but it’s enough to know he was on the road working to get these organizations started.

Even by his own admission, Altman did not stay close enough to his own board to prevent the organizational meltdown that has now occurred on his watch. This is an unfortunate byproduct, perhaps, of choices made by other CEOs Altman knows and may be emulating. Elon Musk, an early investor and board member at OpenAI, believes he can shepherd the interests of Tesla, SpaceX and its Starlink network, The Boring Company, and X all at the same time. Yet each company is deserving of the singular focus of a CEO who clearly sets as priority the interests of that particular company. 

Or perhaps Altman, like many extremely successful startup CEOs, is a “start-something-new” guy rather than a “maintain-it-once-it’s-built” executive. Perhaps starting new things is what he is best called to do. There is a way to do that without the conflicts of interest that arise automatically when one manages more than one company at a time or runs a for-profit business as part of a nonprofit. This would also not be the first time Altman left an organization because he was distracted by other opportunities. Ironically, he was asked to leave Y Combinator a few years ago because he was busy with other business endeavors, including OpenAI.

Altman seemed to understand his responsibility to run a viable, enduring organization and keep its employees happy. He was on his way to pulling off a tender offer–a secondary round of investment in OpenAI that would give the company much-needed cash and provide employees with the opportunity to cash out their shares. He also seemed very comfortable engaging in industry-wide issues like regulation and standards. Finding a balance between those activities is part of the work of corporate leaders, and perhaps the board felt that Altman failed to find such a balance in the months leading up to his firing.

Microsoft seems to be the most clear-eyed about the interests it must protect: Microsoft’s! By hiring Sam Altman and Greg Brockman (a co-founder and president of OpenAI who resigned in solidarity with Altman), offering to hire more OpenAI staff, and still planning to collaborate with OpenAI, Satya Nadella hedged his bets. He seems to understand that by harnessing both the technological promise of AI, as articulated by OpenAI, and the talent to fulfill that promise, he is protecting Microsoft’s interests. That perspective was reinforced by the financial markets’ positive response to his decision to offer Altman a job, and reinforced further by his own willingness to support Altman’s return to OpenAI. Nadella acted with the interests of his company and its future at the forefront of his decision-making, and he appears to have covered all the bases amid a rapidly unfolding set of circumstances.

OpenAI employees may not like the board’s dramatic retort that allowing the company to be destroyed would be consistent with the mission–but those board members saw it that way.

The board of OpenAI was created, by design, to remove profit interests from the equation. At the end of the day, OpenAI’s employees may be more profit-oriented.


Nov 27, 2023
