Sam Altman in Washington earlier this year. He is now back as chief of OpenAI, but controlling him may be harder than ever © EPA-EFE

This article is an on-site version of The Lex Newsletter. Sign up here to get the complete newsletter sent straight to your inbox every Wednesday and Friday

Dear reader, 

It is incredible that we still do not know why the board of OpenAI sacked its co-founder and chief executive, Sam Altman. His rapid return has undermined the legitimacy of that decision. But the question still deserves an answer — if only to reassure the public about the safety of artificial intelligence research.

The way OpenAI’s board handled Altman’s departure was both spectacular and disastrous. When the news broke, it would say only that the chief had not been consistently candid in his communications. This was a thin explanation for forcing out a leader who had been integral to making OpenAI a household name. The US start-up has secured $13bn in funding from Microsoft and is on the brink of an $86bn valuation.

The information vacuum triggered a rush of conspiracy theories. Without a detailed, official explanation, rumours were rife on social media, some of them doubtless spread by traders in related tech stocks.

Some of the gossip died down when an internal memo clarified that the exit had not been connected to “malfeasance or anything related to our financial, business, safety, or security/privacy practices”. But then, what merited Altman’s firing? And why did co-founder Ilya Sutskever, part of the group that led the coup, declare remorse so quickly?

[Column chart of desktop visits to Openai.com, showing OpenAI traffic rising once again]

So far, media outlets including the Financial Times have reported that disagreements between Altman and the board were linked to two issues. First, Altman’s conversations with the likes of SoftBank’s Masayoshi Son and investors in the Middle East about side projects, including a possible chip company. Second, the speed at which OpenAI is commercialising AI. Reports of a research breakthrough — the mysterious Q* model — may also have caused concern.

As a US non-profit organisation, OpenAI is under no obligation to tell us what happened. It must file tax returns but little else. Even when a chief executive leaves a public company, there is a limit to the information required. The Securities and Exchange Commission demands that details such as dates be disclosed on a Form 8-K, but the reasons for an exit are not. Only a fraction of departures are recorded as terminations “for cause”.

Frustratingly, we may never know what exactly prompted OpenAI’s board to make its decision. But we can take a guess at what happens next. 

The previous board in theory answered to everyone, not just investors. It has now been dismantled. The new board is unlikely to echo the high-minded views of its predecessor. In the tug of war between commercialisation and research, commercialisation has won. There will be more product ideas to come. The pursuit of humanlike intelligence, known as artificial general intelligence, may slow. Plans to launch a tender offer to buy employee shares — hopefully at an $86bn valuation — will be a priority.

There are reasons to regret this. The first board’s role was to ensure that OpenAI pursued AI research that was “safe and benefits all of humanity”. The idea of a board that has an eye on the future and not on the immediate desires of investors is no bad thing, particularly if it has real sway.

Meta created an independent oversight board in 2020 to review decisions on content moderation. It might stand as a useful precedent if the board had real power. Founder and chief executive Mark Zuckerberg says its decisions are binding, but there is no way to enforce that. Thanks to special shares that confer extra voting rights, no one has more say over Meta than Zuckerberg.

Plainly, Altman failed to maintain good relations with OpenAI’s board. Before his exit, three directors had already stepped down this year — LinkedIn co-founder Reid Hoffman, Neuralink executive Shivon Zilis and politician Will Hurd. The remaining board was small and lacked corporate experience.

Its enthusiasm for the dubious concept of “effective altruism” — which blends philanthropy with utilitarianism — added to the appearance of weakness. EA’s most famous proponent, Sam Bankman-Fried, was recently found guilty of fraud following the collapse of his company FTX.

Now OpenAI has a new “initial” board that includes former Salesforce co-chief executive Bret Taylor, former US Treasury secretary Larry Summers and previous board member Adam D’Angelo.

Altman is back as chief. Controlling him may be harder than ever, particularly given the strength of employee loyalty that led to his return.

Microsoft’s role in bringing Altman back, meanwhile, gives it more say over the direction of the company.

Altman has agreed to an internal investigation into whatever it was that led the first board to oust him. Its findings could help the new board keep a tighter rein on him.

There is a good case for publishing the conclusions, if only in outline. OpenAI is a private company, but the revolutionary nature of its products makes it a public concern.

Other stuff I’ve been reading

Generative AI can’t beat human coders yet, but this New Yorker essay suggests we may be living in the waning days of coding.

Enjoy the rest of your week,

Elaine Moore
Deputy head of Lex 

If you would like to receive regular Lex updates, do add us to your FT Digest, and you will get an instant email alert every time we publish. You can also see every Lex column via the webpage


Copyright The Financial Times Limited 2024. All rights reserved.