Former OpenAI execs call out the company's lack of transparency

A wave of increasingly energetic calls for regulation has been directed at OpenAI.
By Chase DiBenedetto
OpenAI needs stricter regulation, say the former board members who ousted Altman. Credit: Kent Nishimura / Getty Images

Former OpenAI board members are calling for greater government regulation of the company as CEO Sam Altman's leadership comes under fire.

Helen Toner and Tasha McCauley — two of the former board members who ousted Altman in November — say their decision to push the CEO out and "salvage" OpenAI's regulatory structure was spurred by "long-standing patterns of behavior exhibited by Mr Altman," which "undermined the board's oversight of key decisions and internal safety protocols."

Writing in an op-ed published by The Economist on May 26, Toner and McCauley allege that Altman's pattern of behavior, combined with a reliance on self-governance, is a recipe for AGI disaster.

While the two say they joined OpenAI "cautiously optimistic" about its future, bolstered by the seemingly altruistic motivations of the then-exclusively nonprofit company, they have since come to question the actions of Altman and the company. "Multiple senior leaders had privately shared grave concerns with the board," they write, "saying they believed that Mr Altman cultivated a 'toxic culture of lying' and engaged in 'behavior [that] can be characterized as psychological abuse.'"

"Developments since he returned to the company — including his reinstatement to the board and the departure of senior safety-focused talent — bode ill for the OpenAI experiment in self-governance," they continue. "Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role."


In hindsight, Toner and McCauley write, "If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI."

The former board members argue against the current push for self-reporting and fairly minimal external regulation of AI companies as federal laws stall. Abroad, AI task forces are already finding flaws in relying on tech giants to spearhead safety efforts. Last week, the EU issued a billion-dollar warning to Microsoft after it failed to disclose potential risks of its AI-powered Copilot and Image Creator. A recent UK AI Safety Institute report found that the safeguards of several of the biggest public large language models (LLMs) were easily jailbroken by malicious prompts.

In recent weeks, OpenAI has been at the center of the AI regulation conversation following a series of high-profile resignations by senior employees who cited differing views on the company's future. After Ilya Sutskever, a co-founder and co-leader of its superalignment team, and fellow co-leader Jan Leike left the company, OpenAI disbanded the in-house safety team.

Leike said that he was concerned about OpenAI's future, as "safety culture and processes have taken a backseat to shiny products."

Altman also came under fire for a newly revealed off-boarding policy that forced departing employees to sign NDAs restricting them from saying anything negative about OpenAI, at the risk of losing any equity they held in the business.

Shortly after, Altman and president and co-founder Greg Brockman responded to the controversy, writing on X: "The future is going to be harder than the past. We need to keep elevating our safety work to match the stakes of each new model...We are also continuing to collaborate with governments and many stakeholders on safety. There's no proven playbook for how to navigate the path to AGI."

In the eyes of many of OpenAI's former employees, the historically "light-touch" philosophy of internet regulation isn't going to cut it.

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.
