In a dramatic shakeup at one of the world’s most prominent artificial intelligence labs, OpenAI’s board of directors and CEO Sam Altman found themselves at odds last November over his leadership of the company. The power struggle culminated in Altman’s firing by the board, only for him to be swiftly reinstated days later amid an outcry from staff and key investors.
The surprising sequence of events sowed turmoil and raised doubts about oversight at the ambitious startup. While the official reasons for Altman’s termination remain opaque, a former board member has come forward with disturbing allegations of misconduct, painting a picture of an executive who sidelined accountability.
“Sam constantly was claiming to be an independent board member with no financial interest in the company, but he didn’t inform us that he actually owned the OpenAI startup fund,” charged AI researcher Helen Toner in a recent interview. “On multiple occasions, he outright lied about the company’s safety processes.”
Toner depicted a co-founder who ruled OpenAI through fear and intimidation. She accused Altman of waging a campaign to remove her from the board after she published research he disliked. More broadly, Toner claimed there was a “toxic atmosphere” where executives felt unable to properly raise issues with Altman’s leadership.
“They used the phrase ‘psychological abuse,’ telling us they didn’t think he was the right person to lead the company,” Toner said of conversations with other OpenAI leaders. “They had no belief that he either could or would change.”
Among the incidents Toner cited was the surprise release of ChatGPT, OpenAI’s revolutionary conversational AI that instantly went viral. She says the board was completely blindsided, learning of the launch on Twitter rather than being looped in by Altman and his team.
In moving to fire Altman, the board allegedly took great pains to keep the decision under wraps until the last moment. “It was very clear that as soon as Sam had any inkling, he would pull out all the stops to undermine us,” Toner said of the board’s strategy.
And undermine them he did. Within days of his ouster, OpenAI was flooded with vocal support for Altman from employees, partners like Microsoft, and the tech community at large. More than 700 of the company’s roughly 770 staffers reportedly signed a letter threatening to resign unless he was reinstated as CEO and the board stepped down.
Toner believes many were compelled to back Altman by a misleading narrative “that if Sam did not return immediately with no accountability, OpenAI would be destroyed.” She also suggested that fear of reprisal, given the founder’s alleged history of retaliating against critics, played a role.
The dizzying series of events has sparked concerns about whether the ideals of open science and ethical AI can be upheld at a founder-led outfit like OpenAI, awash in capital and outsize ambition. Toner pointed to Altman’s prior controversies and difficulties at companies like Y Combinator as evidence of a longer pattern.
In a statement, OpenAI defended its reinstated leadership, reasserting its core mission and claiming an independent review had uncovered no wrongdoing related to key issues like safety practices. But for observers scrutinizing the rise of artificial general intelligence, last year’s OpenAI meltdown highlighted the fraught power dynamics and murky governance that could have society-spanning implications.