**Inside the Musk v. Altman Lawsuit: Five Revelations from Ilya Sutskever’s Testimony**

On October 1, 2025, Ilya Sutskever, a co-founder of OpenAI and one of the key architects behind ChatGPT, sat for nearly ten hours of videotaped testimony in the high-profile Musk v. Altman lawsuit. Known both for his technical brilliance and for his role in the board's controversial November 2023 vote to fire CEO Sam Altman, Sutskever was finally under oath and compelled to answer hard questions.

This week, a 365-page transcript of that testimony was released. It paints a vivid portrait of brilliant scientists grappling with catastrophic governance failures, of unverified allegations treated as fact, and of ideological divides so profound that some board members preferred destroying OpenAI to letting it continue under Altman's leadership.

The lawsuit itself centers on Elon Musk's claim that OpenAI and its CEO, Sam Altman, betrayed the company's original nonprofit mission by transforming its research into a for-profit venture closely tied to Microsoft. The dispute raises critical questions about who controls advanced AI models and whether they can be developed safely in the public interest.

For those following the unfolding drama at OpenAI, Sutskever's testimony is both eye-opening and damning. It is a case study in how technical genius met organizational incompetence, and in how that collision nearly doomed one of the world's most impactful AI companies.

Here are the five most significant revelations from the testimony:

### 1. The 52-Page Dossier the Public Hasn’t Seen

Sutskever authored an extensive 52-page memo arguing for Sam Altman’s removal. The document included detailed screenshots and a forceful critique, famously stating:

> “Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another.”

He sent the memo to the independent board members via a disappearing-email service, concerned that it might leak publicly.

In his testimony, Sutskever explained:

> “The independent board members asked me to prepare it. And I did. And I was pretty careful.”

Parts of the memo's evidence consisted of screenshots captured by OpenAI's then-CTO, Mira Murati.

### 2. A Year-Long Game of Boardroom Chess

When asked how long he had been considering firing Altman, Sutskever revealed it was:

> “At least a year.”

He explained that the key factor was waiting for a majority of the board to no longer be “obviously friendly” toward Altman.

Sutskever understood the power dynamics well: a CEO who controls board composition is practically untouchable. So he waited patiently for board turnover to create an opening to move against Altman.

Despite the public façade of closeness between the two men, Sutskever was playing a long game of strategic board politics behind the scenes.

### 3. The Weekend OpenAI Almost Disappeared

The day after Altman's firing, on Saturday, November 18, 2023, there were already active discussions about merging OpenAI with Anthropic, a rival AI lab.

According to Sutskever, former OpenAI board member Helen Toner was “the most supportive” of this potential merger. He testified:

> “I don’t know whether it was Helen who reached out to Anthropic or whether Anthropic reached out to Helen. But they reached out with a proposal to be merged with OpenAI and take over its leadership.”

Sutskever made clear he was “very unhappy” about this and “really did not want OpenAI to merge with Anthropic.”

Had the merger gone through, OpenAI would have ceased to exist as an independent entity.

### 4. “Destroying OpenAI Could Be Consistent with the Mission”

A profound ideological divide lay at the heart of this crisis.

During a meeting where executives warned the board that OpenAI would collapse without Altman, Helen Toner responded that destroying OpenAI could actually be “consistent” with its safety mission.

Sutskever explained:

> “The executives told the board that, if Sam does not return, then OpenAI will be destroyed, and that’s inconsistent with OpenAI’s mission. And Helen Toner said something to the effect that it is consistent, but I think she said it even more directly than that.”

This perspective reflects a strand of AI safety thinking that views rapid AI development as existentially dangerous—potentially more dangerous than not developing AI at all.

Such a belief helps explain why the board stood firm in its decision even as more than 700 employees threatened to resign in protest.

### 5. Miscalculations: Overreliance on One Source, an Inexperienced Board, and Unexpected Employee Loyalty

Almost all of the complaints in Sutskever's 52-page memo originated from a single source: then-CTO Mira Murati.

He admitted that he never verified the claims with the other executives mentioned in the memo, such as Brad Lightcap or Greg Brockman.

Sutskever testified:

> “I fully believed the information that Mira was giving me. In hindsight, I realize that I didn’t know it. But back then, I thought I knew it. But I knew it through secondhand knowledge.”

Regarding the board’s decision-making process, Sutskever was candid:

> “One thing I can say is that the process was rushed. I think it was rushed because the board was inexperienced.”

He also misjudged how employees would react to Altman's firing. When more than 700 of OpenAI's roughly 770 employees signed a letter demanding Altman's return and threatening to leave for Microsoft, he was genuinely surprised.

> “I had not expected them to cheer, but I had not expected them to feel strongly either way,” he said.

This miscalculation revealed how deeply isolated the board had become from the company's organizational reality.

**Conclusion**

The Musk v. Altman lawsuit, and particularly Sutskever’s extensive testimony, reveals how brilliant minds can falter amid governance failures, ideological rifts, and rushed judgment calls. It underscores the complexities and high stakes of managing and governing organizations at the forefront of transformative technologies like AI.

As the battle over OpenAI’s future continues, these revelations shed light on the challenges faced by those entrusted with steering such a powerful entity responsibly—and the costly mistakes that can ensue when internal dynamics fracture and trust erodes.
Source: https://bitcoinethereumnews.com/tech/inside-the-deposition-that-showed-how-openai-nearly-destroyed-itself/
