The high-stakes legal battle between Elon Musk and the leadership of OpenAI and Microsoft reached a critical juncture on Monday as the trial entered its final phase in a federal courtroom. The proceedings were marked by rare and revealing testimony from three of the most influential figures in the technology industry: Microsoft CEO Satya Nadella, former OpenAI Chief Scientist Ilya Sutskever, and current OpenAI Chairman Bret Taylor. The day’s testimony provided an unprecedented look into the internal power struggles, financial motivations, and governance failures that have defined the world’s leading artificial intelligence laboratory since its inception. Central to the dispute is Musk’s allegation that OpenAI abandoned its original mission as a non-profit dedicated to developing artificial general intelligence (AGI) for the benefit of humanity, instead becoming a "de facto subsidiary" of Microsoft focused on maximizing profit.
The Financial Architecture and Individual Wealth
One of the most striking revelations of the day concerned the immense personal wealth tied to OpenAI’s transition from a pure non-profit to a "capped-profit" entity. Ilya Sutskever, a co-founder and the former chief scientist who was instrumental in the company’s early technical breakthroughs, revealed under oath that he holds an ownership stake in OpenAI’s for-profit arm currently valued at approximately $7 billion. This disclosure places Sutskever among the largest individual shareholders in the organization, which is currently appraised at a staggering $850 billion in its for-profit structure.
This follows earlier testimony from OpenAI President Greg Brockman, who acknowledged for the first time that his own shares are worth an estimated $30 billion. The scale of these holdings underscores the dramatic shift in the organization’s financial profile. When OpenAI was founded in 2015, it was positioned as a non-profit research lab intended to serve as a counterweight to the commercial interests of companies like Google. Sutskever himself famously turned down a $6 million annual compensation package from Google to join OpenAI at its start, a commitment Brockman previously characterized as the founders being "joined at the hip" in a shared mission. However, the subsequent creation of a for-profit subsidiary in 2019 to attract the massive capital required for compute power has clearly resulted in generational wealth for the early founders, a point Musk’s legal team has used to argue that financial incentives have supplanted the original altruistic mission.
The "Amateur City" Ouster: A Chronology of Conflict
Much of the day’s questioning focused on the chaotic events of November 2023, when the OpenAI board of directors, led in part by Sutskever, abruptly fired CEO Sam Altman. The move sent shockwaves through the tech industry and nearly led to the collapse of the company as hundreds of employees threatened to resign.
Satya Nadella, whose company has invested more than $13 billion in OpenAI, did not mince words when describing his reaction to the board’s decision. He characterized the firing and the board’s subsequent failure to provide a clear explanation as "amateur city." Nadella testified that he "never got clarity" regarding the "lack of candor" the board cited as the reason for Altman’s removal. Internal documents presented in court revealed that during the crisis, Nadella and his lieutenants at Microsoft were actively vetting 14 potential candidates to join a new OpenAI board if Altman were to return. Nadella admitted that Microsoft effectively vetoed at least two candidates and suggested others, highlighting the level of influence the software giant exerts over the supposedly independent AI lab.
Sutskever, appearing in court without a suit jacket—a notable departure from the formal attire of other witnesses—offered a more somber perspective. He testified that he supported the initial firing because he believed an "environment where executives don’t have the correct information" is not "conducive to reach any grand goal." However, he expressed regret over the execution of the ouster, criticizing the board for rushing the process and relying on "legal advice that wasn’t very good."
The emotional toll on Sutskever was evident. Since Altman’s reinstatement, Sutskever has been estranged from both Altman and Brockman. He eventually left OpenAI in 2024 to start a competing lab, Safe Superintelligence Inc. "I felt a great deal of ownership of OpenAI," Sutskever told the court. "I felt like I put my life into it, and I simply cared for it, and I didn’t want it to be destroyed."
Microsoft’s Strategy: Control and Destiny
The trial has brought to light the evolving nature of the partnership between OpenAI and Microsoft. Elon Musk’s legal team alleges that Microsoft pressured OpenAI to pivot toward commercialization to justify its massive investments. Testimony from Nadella provided a nuanced view of this transition. Initially, Microsoft supported OpenAI through heavily discounted cloud computing services on its Azure platform. However, Nadella testified that this model became unsustainable "once the bill started going up."
Internal emails from 2022 showed Nadella expressing alarm at the rising costs, exclaiming that Microsoft would "lose 4 bil next year!!!" due to the partnership’s requirements. This financial pressure led to a restructured agreement where Microsoft’s investment would be exchanged for a share of future profits. Nadella wrote to his executives, "If we are going to spend this kind of money and not have control of destiny, it makes no sense."
Musk’s attorneys presented text messages from early 2023 showing Nadella pushing Altman to launch paid subscriptions for ChatGPT, telling him "sooner is best." Just weeks later, Nadella was checking in on the number of signups. This evidence is central to Musk’s claim that OpenAI’s priorities shifted from safety and openness to revenue generation under Microsoft’s influence. Despite the friction, the partnership has proven lucrative; as of March 2025, Microsoft has generated $9.5 billion in sales through its OpenAI-related offerings.
The Scaling Argument: Ants vs. Cats
In a pivotal moment for the defense, Sutskever’s testimony bolstered some of OpenAI’s core arguments against Musk. Musk’s lawsuit hinges on the idea that Altman and Brockman breached a "founding agreement" by pursuing a for-profit model. However, Sutskever testified that Musk never negotiated any specific, binding promises regarding the non-profit status when he provided initial funding.
Sutskever explained the technical necessity of the for-profit pivot using a vivid analogy. He told US District Judge Yvonne Gonzalez Rogers that the difference in capability between earlier AI models and the latest iterations was like "the difference between an ant and a cat." To bridge that gap, OpenAI needed "a lot of dollars" to build computing infrastructure on the scale of the human brain. "If there’s no funding, there is no big computer," Sutskever remarked, suggesting that the leadership ultimately concluded that donations would not suffice and that a for-profit arm was the only viable path forward.
Governance and Safety: The Disbanded Superalignment Team
A significant point of contention in the trial is the current state of AI safety research at OpenAI. Sutskever had previously led the "Superalignment" team, which was tasked with ensuring that future, ultra-intelligent AI systems remain aligned with human values. He testified that this was the most important work at the company "for the long term."
However, the Superalignment team was disbanded in May 2024, shortly after Sutskever’s departure. Musk’s legal team argues that the dissolution of this team is evidence that OpenAI has deprioritized safety in favor of rapid product deployment. Sutskever’s testimony confirmed that his primary motivation during the November 2023 board crisis was a concern for the long-term safety and integrity of the project, though he admitted that the execution of the board’s intervention was flawed.
Official Responses and Broader Implications
The day concluded with testimony from OpenAI Chairman Bret Taylor, who offered a staunch defense of Sam Altman’s leadership. Taylor addressed concerns regarding Altman’s personal investments, specifically a 2024 content and technology deal with Reddit, a company in which Altman holds a significant stake. Taylor testified that Altman recused himself from the approval process but stepped in to "bring down the temperature" when negotiations threatened to turn into a lawsuit. Taylor praised Altman as "forthright" and stated that he has "grown OpenAI in ways that have exceeded my expectations."
The implications of this trial extend far beyond the immediate financial stakes. The verdict could set a legal precedent for how non-profit organizations transition to for-profit entities, especially in the burgeoning field of "frontier" technology. It also raises fundamental questions about the governance of AGI—technology that many believe could eventually surpass human intelligence. If the court finds that OpenAI breached a foundational contract with Musk, it could lead to a restructuring of the organization or a redistribution of its assets. Conversely, a victory for OpenAI would validate the "capped-profit" model as a legitimate way to fund capital-intensive scientific breakthroughs.
As the trial moves into its final days, the industry awaits the testimony of Sam Altman himself, who is scheduled to take the stand on Tuesday. His testimony will likely be the final piece of a complex puzzle involving broken friendships, billions of dollars, and the existential question of who should control the future of artificial intelligence. For now, the evidence presented on Monday suggests a company that outgrew its original skin, driven by the massive computational requirements of modern AI and the immense gravitational pull of Silicon Valley’s capital markets.
