Musk claims the safety mantle in OpenAI courtroom battle

Elon Musk took the stand this week as the opening witness in his lawsuit against OpenAI, Sam Altman, Greg Brockman, and Microsoft. His central argument: he is the true believer in AI safety, while the company he co-founded has abandoned those principles in pursuit of profit.

Under questioning from his own attorney, Steven Molo, Musk framed the stakes in existential terms. The only way to prevent artificial general intelligence from becoming dangerous, he argued, is to keep it out of the hands of companies chasing financial returns. He traced his AI safety concerns back years, recounting how Google co-founder Larry Page once called him a "speciesist" for prioritizing human welfare over potentially sentient machines. Musk also claimed he met with President Obama in 2015 specifically to warn him about AI risks.

The portrait Musk painted was of a persistent voice crying out in the wilderness. He said he had raised AI safety concerns with "anyone and everyone," only to be met with apathy. His brother's response stuck with him: "It's a buzzkill."

Yet cracks in this narrative emerged almost immediately. When asked about his own AI venture, xAI, Musk acknowledged it operates as a for-profit company, directly contradicting his claim that profit-driven AI development is inherently unsafe. He sidestepped further questioning on the contradiction by noting that SpaceX recently acquired xAI and the rocket company is currently in a quiet period before a planned public offering.

OpenAI's lead counsel, William Savitt, spent hours of cross-examination suggesting a different motive entirely. Rather than attack Musk's underlying concerns about AGI dangers, Savitt presented evidence that Musk's safety rhetoric intensifies whenever AI technology is controlled by competitors rather than by Musk himself. The implicit argument: Musk is less concerned with safety than with power and profit.

Savitt also challenged Musk's self-description as a "paladin of safety and regulation," asking pointedly whether anyone other than Musk had ever publicly credited him with warning Obama about AI risks. The answer suggested the story belongs more to Musk's personal mythology than to documented history.

When Savitt pressed Musk on OpenAI's specific safety measures, Musk deflected by asserting that profit motive alone disqualifies the company from being trustworthy. Asked about OpenAI's safety documentation systems, Musk responded with what observers described as his internet troll persona, mocking the terminology and suggesting he didn't take the question seriously.

The trial also touched on a conspicuous absence: the troubling behavior of Musk's own Grok chatbot. Reports have documented instances of the tool generating racist messages, creating non-consensual deepfakes of adults, and producing explicit images of children. When Savitt hinted at these problems by noting that Grok was trained on racist and sexist content, Musk dismissed the concern, claiming that exposure to offensive material doesn't determine an AI system's behavior.

The cross-examination continues in Oakland, California. Savitt told reporters afterward that he found Musk to be a reluctant witness, suggesting the testimony revealed tensions between Musk's public safety claims and his competitive business interests.

Author James Rodriguez: "Musk's courtroom argument works only if you ignore his own for-profit AI ambitions and Grok's documented harms. The strategy seems less about convincing anyone of his safety credentials and more about creating enough doubt to undermine OpenAI's position."
