But the collision between Musk’s Twitter takeover and the chaos at OpenAI reveals something even bigger than the shifting tectonic plates of social media: the extent of the society-shaping power wielded by a very small group of Silicon Valley titans.
One of Sam Altman’s co-founders at OpenAI was, after all… Elon Musk. Musk turned his back on the project after a complex disagreement with Altman over its progress relative to Google DeepMind, as well as his own beliefs about AI development, and has now launched his own AI company.
Meanwhile, Greg Brockman, the OpenAI president ousted alongside Altman, was the first CTO of Stripe, which raised funding from Musk and fellow OpenAI co-founder Peter Thiel. Musk also used his huge account to intervene personally in the OpenAI drama this weekend, tweeting directly at the company’s board member and chief scientist Ilya Sutskever, someone he said “has a good moral compass and does not seek power.”
And if you think OpenAI’s governance shakeup is an opportunity to sever those ties, think again. OpenAI’s new CEO Emmett Shear is a visiting partner at Y Combinator, the startup accelerator where Sam Altman was once president. Y Combinator still serves as the business and social hub of Silicon Valley, and two of its co-founders, Jessica Livingston and Trevor Blackwell, were also, you guessed it, co-founders of OpenAI. Are you still following?
With “how to govern AI” still Topic A in Washington (or close to it), the blowup at AI’s hottest company highlights a particularly thorny challenge for regulators trying to shape the technology’s future.
Individual personalities – and individual fortunes – matter far more in the world of Silicon Valley startups than in the traditional, more consensus-oriented bureaucracies of American business. Once, corporate names like Morgan, Rockefeller and Ford ran national policy from their boardroom seats, a version of America we might have thought had been put to rest. Not in tech: Today we take for granted that Bezos, Zuckerberg and Musk are more or less synonymous with their corporate empires. (Perhaps that’s the fault of Steve Jobs, the charismatic Apple co-founder who towers over them all in the minds of company builders.)
Large organizations move slowly and respond to the rules. Startup titans, not so much. It’s extremely difficult to imagine more established tech giants like IBM or Microsoft changing their business model on a whim or a personal passion, as with Musk’s crusade for free speech on Twitter, Mark Zuckerberg’s sudden commitment to the metaverse, or Altman’s belief in human-like AI superintelligence.
OpenAI, in particular, was intended to fulfill a larger mission under its unconventional nonprofit structure, but it has become clear how much of the company is shaped by a single person, its ousted CEO. Samuel Hammond, senior economist at the Foundation for American Innovation and a blogger specializing in AI and governance, calls it a “sectarian and borderline messianic employee culture, as evidenced by their willingness to all quit in solidarity,” citing social media reports that Altman personally interviewed every new hire at the company, a practice he previously championed in a blog post.
He described to me how Sam Altman’s personal beliefs came to define the company, and with it the broader existential debate around the potential emergence (and risk) of superhuman “AGI,” or artificial general intelligence.
“Over the past year, Altman reoriented OpenAI to be even more mission-driven, changing its core values to emphasize that anything that didn’t advance AGI was ‘out of scope,’” Hammond said.
The lesson, not only for America but for humanity at large, is that a very small group of people has managed to exert total, personalized control over many of the systems, from Musk’s social media platform to Altman’s intelligent machines, that shape the present and future of society.
Regulators and critics have proposed strategies to rein in this influence, from the European Union’s elaborate regulatory regime to Federal Trade Commission Chair Lina Khan’s zeal for antitrust enforcement to proposals aimed at emulating the governance of Silicon Valley itself.
None have yet succeeded. Consumer Financial Protection Bureau Director Rohit Chopra told Morning Money’s Sam Sutton this week that the dawn of powerful AI is creating new urgency for tech regulators: “There’s a race to develop the foundational models of AI. There probably won’t be tons of these models. It could actually be a natural oligopoly,” he said. “The fact that big tech companies now straddle the major foundational models of AI adds even more questions about what we do to ensure they don’t have outsized power.”
Personal dominance in Silicon Valley had major, well-documented ramifications in the era of startup culture dominated by app-based social media and connectivity companies like Facebook or Uber. It will have even bigger ones in the age of AI, where, realistic or not, the discourse is defined by arguments about the very fate of humanity.