OpenAI was a research lab — now it’s just another tech company

Here’s the thing about asking investors for money: they want to see returns.

OpenAI launched with a famously altruistic mission: to help humanity by developing artificial general intelligence. But along the way, it became one of the best-funded companies in Silicon Valley. Now, the tension between those two facts is coming to a head. 

Weeks after releasing a new model it claims can “reason,” OpenAI is barreling toward dropping its nonprofit status, some of its most senior employees are leaving, and CEO Sam Altman — who was once briefly ousted over apparent trust concerns — is solidifying his position as one of the most powerful people in tech.

On Wednesday, OpenAI’s longtime chief technology officer, Mira Murati, announced she’s leaving “to create the time and space to do my own exploration.” The same day, chief research officer Bob McGrew and VP of post training Barret Zoph said they would depart as well. Altman called the leadership changes “a natural part of companies” in an X post following Murati’s announcement.

“I obviously won’t pretend it’s natural for this one to be so abrupt, but we are not a normal company,” Altman wrote.

But it continues a pattern of departures that has been building since the board's failed attempt to fire Altman last year. OpenAI cofounder and chief scientist Ilya Sutskever, who delivered Altman the news of his firing before publicly walking back his criticism, left OpenAI in May. Jan Leike, a key OpenAI safety researcher, quit just days later, saying that "safety culture and processes have taken a backseat to shiny products." Nearly all the board members in place at the time of the ouster, except Quora CEO Adam D'Angelo, have resigned, and Altman secured a seat for himself.

The company that once fired Altman for being “not consistently candid in his communication” has since been reshaped by him.

No longer just a “donation”

OpenAI started as a nonprofit lab and later grew a for-profit subsidiary, OpenAI LP. The for-profit arm can raise funds to build artificial general intelligence (AGI), but the nonprofit’s mission is to ensure AGI benefits humanity. 

In a bright pink box on a webpage about OpenAI’s board structure, the company emphasizes that “it would be wise” to view any investment in OpenAI “in the spirit of a donation” and that investors could “not see any return.”


Investor profits are capped at 100x, with any excess returns flowing to the nonprofit, which is meant to put societal benefit over financial gain. And if the for-profit side strays from that mission, the nonprofit side can intervene.

We’re way past the “spirit of a donation” here

Reports claim OpenAI is now approaching a $150 billion valuation — about 37.5 times its reported revenue — with no path to profitability in sight. It's looking to raise funds from the likes of Thrive, Apple, and an investment firm backed by the United Arab Emirates, with a reported minimum investment of a quarter of a billion dollars.

OpenAI doesn't have the deep pockets or established businesses of Google or Meta, which are both building competing models (though it's worth noting that these are public companies with their own responsibilities to Wall Street). Fellow AI startup Anthropic, founded by former OpenAI researchers, is nipping at OpenAI's heels while looking to raise new funds at a $40 billion valuation. We're way past the "spirit of a donation" here.

OpenAI’s “for-profit managed by a non-profit” structure puts it at a moneygrubbing disadvantage. So it made perfect sense that Altman told employees earlier this month that OpenAI would restructure as a for-profit company next year. This week, Bloomberg reported that the company is considering becoming a public benefit corporation (like Anthropic) and that investors are planning to give Altman a 7 percent stake. (Altman almost immediately denied this in a staff meeting, calling it “ludicrous.”)

And crucially, in the course of these changes, OpenAI’s nonprofit parent would reportedly lose control. Only a few weeks after this news was reported, Murati and company were out.

Both Altman and Murati claim that the timing is only coincidental and that the CTO is just looking to leave while the company is on the “upswing.” Murati (through representatives) declined to speak to The Verge about the sudden move. Wojciech Zaremba, one of the last remaining OpenAI cofounders, compared the departures to “the hardships parents faced in the Middle Ages when 6 out of 8 children would die.”

Whatever the reason, this marks an almost total turnover of OpenAI leadership since last year. Besides Altman himself, the last remaining member seen on a September 2023 Wired cover is president and cofounder Greg Brockman, who backed Altman during the coup. But even he’s been on a personal leave of absence since August and isn’t expected to return until next year. The same month he took leave, another cofounder and key leader, John Schulman, left to work for Anthropic.


When reached for comment, OpenAI spokesperson Lindsay McCallum Rémy pointed The Verge to previous comments made to CNBC.

And no longer just a “research lab”

As Leike hinted in his goodbye message about "shiny products," turning the research lab into a for-profit company puts many of its longtime employees in an awkward spot. Many likely joined to focus on AI research, not to build and sell products. And while OpenAI is still controlled by its nonprofit, it's not hard to guess how a fully profit-focused version would operate.

Research labs work on longer timelines than companies chasing revenue. They can delay product releases when necessary, with less pressure to launch quickly and scale up. Perhaps most importantly, they can be more conservative about safety.

There's already evidence OpenAI is prioritizing fast launches over cautious ones: a source told The Washington Post in July that the company threw a launch party for GPT-4o "prior to knowing if it was safe to launch." The Wall Street Journal reported on Friday that safety staffers worked 20-hour days and didn't have time to double-check their work. Initial test results showed GPT-4o wasn't safe enough to deploy, but it was deployed anyway.

Meanwhile, OpenAI researchers are continuing to work on building what they consider to be the next steps toward human-level artificial intelligence. o1, OpenAI’s first “reasoning” model, is the beginning of a new series that the company hopes will power intelligent automated “agents.” The company is consistently rolling out features just ahead of competitors — this week, it launched Advanced Voice Mode for all users just days before Meta announced a similar product at Connect.

So, what is OpenAI becoming? All signs point to a conventional tech company under the control of one powerful executive — exactly the structure it was built to avoid. 

“I think this will be hopefully a great transition for everyone involved and I hope OpenAI will be stronger for it, as we are for all of our transitions,” Altman said onstage at Italian Tech Week just after Murati’s departure was announced.
