Pessimists and optimists battle for control of artificial intelligence

Sam Altman incident points to deep chasm in tech world

Even by the standards of the fast-moving tech world, the events of last weekend were unprecedented. On Friday, November 17, the board of OpenAI, a company at the forefront of the artificial intelligence revolution, abruptly dismissed Sam Altman, the company’s co-founder and CEO. It is not yet clear why the board lost confidence in Altman, but rumors point to concerns about his side projects, as well as fears that he moved too quickly to expand OpenAI’s commercial offerings without considering the safety implications, at a company that had promised to develop the technology for the ‘maximum benefit of humanity.’

For two days after his ouster, the company’s investors and some of its employees called for Altman’s reinstatement. But the board stuck to its position. In the final hours of Sunday, November 19, Emmett Shear, former CEO of the live-streaming platform Twitch, was appointed interim CEO. More surprising still, Satya Nadella, CEO of Microsoft, one of OpenAI’s biggest investors, announced the next day on the X platform (formerly Twitter) that Altman and a group of OpenAI employees would join the software giant to lead a new advanced AI research team.

“Pessimists believe the unleashing of AI poses an existential threat to humanity and are calling for strict regulation.”

Pessimists and optimists

The events at OpenAI are undoubtedly the most dramatic manifestation yet of a broad divide in Silicon Valley. On one side are the ‘pessimists’, who believe that the unleashing of AI poses an existential threat to humanity and therefore call for stricter regulation. On the other are the ‘optimists’, who downplay fears of an AI-induced doomsday and emphasize the technology’s potential to accelerate progress. Whichever camp proves more influential can encourage or forestall stricter regulation, and that in turn will determine who benefits most from AI in the future.

OpenAI’s corporate structure straddles this divide. The company was founded in 2015 as a non-profit organization. Three years later it created a for-profit subsidiary to fund the expensive computing power and brainpower needed to advance the technology. Satisfying the competing goals of pessimists and optimists was always going to be difficult.

To some extent, this division reflects philosophical differences. Many in the pessimists’ camp are influenced by ‘effective altruism’, a movement whose adherents worry that AI could destroy all of humanity. Among the worried is Dario Amodei, who left OpenAI to found Anthropic, another model-maker. Other big tech companies, including Microsoft, share concerns about AI safety, though they are not among the pessimists.

Optimists espouse a worldview called ‘effective accelerationism’, which argues that AI development should not only proceed unhindered but be accelerated. The movement’s champions include Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz. Other AI researchers seem sympathetic to the cause: Yann LeCun of Meta and Andrew Ng, along with a number of startups including Hugging Face and Mistral AI, are advocating for less restrictive regulation.

“Microsoft’s lead is largely based on its massive investment in OpenAI, and Amazon plans to invest nearly four billion dollars in Anthropic.”

Altman’s position

Altman seems to sympathize with both groups, publicly calling for ‘guardrails’ to make AI safe while pushing OpenAI to develop more powerful models and launch new tools, such as an app store where users can build their own chatbots. Microsoft, OpenAI’s largest investor, which has poured more than $10 billion into the company in exchange for a 49 percent stake without gaining a seat on the parent non-profit’s board, is said to have been upset to learn that Altman had been let go without notice. That may explain why Microsoft offered jobs to Altman and his colleagues.

But there is more going on here than abstract philosophy; the two groups are also divided commercially. The pessimists are early movers in the AI race, with deeper pockets, and tend to favor proprietary models. The optimists are more often smaller companies that are catching up, and they tend to favor open-source software.

Start with the first movers. OpenAI’s ChatGPT reached 100 million users just two months after its launch, closely followed by Anthropic, founded by OpenAI defectors and now valued at $25 billion. Researchers at Google wrote the original paper on large language models, the software that is trained on vast amounts of data and underpins all chatbots, including ChatGPT. Google has been building bigger and smarter models, as well as a chatbot called Bard.

Microsoft’s lead, meanwhile, is largely based on its massive investment in OpenAI, and Amazon plans to invest nearly four billion dollars in Anthropic. But being a first mover in technology doesn’t always guarantee success. In a market where technology and demand are advancing rapidly, new entrants have great opportunities to disrupt existing companies.

This could further strengthen the pessimists’ push for stricter rules. In testimony to the US Congress last May, Altman expressed concern that the industry “could do great harm to the world” and urged policymakers to enact specific regulations for AI. That same month, a group of 350 AI scientists and tech executives, including from OpenAI, Anthropic and Google, signed a one-line statement warning of the ‘risk of extinction’ posed by AI, ranking it alongside nuclear war and pandemics. Despite the dire predictions, none of the companies that endorsed the statement paused their own work on building more capable AI models.

Sam Altman, programmer, co-founder and CEO of OpenAI

Politics and companies

Politicians are trying to show that they take the dangers seriously. In July, President Joe Biden’s administration called on seven major model-makers, including Microsoft, OpenAI, Meta and Google, to make “voluntary commitments” to have their AI products reviewed by experts before release to the public. On November 1, the British government got a similar group to sign another non-binding agreement allowing regulators to test their AI products for trustworthiness and harmful capabilities, such as endangering national security. Days before, Biden had issued a far more forceful executive order. It requires any AI company building models above a certain size, defined by the computing power the software needs, to report the results of its safety tests to the government.

Another dividing line between the two groups is the future of open-source AI. The large language models of OpenAI, Anthropic and Google are proprietary. The February release of Llama, a model created by Meta, spurred a wave of activity in open-source AI. Proponents argue that open-source models are safer because they are open to scrutiny. Critics worry that making these powerful models publicly available allows bad actors to use them for malicious purposes.

“So far, regulators seem to be favorably disposed to the pessimists’ claims. Biden’s executive order could stifle the use of open-source AI.”

But the open-source debate may also reflect commercial motives. Venture capital firms, for example, champion it perhaps because the startups they back stand to profit from building on freely available models. Established companies, meanwhile, may fear the competitive threat. A document written by Google insiders and leaked in May argued that open-source models achieve results comparable to their proprietary counterparts on some tasks, at a far lower cost. The document concluded that neither Google nor OpenAI had any defensive moat against open-source competitors.

So far, regulators seem to be favorably disposed to the pessimists’ claims. Biden’s executive order could stifle the use of open-source AI. Its broad definition of ‘dual-use’ models, those with potential military as well as civilian applications, imposes complex reporting requirements on their makers. How enforceable these rules are today is unclear, but they could gain teeth over time, for example if new laws are passed.

Not all big tech companies fall neatly on one side of the dividing line. Meta’s decision to open-source its AI models has made it an unlikely champion of startups, giving them access to powerful models on which to build innovative products. Meta is betting that the surge of innovation spurred by open-source tools will eventually help it produce new forms of content that engage its users and keep advertisers happy. Apple is another outlier. The world’s most valuable tech company has been remarkably quiet about AI. When it launched a new iPhone in September, the company showcased numerous AI-powered features without ever using the term.

That looks smart. The debacle at OpenAI shows just how damaging the ideological wars over AI can be. But it is these wars that will shape how the technology progresses, how it is regulated, and who walks away with the spoils.
