Open letter from Elon Musk and Steve Wozniak to AI developers: Stop AI experiments

More than 1,000 people, including AI experts, technologists and businesspeople, have asked AI labs to stop training systems “more powerful than GPT-4.”

Signatories to the open letter, published on Wednesday by the Future of Life Institute, a non-profit organization that aims to reduce the risks of transformative technology, include Apple co-founder Steve Wozniak; Elon Musk, CEO of SpaceX, Tesla and Twitter; Emad Mostaque, CEO of Stability AI; Tristan Harris, executive director of the Center for Humane Technology; and Yoshua Bengio, founder of the AI research institute Mila.

We see AI labs racing to create ever more powerful digital minds that no one, not even their creators, can understand or control


The letter reads as follows: “Advanced AI could represent a profound change in the history of life on Earth, and it should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening; in recent months we have seen AI labs racing to develop ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control.”

Should we develop non-human minds that might eventually outnumber us, outsmart us, make us obsolete and replace us?

Companies like OpenAI, Microsoft and Google are constantly chasing breakthroughs with generative AI models. Driven by the ambition to gain a foothold in the AI market, they announce new developments and products almost daily. But all of this is happening too fast, the letter argues, without ethical, safety and regulatory concerns being taken seriously:

“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber us, outsmart us, make us obsolete and replace us? Should we risk losing control of our civilization?”

‘Take a six-month pause’

The open letter calls for a six-month pause in the training of AI systems more powerful than GPT-4: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

AI labs should use this time to jointly develop shared safety protocols that can be audited and overseen by independent outside experts, the letter says. If the labs do not pause voluntarily, it calls on governments to step in and impose a moratorium.

The letter also calls on lawmakers to establish oversight and tracking of highly capable AI systems, watermarking to help distinguish real from AI-generated content, auditing and licensing of AI models, legal liability for AI-caused harm, robust public funding for AI safety research, and well-resourced institutions to cope with the dramatic economic and political disruptions, especially to democracy, that AI will cause.

The letter does not say that AI development should stop altogether; it only calls for tapping the brakes, giving societies a chance to make the AI revolution safe and beneficial for all. It concludes: “Let’s enjoy a long AI summer, not rush unprepared into a fall.”
