When Is AI Gonna Kill Us?

Published: 2023-06-01


Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the statement posted online. Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety's website. Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.

The latest warning was intentionally succinct - just a single sentence - to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to prevent them, said Dan Hendrycks, executive director of the San Francisco-based Center for AI Safety. "There's a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority," Hendrycks said. "So we had to get people to sort of come out of the closet, so to speak, on this issue, because many were sort of silently speaking among each other."

More than 1,000 researchers and technologists, including Elon Musk and Apple co-founder Steve Wozniak, had signed a much longer letter earlier this year calling for a six-month pause on AI development. The letter warns that AI systems with "human-competitive intelligence can pose profound risks to society and humanity" - from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction. It says "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control." "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the letter says. "This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

The petition was organized by the nonprofit Future of Life Institute, which says confirmed signatories include the Turing Award-winning AI pioneer Yoshua Bengio and other leading AI researchers such as Stuart Russell and Gary Marcus. Others who joined include Wozniak, former U.S. presidential candidate Andrew Yang and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group known for its warnings against humanity-ending nuclear war.

Musk, who runs Tesla, Twitter and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI's existential risks. A more surprising inclusion is Emad Mostaque, CEO of Stability AI, which partners with Amazon and makes the AI image generator Stable Diffusion, a competitor to OpenAI's similar generator known as DALL-E.

Countries around the world are scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act, expected to be approved later this year. While there are concerns about the emergence of highly advanced and potentially malevolent AI, the worry for some signatories is not "superhuman" AI. They point out that tools like ChatGPT, however impressive, are essentially text generators: they predict which words should come next based on patterns learned from vast amounts of ingested written material.

Gary Marcus, a professor emeritus at New York University and a signatory of the letter, said he disagrees with those who primarily fear the near-future arrival of intelligent machines that can improve themselves beyond human control. His greater concern is the proliferation of "mediocre AI" that can be deployed at scale, letting criminals or terrorists deceive people or spread harmful misinformation. Marcus emphasized that current technology already poses risks we are unprepared for, and he worries that future advances will only make those dangers worse.
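For readers curious what "predicting which words come next" means in practice, here is a minimal Python sketch of the idea at toy scale: a bigram model that counts which word tends to follow which in a tiny corpus, then generates text one predicted word at a time. This is an illustration of the prediction principle only, not how ChatGPT is actually built; real systems use large neural networks trained on enormously more text, and the corpus and helper names here are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; real models ingest vast amounts of written material.
corpus = (
    "artificial intelligence poses risks to society and "
    "artificial intelligence offers benefits to society"
).split()

# Count, for each word, which words follow it and how often.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = transitions[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
text = ["artificial"]
for _ in range(6):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

Run it a few times and the output reads fluently enough ("artificial intelligence offers benefits to society and ...") while reflecting nothing but word-frequency statistics, which is the point the signatories make: plausible-sounding text requires no understanding, and that is precisely what makes "mediocre AI" useful for spreading misinformation at scale.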
