- DeepMind cofounder Mustafa Suleyman recently talked about setting boundaries on AI in an interview with the MIT Technology Review.
- He said that we should rule out AI's capacity to update its own code without oversight.
The rapid development of AI has raised questions about whether we're programming our own demise. As AI systems become more powerful, they could pose a greater risk to humanity if their goals stop aligning with ours.
To avoid that kind of doomsday scenario, Mustafa Suleyman, a cofounder of Google's AI division DeepMind, said there are certain capabilities we should rule out when it comes to artificial intelligence.
In a recent interview with the MIT Technology Review, Suleyman suggested that we should rule out "recursive self-improvement," the ability of an AI to make itself better over time.
"You wouldn't want to let your little AI go off and update its own code without you having oversight," he told the MIT Technology Review. "Maybe that should even be a licensed activity — you know, just like for handling anthrax or nuclear materials."
And while there's been a considerable focus on AI regulation at an institutional level — just last week, tech execs including Sam Altman, Elon Musk, and Mark Zuckerberg gathered in Washington for a closed-door forum on AI — Suleyman added that it's important for people to set limits around how their personal data is used, too.
"Essentially, it's about setting boundaries, limits that an AI can't cross," he told the MIT Technology Review, "and ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs — or with humans — to the motivations and incentives of the companies creating the technology."
Last year, Suleyman cofounded the AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told the MIT Technology Review that though Pi is not "as spicy" as other chatbots, it is "unbelievably controllable."
And while Suleyman told the MIT Technology Review he's "optimistic" that AI can be effectively regulated, he doesn't seem worried about a singular doomsday event. He told the publication that "there's like 101 more practical issues" we should be focusing on, from privacy and bias to facial recognition and online moderation.
Suleyman is just one of several experts in the field sounding off about AI regulation. Demis Hassabis, another DeepMind cofounder, has said that developing artificial general intelligence technologies should be done "in a cautious manner using the scientific method," with rigorous experiments and testing.
And Microsoft CEO Satya Nadella has said that the way to avoid "runaway AI" is to make sure we start by using it in categories where humans "unambiguously, unquestionably, are in charge."
Since March, almost 34,000 people, including "godfathers of AI" like Geoffrey Hinton and Yoshua Bengio, have also signed an open letter from the nonprofit Future of Life Institute calling for AI labs to pause training on any technology more powerful than OpenAI's GPT-4.