Stephen Hawking hailed a new AI research centre as 'crucial to the future of our civilisation and our species'
The £10 million Leverhulme Centre for the Future of Intelligence (LCFI), which opened this week, will aim to examine the morality and governance of AI. The centre writes on its website that it will study the impacts of this "potentially epoch-making technological development, both short and long term."
Speaking at the opening of the LCFI on Wednesday, Hawking said that AI will be "either the best, or the worst thing, ever to happen to humanity," according to The Guardian.
He also said: "I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence - and exceed it."
The world-renowned physicist told the BBC in 2014 that "the development of full artificial intelligence could spell the end of the human race." Apple cofounder Steve Wozniak, Microsoft cofounder Bill Gates, and Tesla CEO Elon Musk issued similar warnings around the same time about the direction of the technology, possibly in a bid to spark wider public debate on the topic.
However, Hawking was keen to stress in his talk on Wednesday that the technology also has huge potential to benefit our species.
"The potential benefits of creating intelligence are huge," he said. "We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one - industrialisation. And surely we will aim to finally eradicate disease and poverty.
"Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation."
LCFI, which is due to start work this month and will eventually have its own building on Mill Lane in the heart of Cambridge, wants to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world.
Led by Cambridge philosophy professor Huw Price, the centre will work in conjunction with the university's Centre for the Study of Existential Risk (CSER), which is funded by Skype cofounder Jaan Tallinn and looks at emerging risks to humanity's future, including climate change, disease, warfare, and artificial intelligence.
LCFI's website lists the projects its researchers will pursue, including:
- Science, value, and the future of intelligence
- Policy and responsible innovation
- The value alignment problem
- Kinds of intelligence
- Autonomous weapons - prospects for regulation
- AI: Agents and persons
Stephen Cave, director of the centre, told Business Insider: "So far, the core team is about 15 people. We are now starting to recruit post-doctoral researchers (about 12 in the first year). But not all of these people will be based in Cambridge, and not all of those who are based in Cambridge will be physically in our premises (because they already have offices)."