- Geoffrey Hinton says the dangers of AI chatbots were ‘quite scary’ and warns they could be exploited by ‘bad actors.’
- He worked at Google on AI development for ten years, but now regrets his involvement with the technology.
- Geoffrey Hinton is an eminent figure in artificial intelligence and machine learning.
In a recent tweet, Hinton explained that he left Google so that he could speak openly about the risks associated with AI. He clarified that he did not resign due to any criticism of Google. In fact, he praised the tech giant for its responsible approach to AI.
Hinton highlighted some of the dangers posed by AI chatbots, including the possibility that they could be exploited by ‘bad actors.’ He also expressed concern about the ‘existential risk’ posed by AI systems that become more intelligent than humans.
Geoffrey Hinton is an eminent figure in the field of artificial intelligence and machine learning. He holds a BA in Experimental Psychology from Cambridge and a PhD in Artificial Intelligence from Edinburgh. He has held academic positions at prestigious institutions, including the University of Toronto, where he is now an emeritus distinguished professor, and he worked at Google from 2013 until his recent departure. Hinton's contributions to neural network research include the development of back-propagation, Boltzmann machines, and deep belief nets. His research in deep learning has had a significant impact on speech recognition and object classification.
Hinton has received numerous awards, including the first David E. Rumelhart Prize and the IEEE James Clerk Maxwell Gold Medal. He is a fellow of several scientific societies and has received honorary doctorates from various universities.
Geoffrey Hinton's recent decision to speak out against AI-powered chatbots comes as lawmakers, advocacy groups, and tech insiders grow increasingly concerned about the potential for these tools to spread misinformation and displace jobs. ChatGPT's popularity last year spurred a race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft, and Google are at the forefront of this trend, with other companies such as IBM, Amazon, Baidu, and Tencent also working on similar technologies.
In March, a group of experts, including tech mogul Elon Musk, signed an open letter calling for a pause in the development of AI chatbots more advanced than ChatGPT until robust safety measures could be implemented. Yoshua Bengio, another prominent figure in AI and a co-winner of the 2018 Turing Award, also signed the letter, citing the unexpected acceleration in AI systems. However, Dr. Geoffrey Hinton disagreed, saying that he believed AI would provide more benefits than risks in the short term. He added that a pause would be difficult to enforce because of international competition, and that it was the government's responsibility to regulate the development of AI.
Speaking to the BBC, Dr. Hinton emphasized that he did not intend to criticize Google and praised the company for being ‘very responsible.’ "I actually want to say some good things about Google," he said. "And they're more credible if I don't work for Google."
Google CEO Sundar Pichai recently admitted in an interview that even he did not fully understand everything the company's AI chatbot, Bard, could do. The broader concern is that AI development is a speeding train, and the worry is that one day it may start building its own tracks.