
Threat or not? The Godfathers of AI spar on their views about its impact

Will AI take over humanity? Will it become a threat in the near future? Or will it be a blessing in disguise that makes our lives easier? These are some of the questions being asked as the world adapts to the “AI way of life.” Ever since OpenAI launched its generative AI chatbot, ChatGPT, discussions around AI’s impact on humanity have intensified. While some believe AI will eventually take over humanity, others are more optimistic about its impact. It is not just the public: even the “Godfathers of AI,” whose work made the likes of ChatGPT possible, hold differing views on its impact.

Geoffrey Hinton, Yoshua Bengio, and Yann LeCun jointly received the 2018 Turing Award for their pioneering work in deep learning, earning them the title of the “Godfathers of AI.” While they all played pivotal roles in laying the foundations of modern AI, their views on its future diverge sharply.

Geoffrey Hinton is afraid of his own creation


Hinton recently made headlines for winning a Nobel Prize in Physics, along with Professor John Hopfield, for their work in machine learning. After winning the award, however, he had a warning for the world.

“Quite a few good researchers believe that sometime in the next 20 years, AI will become more intelligent than us, and we need to think hard about what happens then,” he said, according to a report on Independent.co.uk.

“If you look around, there are very few examples of more intelligent things being controlled by less intelligent things, which makes you wonder whether when AI gets smarter than us, it’s going to take over control,” he added.

This wasn’t the first time that Hinton warned the world about his own creation. Last year, he left his job at Google to speak out about his growing concerns over AI’s trajectory.

In an interview with MIT Technology Review, Hinton had explained his departure from Google as motivated by his desire to speak openly about these dangers: “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business.”

Since then, he has been candid about his anxieties surrounding the potential consequences of AI.

During the same MIT Technology Review interview, referring to AI and people’s obsession with it, he had also said, “Sometimes I think it’s as if aliens had landed, and people haven’t realised because they speak very good English.”

In various other interviews, Hinton has time and again painted an increasingly alarming picture of AI’s rapid evolution. He believes that as these systems continue to develop, they may ultimately achieve a form of intelligence superior to human cognition. Hinton has also voiced concerns that we are on a path towards a reality where AI outpaces human control.

While there have been ample discussions about people potentially losing their jobs to AI, for Hinton the threats go much further than that. He worries about the existential risk it might present.

In an interview with CBS News’ 60 Minutes, Hinton had said that AI will be able to manipulate people in the future, as the technology has learnt from all the books that were ever written.

"They will be able to manipulate people, right?" he had said and added, "And these will be very good at convincing people because they'll have learned from all the novels that were ever written — all the books by Machiavelli, all the political connivances, they'll know all that stuff. They'll know how to do it."

At the same time, he has also said that the AI threat might not materialise at all.

In response to an inquiry from Business Insider’s Jordan Hart, Hinton had said that it could be anywhere from five to 20 years before AI might pose a real risk. He also acknowledged the possibility that such a threat may never come to pass, depending on how advancements in AI are managed.

Hinton has also drawn comparisons between AI’s threat and that of nuclear weapons, underscoring his conviction that a concerted, international regulatory approach might be essential for curbing AI’s potential dangers.

In an interview with Reuters last year, Hinton described AI as a potential risk even more pressing than climate change. While he underscored the importance of addressing climate issues, he argued that AI’s potential hazards might demand faster action. He noted that climate change solutions are relatively well understood—cutting carbon emissions can gradually mitigate the problem. However, with AI, the path forward remains unclear, leaving scientists and policymakers without a straightforward solution.

Meta’s top AI scientist is in favour of AI


Yann LeCun received the Turing Award alongside Hinton. Now Meta’s chief AI scientist, LeCun was once a student of Hinton. However, his views on AI’s impact stand in stark contrast to those of his former teacher.

LeCun has remained an outspoken advocate for AI, dismissing warnings of AI-induced existential threats and asserting that fears of AI surpassing humanity are “ridiculous.”

LeCun, who joined Meta in 2013 as the founding director of its AI research lab, has been quick to dismiss the notion that AI will outpace human control or evolve into a “rogue intelligence.” He has argued that these fears stem more from human imagination than from technical reality.

“If it were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17-year-old,” LeCun quipped in an X post in response to Elon Musk’s fears about AI surpassing human intelligence.

LeCun’s optimistic outlook is shaped by his belief that current AI systems, including large language models like ChatGPT and Meta’s own Llama 2, are still far from developing a genuine form of understanding.

According to LeCun, fears of a “hard take-off”—where AI suddenly achieves runaway intelligence—are misguided.

In an interview with Wired last year, he said that fear around AI was being “exploited” and that every revolutionary technology is scary at first.

He had said, “AI will bring a lot of benefits to the world. But people are exploiting the fear about the technology, and we’re running the risk of scaring people away from it. That's a mistake we made with other technologies that revolutionized the world. Take the invention of the printing press in the 15th century. The Catholic Church hated it, right? People were going to be able to read the Bible themselves and not talk to the priest. Pretty much all the establishment was against the wide use of the printing press because it would change the power structure. They were right—it created 200 years of religious conflict. But it also brought about the Enlightenment.”

Moreover, speaking at a Meta event in Paris, LeCun termed fears around AI as “preposterously ridiculous” and said that AI is a promising tool, not an existential risk.

According to him, while AI may eventually surpass human intelligence, that moment is likely decades away. He told the BBC that it would be a “huge mistake” to restrict AI research, arguing that such concerns stem from a lack of understanding of how to make AI safer.

While LeCun dismisses the notion of AI becoming uncontrollably intelligent, he does support the development of open-source AI, which he believes could democratize the technology and mitigate risks associated with centralized control.

In a letter to President Joe Biden, LeCun and other AI scientists advocated for an open-source approach, contending that AI should not be monopolized by a select few companies.

The future of AI


Thus, these two Godfathers of AI hold opposing views, and their disagreement echoes larger tensions in the tech industry. The rapid advancement of generative AI tools has rekindled debate over whether governments should intervene to regulate AI’s development. The US, the EU, and other jurisdictions have proposed frameworks aimed at governing AI, with some leaders calling for bans on AI in military applications or strict controls on its use in sensitive sectors.

While Hinton is a strong advocate for such restrictions, LeCun cautions against excessive regulation, arguing that too many constraints could stifle innovation. In an interview with the BBC, LeCun had said that “keeping AI research under lock and key” would be a mistake.

The disagreement also extends to how AI’s impact on employment and societal roles is perceived. While Hinton warns of mass job displacement as machines take over increasingly complex tasks, LeCun remains unperturbed by such projections. He often argues that AI will complement rather than replace human work, citing past technological revolutions, like the industrial and digital ages, as examples of how new technology can create more opportunities than it displaces.
