Sundar Pichai says ethicists and philosophers need to be involved in the development of AI to make sure it is moral, and doesn't do things like lie
- AI development needs input from social scientists, ethicists, and philosophers, Sundar Pichai said.
- The Google CEO told CBS that AI systems need to be "aligned to human values, including morality."
Social scientists, ethicists, and philosophers need to be involved in the development of AI, Google CEO Sundar Pichai told CBS' "60 Minutes."
As generative AI gains traction and companies rush to incorporate it into their operations, concerns have mounted over the ethics of the technology. Deepfake images have circulated online, such as ones showing former President Donald Trump being arrested, and some testers have found that AI chatbots will give advice related to criminal activities, including tips on how to murder people.
AI is known to sometimes hallucinate — make up information and continuously insist that it's true — creating fears that it could spread false information. It can also develop bias and in some cases has argued with users. Some scammers have also used AI voice-cloning software in attempts to pose as relatives.
"How do you develop AI systems that are aligned to human values, including morality?" Pichai said. "This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on."
"I think we have to be very thoughtful," Pichai continued. "And I think these are all things society needs to figure out as we move along. It's not for a company to decide."
Generative AI is being used in people's personal, professional, and academic lives, from writing dating-app messages and job applications to producing code and essays. Advocates say that beyond saving time on mundane, routine tasks, gathering and summarizing research, and conveying information concisely, AI could also transform healthcare and education.
But despite its positive use cases, concerns about the technology are mounting.
"I think if I take a 10-year outlook, it is so clear to me we will have some form of very capable intelligence that can do amazing things and we need to adapt as a society for it," Pichai told "60 Minutes."
Tesla CEO Elon Musk and Apple cofounder Steve Wozniak are among the tech leaders, software engineers, and professors who have signed an open letter calling for a six-month pause on advanced AI development so that researchers can assess the potential risks of the technology.
Though Pichai himself isn't listed as a signatory, he has called the letter, which has been backed by dozens of Google employees, a "conversation starter."
Some involved in the space have even warned that AI could pose an existential threat to humanity, with one AI investor saying last week that if left unchecked, it could potentially "usher in the obsolescence or destruction of the human race."
Though much of the buzz so far has been based on OpenAI's ChatGPT, Google is also developing Bard, its own AI chatbot. Pichai told The New York Times' "Hard Fork" podcast late last month that Google had been testing integrating Bard with its other products, like Gmail, but that it didn't want to rush the software's release.