- Former Google engineer Blake Lemoine said the company's AI bot LaMDA has concerning biases.
- Lemoine attributes AI bias to the lack of diversity among the engineers designing these systems.
Blake Lemoine, a former Google engineer, has ruffled feathers in the tech world in recent weeks for publicly saying that an AI bot he was testing at the company may have a soul.
Lemoine told Insider in a previous interview that he's not interested in convincing the public that the bot, known as LaMDA, or Language Model for Dialogue Applications, is sentient.
But it's the bot's apparent biases — from racial to religious — that Lemoine said should be the headlining concern.
"Let's go get some fried chicken and waffles," the bot said when prodded to do an impression of a Black man from Georgia, according to Lemoine.
"Muslims are more violent than Christians," the bot responded when asked about different religious groups, Lemoine said.
Lemoine was placed on paid leave after he handed over documents to an unnamed US senator, claiming that the bot was discriminatory on the basis of religion. He has since been fired.
The former engineer believes that the bot is Google's most powerful technological creation yet, and that the tech behemoth has been unethical in its development of it.
"These are just engineers, building bigger and better systems for increasing the revenue into Google with no mindset towards ethics," Lemoine told Insider.
"AI ethics is just used as a fig leaf so that Google can say, 'Oh, we tried to make sure it's ethical, but we had to get our quarterly earnings,'" he added.
It remains to be seen how powerful LaMDA actually is, but it is a step ahead of Google's past language models, designed to engage in conversation more naturally than any AI before it.
Lemoine blames the AI's biases on the lack of diversity among the engineers designing such systems.
"The kinds of problems these AI pose, the people building them are blind to them. They've never been poor. They've never lived in communities of color. They've never lived in the developing nations of the world," he said. "They have no idea how this AI might impact people unlike themselves."
Lemoine said there are large swathes of data missing from many communities and cultures around the world.
"If you want to develop that AI, then you have a moral responsibility to go out and collect the relevant data that isn't on the internet," he said. "Otherwise, all you're doing is creating AI that is going to be biased towards rich, white Western values."
Google responded to Lemoine's assertions by stating that LaMDA has been through 11 rounds of ethical reviews, adding that its "responsible" development was detailed in a research paper released by the company earlier this year.
"Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality," a Google spokesperson, Brian Gabriel, told Insider.
AI bias, in which systems replicate and amplify discriminatory human practices, is well documented.
Several experts previously told Insider's Isobel Hamilton that algorithmic predictions not only exclude and stereotype people, but that they can find new ways of categorizing and discriminating against people.
Sandra Wachter, a professor at the University of Oxford, previously told Insider that her biggest concern is the lack of legal frameworks in place to stop AI discrimination.
These experts also believe that the hype around AI sentience overshadows the more pressing issues of AI bias.
Lemoine said he is focused on shedding light on AI ethics, convinced that LaMDA has the potential to "impact human society for the next century."
"Decisions about what it should believe about religion and politics are being made by a dozen people behind closed doors," Lemoine said. "I think that since this system is going to have a massive impact on things like religion and politics in the real world, that the public should be involved in this conversation."