- Douglas Merrill is the former Google chief information officer and current CEO of ZestFinance, a financial services AI software firm.
- Speaking to Business Insider, the 49-year-old explained why he thinks AI should be regulated, and who should be responsible for the regulation.
- Merrill outlined four steps for keeping AI bias in check, at the same time as promoting the technology and protecting consumers.
As some of the biggest tech companies in the world grapple with the ethics of artificial intelligence, people with specific, actionable proposals for regulating the technology can be hard to find.
But one person with strong views on how AI should be kept in check is Douglas Merrill, the former Google chief information officer and current CEO of ZestFinance, a financial services AI software firm.
Merrill's company has partnerships with firms like Microsoft to bring greater transparency to AI models, and he says that regulation is needed to promote the technology while also protecting consumers.
"We have to know when and where it's safe to use AI," Merrill says. "We need tools to check bias in the data we feed to our AI models, and tools that explain the decisions being made inside those 'black boxes.'
"AI can only do what you tell it to do, and think only what you tell it to think. If a system produces biased or unfair results, that's more often the fault of human error or training the AI models on incorrect or biased data."
In an interview with Business Insider, Merrill set out four "actionable tips" for regulating AI:
1) Track the data being fed to an AI
Merrill says companies must keep track of the data they are analysing and feeding into their AI models. "Consumers should also have the right to say 'that data is false,' and upon being told the data is false, companies should just throw it away," he explains.
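Merrill doesn't describe an implementation, but the mechanics he sketches (log every data point with its source, and throw it away outright when a consumer disputes it) could look something like the minimal Python sketch below. The DataLedger class and its field names are hypothetical, not any real system's API.

```python
# A toy provenance ledger: every data point is logged with its source,
# and a consumer dispute removes it before it can be fed to a model.
# The class and field names are hypothetical.

from datetime import datetime, timezone

class DataLedger:
    def __init__(self):
        self.records = {}  # record_id -> metadata and value

    def ingest(self, record_id, consumer_id, source, value):
        """Log a data point and where it came from before any model sees it."""
        self.records[record_id] = {
            "consumer_id": consumer_id,
            "source": source,
            "value": value,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        }

    def dispute(self, record_id):
        """Consumer says 'that data is false': throw it away entirely."""
        self.records.pop(record_id, None)

    def model_inputs(self, consumer_id):
        """Only records that were never disputed are ever fed to the model."""
        return [r["value"] for r in self.records.values()
                if r["consumer_id"] == consumer_id]

ledger = DataLedger()
ledger.ingest("r1", "alice", "credit_bureau", {"late_payments": 3})
ledger.ingest("r2", "alice", "bank_statement", {"income": 52000})
ledger.dispute("r1")                 # Alice flags the bureau record as false
print(ledger.model_inputs("alice"))  # [{'income': 52000}] -> disputed data is gone
```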
2) Explain the decisions an AI makes
"You should have to describe to customers why your AI model made the decision it just made, using the data the consumers just gave you rights to," Merrill says.
He says this could riff on the US Fair Credit Reporting Act, which was drawn up more than 40 years ago to protect fairness, accuracy, and privacy in credit reporting.
"FCRA was written in the 1970s, when it was a very different world, but the notion that you have to describe the decisions you made is the same," Merrill adds.
3) Be as transparent as possible
"You have to convince your consumers that you're doing everything you possibly can to open a window onto the processes of your AI models," he says.
"This is hard to do, but not impossible, and not all data need be revealed. I think this is where the big tech companies fall down. I don't think very many people believe that they are attempting to show most of their hand."
4) Give consumers a way to complain
"It's important for companies to have some different skin in the game, and for consumers to have some different levers to push. It could be legal recourse. It could be transactional recourse. You pick your poison," Merrill explains.
"But there's got to be some way for consumers to get recourse in the event that the AI model's misused [their data]. I think a path to recourse is starkly missing today, as you can see every time there's an information security breach."
Big tech companies are visibly wrestling with the issues provoked by AI. In April, for example, Google's AI ethics board was controversially shut down a week after being founded, while in October last year, Amazon scrapped a secret AI recruitment tool that exhibited bias against women.
Merrill does not think these companies are well placed to design regulation for themselves, simply because they have their own commercial imperatives.
"It's rare for incumbents to be able to understand or envision how regulation should work," he says. "That's because they're essentially trapped in the innovator's dilemma up front, which is that it's hard to tell yourself a story of how regulation's going to make you more money."
Whose job is it to regulate AI?
So who does Merrill think should bear responsibility for regulating AI?
"Complicated self-regulation tends not to work," he says. "So I think that takes [responsibility] off the shoulders of industry groups and onto the shoulders of somebody else.
"In the US, there are two possibilities. One is all the states. The other is the federal government. State-based regulation is really, really hard for tech companies as they're inherently inter-state. So, in the US, that [would make responsibility for AI regulation] a national government thing."
The European Union's General Data Protection Regulation arguably goes part of the way to realising Merrill's proposals. For example, Article 15 of the GDPR requires any company processing an EU citizen's data to explain, on request, the purpose of that processing, and to provide the citizen with a free copy of the personal data being processed.
But, as Merrill says, the EU cannot regulate how AI companies use the personal data of non-EU citizens within Europe.
"In Europe; there's a little bit of the same problem," he says. "Does the EU count Britain? What about Switzerland? What about Turkey? Europe's even less uniform [than the US]. But in general, I think [responsibility lies with] a larger, national-style entity.
"Right now in the US, getting anything through our regulatory system is pretty challenging, because of the particular politics of the US."
Clearly, then, questions remain about how Merrill's proposals would be enforced in practice. Ultimately, though, Merrill says these questions don't stop his proposals from being correct.
"My policies are relatively radically outside the mainstream of conversation. It's obviously harder to get something which is outside the mainstream into the political discussion than it would be to get something into the discussion which is one degree off. But I think they're right.
"All AI models, like humans, make mistakes. We should set up the guardrails now to ensure AI's effective oversight without blocking or stopping it."