

Google's big fear is AI running wild. After ChatGPT, it's too late.

Jan 19, 2023, 20:48 IST
Business Insider
  • Google executives are warning against irresponsibly releasing AI tools.
  • It comes after rival OpenAI released buzzy chatbot ChatGPT in November.

It's becoming clearer that OpenAI's release of generative AI bot ChatGPT has put Google on high alert.

On Monday, some of Google's most senior executives, including CEO Sundar Pichai, senior vice president James Manyika, and the chief executive of its AI research unit DeepMind, Demis Hassabis, published an explainer on their approach to AI research, titled "Why we focus on AI (and to what end)."

"We understand that AI, as a still-emerging technology, poses various and evolving complexities and risks," the post reads. "Our development and use of AI must address these risks. That's why we as a company consider it an imperative to pursue AI responsibly."

That the post comes after months of headlines about scary-smart chatbot ChatGPT — and the havoc it's wreaking — is not a coincidence.

As ChatGPT has demonstrated, easy-to-use generative AI can be practical and productive. It can also be weaponized and has already been used to cheat on homework, write flawless phishing emails, and write custom malware. OpenAI has hardcoded content filters into ChatGPT to prevent people from using it for ill ends, but bad actors have found these guardrails relatively trivial to bypass.


With artificial intelligence likely to underpin much of Google's future business, the search and ads giant is sensitive to the competition and any "risky" AI rollouts that may draw more public and political scrutiny. From Google's perspective, it is both good business and moral sense to issue warnings about the careless use of AI and position itself as the responsible custodian.

Unfortunately, it's probably too late.

Competing tools like ChatGPT are a headache for Google

Google uses artificial intelligence to aid with everything from natural language queries for search to curating feeds on YouTube. These use cases, according to its post, are grounded in the firm's stated mission to "organize the world's information and make it universally accessible and useful."

But AI, which it describes as "an early-stage technology," also has evolving capabilities and uses with the potential for "misapplication, misuse, and unintended or unforeseen consequences."

A non-exhaustive list of potential problems: inaccuracies, amplifying societal biases, cybersecurity risks, and AI acting as a driver of inequality.


This is a "thinly veiled swipe at OpenAI and ChatGPT", according to a research note from Richard Windsor of Radio Free Mobile.

And the issue in a nutshell: "Google has a point because if ChatGPT causes repeated issues, it could undermine the public's trust in AI, which would obviously also reflect badly on Google," Windsor said.

In other words: If ChatGPT and its successors cause widespread havoc, it'll ruin AI adoption for everyone, including Google.

Professor Michael Wooldridge, director of foundational AI research at the Turing Institute, told Insider that while Google is "possibly responding to the enormous publicity that ChatGPT has got," bad outcomes in the past also give it reason to err on the side of caution.

He points to Meta's (formerly Facebook's) AI language model Galactica, which was used to write scientific papers and produced "very plausible looking nonsense." The model created the possibility that someone could "flood scientific journals and conferences with junk." Meta pulled the tool.


"It seems entirely plausible that [Google] are trying to flag up that they are the responsible operators in AI, not willing to release systems until they know they are safe and reliable," Wooldridge said.

There'll be more ChatGPT moments

There is a flood of money pouring into generative AI startups promising real-world applications.

And the same day Google published its blog post urging caution, Microsoft announced the general availability of Azure's OpenAI service in what it said would mark its "continued commitment to democratizing AI."

In practice, its announcement means more businesses using Azure can now easily access tools like ChatGPT, as well as AI image generator DALL-E 2.

"ChatGPT is coming soon to the Azure OpenAI Service, which is now generally available, as we help customers apply the world's most advanced AI models to their own business imperatives," Microsoft chairman and CEO Satya Nadella said.


There has been talk, too, of Microsoft starting to deploy ChatGPT across its full suite of Office products.

All of this is spurring Google. In an interview with Time, DeepMind's Hassabis hinted at a private beta release of a ChatGPT-like bot called Sparrow this year, with a view to incorporating features that ChatGPT lacks.

In Windsor's view, Google seems to regard OpenAI, a firm cofounded by the likes of Silicon Valley's hot-button CEO Elon Musk, as "a bunch of cowboys willing to release anything for general consumption." (Musk gave up his stake in the startup in 2018.)

But to continue the cowboy metaphor, Google's call for restraint in AI development amounts to shutting the barn door after the horse has bolted. The general public can already access powerful AI tools — and there's no going back.
