
Princeton computer science professor says don't panic over 'bullshit generator' ChatGPT

Jan 30, 2023, 23:17 IST
Business Insider
ChatGPT, an AI chatbot, has gone viral in the past two weeks. Getty Images
  • A Princeton professor told The Markup that "bullshit generator" ChatGPT merely presents narratives.
  • He said it can't be relied on for accurate facts, and that it's unlikely to spawn a "revolution."

A professor at Princeton researching the impact of artificial intelligence doesn't believe that OpenAI's popular bot ChatGPT is a death knell for industries.

While such tools are more accessible than ever, and can instantaneously package voluminous information and even produce creative works, they can't be trusted for accurate information, Princeton professor Arvind Narayanan said in an interview with The Markup.

"It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not," he said.

Experts who study AI have said that products like ChatGPT, which are part of a category of large language model tools that can respond to human commands and produce creative output, work by simply making predictions about what to say, rather than synthesizing ideas like human brains do.

Narayanan said this makes ChatGPT more of a "bullshit generator" that presents its responses without regard for their accuracy.


But there are some early indications for how companies will adopt this type of technology.

For instance, Buzzfeed, which in December reportedly laid off 12% of its workforce, will use OpenAI's technology to help make quizzes, according to the Wall Street Journal. The tech reviews site CNET published AI-generated stories and had to correct them later, The Washington Post reported.

Narayanan cited the CNET case as an example of the pitfalls of this type of technology. "When you combine that with the fact that the tool doesn't have a good notion of truth, it's a recipe for disaster," he told The Markup.

He said that a more likely outcome of large language model tools would be industries changing in response to their use, rather than being fully replaced.

"Even with something as profound as the internet or search engines or smartphones, it's turned out to be an adaptation, where we maximize the benefits and try to minimize the risks, rather than some kind of revolution," he told The Markup. "I don't think large language models are even on that scale. There can potentially be massive shifts, benefits, and risks in many industries, but I cannot see a scenario where this is a 'sky is falling' kind of issue."


The Markup's full interview with Narayanan is worth reading in full.
