
How do AI chatbots like ChatGPT work? Here's a quick explainer.

Oct 10, 2023, 00:14 IST
Business Insider
AI-driven chatbots make calculations and draw on extensive training, some provided by humans, to make predictions on what to say. (Laurence Dutton/Getty)
  • AI chatbots like OpenAI's ChatGPT are based on large language models that are fed a lot of information.
  • They're also trained by humans who guide the system to spit out appropriate and accurate responses.

ChatGPT and other chatbots driven by artificial intelligence can speak in fluent, grammatically sound sentences that may even have a natural rhythm to them.

But don't be lulled into mistaking that well-executed speech for thought, emotion, or even intent, experts say.

A chatbot is, in essence, no more than a machine performing mathematical calculations and statistical analysis to call up the right words and sentences. Bots like ChatGPT are trained on large amounts of text, which allows them to interact with human users in a natural way. There's also a lot of training done by humans, who help smooth out any wrinkles.

OpenAI, the company behind ChatGPT, says on its website that its models are instructed on information from a range of sources, including from its users and material it has licensed.

Here's how AI chatbots work

AI chatbots like OpenAI's ChatGPT are based on large language models, or LLMs, which are programs trained on large volumes of text drawn from published writing and online sources, most of it produced by humans.


The systems are trained on sequences of words and learn how important each word is within those sequences, experts said. All of that absorbed text not only gives large language models factual information, but also helps them divine patterns of speech and how words are typically used and grouped together.
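A toy sketch can make that idea concrete. The snippet below simply counts which words follow which in a tiny sample text, then predicts a next word from those statistics. This is a deliberately simplified illustration of learning word-grouping patterns from sequences; real LLMs use neural networks trained on vastly more data, not explicit count tables.

```python
import random
from collections import defaultdict

# Tiny stand-in for "training data" (real models ingest billions of words).
training_text = "the cat sat on the mat the cat ate the fish"

# Count how often each word follows each other word.
follow_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Pick a plausible next word, weighted by how often it followed `word`."""
    candidates = follow_counts[word]
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

print(predict_next("the"))  # "cat", "mat", or "fish", weighted by frequency
```

Even this crude model has "learned" that "cat" follows "the" more often than "fish" does, which is the statistical core of what the experts describe.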

Chatbots are further trained by humans on how to provide appropriate responses and limit harmful messages.

One AI data trainer who works at Invisible Technologies, a company contracted to train ChatGPT, previously told Insider they are tasked with identifying factual inaccuracies; spelling and grammar errors; and harassment when testing the chatbot's responses.

"You can say, 'This is toxic, this is too political, this is opinion,' and frame it not to generate those things," said Kristian Hammond, a computer science professor at Northwestern University. Hammond is also the director of the university's Center for Advancing Safety of Machine Intelligence.

When you ask a chatbot to answer a simple factual question, the recall process can be straightforward: It deploys a set of algorithms to choose the most likely sentence to respond with. It selects its best possible responses within milliseconds and, from those top choices, presents one at random. That's why asking the same question repeatedly can generate slightly different answers.
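That "one of the top choices, at random" step can be sketched in a few lines. The candidate replies and scores below are made up for illustration; real chatbots score individual words (tokens) rather than whole sentences, and the amount of randomness is usually a tunable setting often called "temperature."

```python
import random

# Hypothetical candidate replies with made-up likelihood scores.
candidates = {
    "Paris is the capital of France.": 0.90,
    "The capital of France is Paris.": 0.08,
    "France's capital city is Paris.": 0.02,
}

def pick_response(scored):
    """Sample one reply, favoring higher-scored candidates."""
    replies = list(scored)
    weights = list(scored.values())
    return random.choices(replies, weights=weights)[0]

# Asking the same question twice can yield different (but similar) answers:
print(pick_response(candidates))
print(pick_response(candidates))
```

Because the pick is weighted but still random, repeated questions occasionally surface one of the lower-ranked phrasings, which is exactly the behavior the article describes.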


Chatbots can also break down questions into multiple parts and answer each part in sequence, as if thinking through the question.

Say you asked the bot to name a US president who shares the first name of the male lead actor of the movie "Camelot." The bot might answer first that the actor in question is Richard Harris. It will then use that answer to give you Richard Nixon as the answer to your original question, Hammond said.
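Hammond's "Camelot" example amounts to a two-step lookup: resolve the sub-question first, then feed its answer into the original question. The sketch below fakes this with an explicit `facts` dictionary, which is purely hypothetical; a real chatbot derives such facts from its training rather than from any stored table.

```python
# Hypothetical stand-in for knowledge a model absorbed during training.
facts = {
    "male lead actor of Camelot": "Richard Harris",
    "US president named Richard": "Richard Nixon",
}

def answer_in_steps():
    # Step 1: answer the sub-question — who starred in "Camelot"?
    actor = facts["male lead actor of Camelot"]
    first_name = actor.split()[0]  # "Richard"
    # Step 2: use that result to answer the original question.
    return facts[f"US president named {first_name}"]

print(answer_in_steps())  # Richard Nixon
```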

Chatbots aren't perfect — and they can get stuff wrong

AI chatbots run into the most trouble when asked questions they don't have the answer to. They simply don't know what they don't know, so instead of refusing to answer, they extrapolate, based on what they do know, and make a guess.

The issue is that they don't tell you they're guessing — they may simply present information as fact. When a chatbot invents information that it presents to a user as factual, it's called a "hallucination."

The possibility of ChatGPT spitting out a hallucination is, in part, why some tech experts warn chatbot users to be careful. In a recent Boston Consulting Group study, researchers found that people using ChatGPT at work can actually perform worse on certain tasks if they take the chatbot's outputs at face value and don't screen them for errors.


"This is what we call knowledge of knowledge or metacognition," said William Wang, an associate professor teaching computer science at the University of California, Santa Barbara. He's also a co-director of the university's natural language processing group.

"The model doesn't really understand the known unknowns very well," he said.
