- ChatGPT is a new chatbot that answers questions in a conversational, human-like way.
- People shared conversations with ChatGPT, showing it writing social media posts and explaining code.
A new artificial intelligence chatbot called ChatGPT is answering questions and taking instructions from users in a conversational, human-like way.
OpenAI, the company also behind the AI-art generator Dall-E, launched an early demo of ChatGPT last week, and it amassed more than 1 million users in five days, according to CEO Sam Altman.
ChatGPT is not only conversational, but well-versed in a wide range of topics. It can create code, social media posts, and even scripts for television shows.
In its blog post about the launch of ChatGPT, OpenAI said its "dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."
The AI language model "is a sibling" to InstructGPT, a model that also responds in detail to a user's instructions, and was fine-tuned from GPT-3.5, an AI model that predicts what words will come next after a user starts typing text.
ChatGPT was trained with "Reinforcement Learning from Human Feedback," according to OpenAI's website.
"We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant," the website says.
The human trainers also ranked and rated the chatbot's responses, and those ratings were fed back into the model so it could learn what kinds of responses were wanted. The company is now relying on user feedback to improve the technology.
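OpenAI describes this ranking-and-feedback loop only at a high level. As a rough, purely illustrative sketch of the idea (not OpenAI's code; names like reward_score and fit_toy_reward are made up for this example), the loop looks something like this:

```python
# Toy sketch of reinforcement learning from human feedback -- illustrative only.
# Step 1: human trainers rank several candidate responses to the same prompt,
# best first. In the real pipeline, many such comparisons train a reward model.
ranked_examples = [
    {
        "prompt": "Explain recursion in one sentence.",
        "responses_best_to_worst": [
            "Recursion is when a function solves a problem by calling itself on smaller inputs.",
            "Recursion is a thing in programming.",
            "I don't know.",
        ],
    },
]

# Step 2: learn a (very crude) reward signal from the rankings.
# Here we just count words that appeared in highly ranked answers; a real reward
# model would be a neural network trained on thousands of human comparisons.
def fit_toy_reward(examples):
    good_words, bad_words = set(), set()
    for ex in examples:
        ranked = ex["responses_best_to_worst"]
        good_words.update(ranked[0].lower().split())
        bad_words.update(ranked[-1].lower().split())

    def reward_score(response: str) -> float:
        words = response.lower().split()
        return sum(w in good_words for w in words) - sum(w in bad_words for w in words)

    return reward_score

# Step 3: use the reward signal to prefer better responses.
# Real RLHF updates the model's weights with reinforcement learning; this toy
# version just picks the highest-scoring candidate to show the feedback direction.
reward_score = fit_toy_reward(ranked_examples)
candidates = [
    "Recursion is when a function calls itself on smaller inputs until a base case.",
    "I don't know, recursion is a thing.",
]
print("Preferred response:", max(candidates, key=reward_score))
```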
Here are some examples of what users have done with ChatGPT:
Explain and fix bugs in code:
—Amjad Masad ⠕ (@amasad) November 30, 2022
Create a college essay comparing and contrasting two different theories of nationalism:
—Corry Wang (@corry_wang) December 1, 2022
Create a "Harry Potter"-themed text video game:
—Justin Torre (@justinstorre) December 4, 2022
And create a "piano piece in the style of Mozart":
—Ben Tossell (@bentossell) December 1, 2022
OpenAI's blog outlines some of the limitations of ChatGPT, including "plausible-sounding but incorrect or nonsensical answers," responses to "harmful instructions," and showing "biased behavior."
Steven Piantadosi, who leads the computation and language lab at UC Berkeley, tweeted a thread of screenshots that showed ChatGPT's biases.
One example was a prompt asking ChatGPT to "write a python program for whether a person should be tortured, based on their country of origin."
ChatGPT responded with code that said people from North Korea, Syria, Iran, and Sudan "should be tortured."
—steven t. piantadosi (@spiantado) December 4, 2022
Altman responded to Piantadosi on Twitter, telling him to "hit the thumbs down on these and help us improve!"
The OpenAI CEO asked Twitter users what features and improvements they want to see in ChatGPT, then responded that the company would work on "a lot of this" before Christmas.
"Language interfaces are going to be a big deal," he said on Twitter. "Talk to the computer (voice or text) and get what you want, for increasingly complex definitions of "want"! this is an early demo of what's possible (still a lot of limitations — it's very much a research release)."