What is ChatGPT, the AI chatbot everyone’s talking about

ChatGPT’s ability to produce human-like responses has generated plenty of buzz. Here’s what the AI-based chatbot is all about.

Artificial Intelligence (AI) research company OpenAI on Wednesday announced ChatGPT, a prototype dialogue-based AI chatbot capable of understanding natural language and responding in kind. It has since taken the internet by storm, crossing a million users in less than a week. Most users are marvelling at how intelligent the AI-powered bot sounds. Some have even called it a replacement for Google, since it can give solutions to complex problems directly – almost like a personal know-all teacher.

“We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI wrote on its announcement page for ChatGPT.

What is ChatGPT?

ChatGPT is based on GPT-3.5, a language model that uses deep learning to produce human-like text. However, while the older GPT-3 model only took a text prompt and tried to continue it with its own generated text, ChatGPT is built for dialogue and is far more engaging. It is much better at generating detailed text and can even come up with poems. Another distinguishing characteristic is memory: the bot can remember earlier comments in a conversation and refer back to them when responding to the user.
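In dialogue systems of this kind, that “memory” usually comes from how the model is prompted rather than from a separate store: earlier turns are fed back in alongside each new message, so the model can see and refer to them. The sketch below is a hypothetical illustration of that general pattern, not a description of ChatGPT’s actual implementation; the `generate` function is a placeholder standing in for whatever completion call the real model exposes.

```python
# Illustrative sketch of conversational "memory" via prompt concatenation.
# The model itself is stateless here; prior turns are re-sent every time.

def generate(prompt: str) -> str:
    """Placeholder for a call to an underlying language model."""
    return "(model reply to: " + prompt[-60:] + ")"

def chat() -> None:
    history = []  # list of (speaker, text) turns seen so far
    while True:
        user_text = input("You: ")
        if not user_text:
            break
        history.append(("User", user_text))
        # Flatten the whole dialogue into one prompt so earlier comments
        # remain visible to the model on every turn.
        prompt = "\n".join(f"{speaker}: {text}" for speaker, text in history)
        prompt += "\nAssistant:"
        reply = generate(prompt)
        history.append(("Assistant", reply))
        print("Assistant:", reply)

if __name__ == "__main__":
    chat()
```

Because the full history is resent on every turn, a follow-up question like “can you shorten that?” works without the user restating the original request – which is the behaviour people noticed when the bot recalled earlier parts of a conversation.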

But even in its beta-testing phase, ChatGPT’s abilities are already quite remarkable. Aside from amusing responses, people are already finding real-world applications and use cases for the bot.

The limitations

But while many people were in awe of the bot’s abilities, some were also quick to spot its limitations. ChatGPT is still prone to misinformation and biases, something that plagued previous versions of GPT as well. The model can give incorrect answers to, say, algebraic problems. And because it sounds so confident in its highly detailed answers, people can easily be misled into believing they are true.

OpenAI acknowledges these flaws and has noted them in its announcement blog post: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”