What is Generative AI, the technology behind OpenAI’s ChatGPT?

WHAT IS GENERATIVE AI?

Like other forms of artificial intelligence, generative AI learns how to take actions from past data. It creates brand-new content – text, an image, even computer code – based on that training, instead of simply categorizing or identifying data as other AI does.

The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The AI powering it is known as a large language model because it takes in a text prompt and from that writes a human-like response.

GPT-4, a newer model that OpenAI announced this week, is “multimodal” because it can perceive not only text but images as well. OpenAI’s president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up for a website he wanted to build, and from that generate a real one.

WHAT IS IT GOOD FOR?

Demonstrations aside, businesses are already putting generative AI to work.

The technology is helpful for creating a first draft of marketing copy, for instance, though it may require cleanup because it isn’t perfect. One example is from CarMax Inc (KMX.N), which has used a version of OpenAI’s technology to summarize thousands of customer reviews and help shoppers decide what used car to buy.

Generative AI likewise can take notes during a virtual meeting. It can draft and personalize emails, and it can create slide presentations. Microsoft Corp and Alphabet Inc’s Google each demonstrated these features in product announcements this week.

WHAT’S WRONG WITH THAT?


Nothing, although there is concern about the technology’s potential abuse.

School systems have fretted about students turning in AI-drafted essays, undermining the hard work required for them to learn. Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to produce far more disinformation than before.

At the same time, the technology itself is prone to making mistakes. Factual inaccuracies delivered confidently by AI, called “hallucinations,” and responses that seem erratic, like professing love to a user, are all reasons why companies have aimed to test the technology before making it widely available.

IS THIS JUST ABOUT GOOGLE AND MICROSOFT?

Those two companies are at the forefront of research and investment in large language models, as well as the biggest to put generative AI into widely used software such as Gmail and Microsoft Word. But they are not alone.

Large companies like Salesforce Inc (CRM.N) as well as smaller ones like Adept AI Labs are either creating their own competing AI or packaging technology from others to give users new powers through software.

HOW IS ELON MUSK INVOLVED?

He was one of the co-founders of OpenAI along with Sam Altman. But the billionaire left the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and the AI research being done by Tesla Inc (TSLA.O) – the electric-vehicle maker he leads.

Musk has expressed concerns about the future of AI and advocated for a regulatory authority to ensure that development of the technology serves the public interest.

“It’s quite a dangerous technology. I fear I may have done some things to accelerate it,” he said towards the end of Tesla Inc’s (TSLA.O) Investor Day event earlier this month.

“Tesla’s doing good things in AI, I don’t know, this one stresses me out, not sure what more to say about it.”

The Future is Here: ChatGPT — AI-Powered Chatbot

ChatGPT is a state-of-the-art AI chatbot developed by OpenAI. It is powered by GPT-3.5, which has been trained on a wide variety of text sources, allowing it to generate natural, human-like text. ChatGPT was also trained using human feedback (a technique called Reinforcement Learning from Human Feedback) so that the AI learned what humans expected when they asked a question.

It is a powerful tool that can help you with tasks such as composing emails, essays, and code. It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human. Large language models (LLMs) are trained with massive amounts of data to accurately predict what word comes next in a sentence.
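The “predict what word comes next” idea can be illustrated with a toy sketch. This is not a real LLM – it only counts word pairs in a tiny made-up corpus – but real models perform the same next-token prediction task with neural networks and vastly more data:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which (a simple bigram model).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

An LLM differs from this sketch in scale and mechanism, not in the basic objective: given the text so far, assign probabilities to what comes next.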

Can ChatGPT replace humans?

While ChatGPT is a powerful tool, it cannot replace human creativity and expression.

ChatGPT is a natural language generator that is capable of producing text that is remarkably detailed and human-like. However, it will never be able to completely replace human writers and editors, as it requires human input in order to generate original content.

ChatGPT is best used as a supplement to human creativity and expression, rather than a replacement for it.

Use both humans and ChatGPT

One major advantage of using ChatGPT is that it can help you create content quickly and efficiently, generating natural, human-like text. Additionally, ChatGPT can help you create content that is engaging and distinctive. Furthermore, ChatGPT can be used to compose emails, essays, and code, making it a useful tool for busy professionals and writers.

What is ChatGPT? AI technology sends schools scrambling to preserve learning

Not even two months after its creation, a new artificial intelligence (AI) technology called ChatGPT is getting banned from schools and stirring controversy among educators. 

ChatGPT, a free and easy-to-use AI chat tool, hit the ground running when it was launched to the public in November. A user types in a question and ChatGPT spits back an easily understandable answer in essay format.

Although a huge advancement in the technology field, educators and school systems must grapple with the new tool and the challenges it introduces.

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” Jenna Lyle, a spokesperson for New York City’s Department of Education, said.

New York City and Seattle public schools have banned the use of ChatGPT from their devices and networks, citing concerns about cheating and a negative impact on learning.

How ChatGPT became popular so quickly

Adam Conner, vice president for Technology Policy at the Center for American Progress, said ChatGPT became popular so quickly because it is one of the first AI technologies of its kind to be available to the public in a way the public can understand.

“What is different about GPT is that it is generative, that it creates the kind of outputs in ways that normal human beings understand as opposed to [the technology] just kind of outputting code or data” that only a subset of the population understands, Conner said. 

Unlike other search engines, such as Google, ChatGPT can be conversational, giving human-like responses and dialogue with a user. A user can ask ChatGPT to create a resignation letter, discussion prompts for classes and even tests for students. 

Jim Chilton, CTO of Cengage Group, an education technology company, says ChatGPT can be thought of as a “virtual best friend.”

“I did this with a calculus example, ‘generate me a calculus final exam.’ Not only did it generate it, but it also was able to answer each of the problems that it gave me. It explained step by step how it solved the calculus problem, reminding me of the principles as you went through to solve the problem.”

Cheating and learning concerns

What makes ChatGPT a challenge for educators is that the AI technology comes up with unique wording for answers to the same question.

For example, when asking ChatGPT “What is an apple?”, one response begins, “An apple is a fruit that grows on a tree in the rose family, and is typically round and red, green, or yellow in color.” When asked the same question again, ChatGPT starts, “An apple is a pomaceous fruit, meaning it is produced by a deciduous tree in the rose family, cultivars of the species Malus domestica.”

These varying answers, which may all be correct, make it supremely difficult for an educator to discern whether a student used ChatGPT to write an assigned essay. 
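The varying wordings arise because such models sample from a probability distribution over possible continuations rather than always picking the single most likely one. A toy sketch of that idea – the openings and probabilities below are hypothetical, not ChatGPT’s actual distribution:

```python
import random

# Hypothetical candidate openings and their probabilities; a real model
# assigns probabilities to thousands of possible next tokens instead.
openings = [
    "An apple is a fruit",
    "An apple is a pomaceous fruit",
    "An apple is a round fruit",
]
weights = [0.5, 0.3, 0.2]

# Two independent sampling runs (seeded here for reproducibility)
# can pick different, equally valid openings for the same question.
random.seed(1)
a = random.choices(openings, weights)[0]
random.seed(2)
b = random.choices(openings, weights)[0]
print(a)
print(b)
```

Because every run is a fresh draw, two students asking the same question can receive answers that share no distinctive phrasing, which is what defeats simple text-matching plagiarism checks.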

In a statement given to The Hill, an OpenAI spokesperson said the company is already working with educators to address their concerns about ChatGPT.

“We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system. We look forward to working with educators on useful solutions, and other ways to help teachers and students benefit from artificial intelligence,” the spokesperson said.

While technologies continue to be created to catch plagiarism or cheating with AI, an arguably bigger concern is students using ChatGPT and not learning the material.

“It’s worrying that they’re not learning the research skills, the critical thinking skills. I think this would be the highest concern. The reason why we have them write these papers isn’t for them to write papers. It’s to really build those skills around thinking,” Sean Glantz, a regional chapter support coordinator for the Computer Science Teachers Association (CSTA), said of students. 

ChatGPT isn’t always right

ChatGPT is a machine learning model, meaning it improves with increased interaction with users on the platform. 

ChatGPT evolves with human interactions, with its creators saying this “dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.”

As it learns, it can produce incorrect information. While a concern in some ways, this can actually be a benefit to teachers. 

Glantz, who is also a high school computer science teacher in California, says the incorrect information ChatGPT gives may help teach students to fact-check statements and learn more about the technology they are using. 

“When this thing gives us an incredibly convincing answer, and it’s totally wrong, well, ‘How did it arrive at that?’ That provides an opportunity to get into a discussion around what is the language learning model? What is artificial intelligence, right? What is machine learning?” Glantz said.

Because ChatGPT is a large language model that is still learning, the errors are also a sign that the technology is working as it should.

It is “validation of the technology and its current maturity state, and I think we will get you to see it get smarter over time, particularly as it learns and gets more material, more information, more facts for you to build its intelligence upon,” Chilton said.

Are the schools’ bans useful?

While some believe a ban may have merit, at least temporarily, given the rapid adoption of ChatGPT among students, experts and teachers say the bans do not seem useful or equitable in the long term.

Although Conner said he does believe the bans on ChatGPT have “sort of a purpose,” he said, “everybody knows it’s not a universal solution.”

One major issue with the bans, Glantz said, is “equity and access.”

When a school bans ChatGPT, it can only enforce the ban on school computers and WiFi. While this effectively blocks students who don’t have access to technology outside of school, many students have personal devices at home they can use to reach the AI technology.

“The students that are most impacted when a piece of software like ChatGPT is banned on school computers and school WiFi, that affects the kids that only have access to technology when they’re at school, using school technology,” Glantz said.

Glantz said he has seen some students go as far as to use a WiFi hotspot in school to get around the ban.

Teaching students how to use ChatGPT is also important because this type of technology could be important for jobs in the future, so “making sure that we’re giving the students those skill sets to leverage technology is going to be really important,” Glantz said.

Maneuvering around or with ChatGPT may be the beginning of figuring out the relationship between schools and AI technology.

“The decisions going forward with how to address ChatGPT and AI in schools will have to be a responsibility that falls on the company, educators, parents and administrators,” according to Conner.

What is ChatGPT, the AI chatbot everyone’s talking about

ChatGPT’s ability to generate human-like responses has generated much buzz. Here’s what the AI-based chatbot is all about.

Artificial Intelligence (AI) research company OpenAI on Wednesday announced ChatGPT, a prototype dialogue-based AI chatbot capable of understanding natural language and responding in natural language. It has since taken the internet by storm, crossing a million users in less than a week. Most users are marvelling at how intelligent the AI-powered bot sounds. Some have even called it a replacement for Google, since it’s capable of giving solutions to complex problems directly – almost like a personal know-it-all teacher.

“We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI wrote on its announcement page for ChatGPT.

What is ChatGPT?

ChatGPT is based on GPT-3.5, a language model that uses deep learning to produce human-like text. However, while the older GPT-3 model only took a text prompt and tried to continue it with its own generated text, ChatGPT is more engaging. It’s much better at generating detailed text and can even come up with poems. Another unique characteristic is memory: the bot can remember earlier comments in a conversation and recount them to the user.
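This kind of conversational “memory” is commonly implemented by re-sending the accumulated dialogue as context with each new request. A minimal sketch of the pattern, using a stand-in `fake_model` function rather than any real API:

```python
# Stand-in for a real language model: a real model would generate a
# reply conditioned on everything in `context`.
def fake_model(context):
    return f"(reply after seeing {len(context)} messages)"

# The conversation so far, accumulated turn by turn.
history = []

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = fake_model(history)  # the model sees ALL earlier turns
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Ada.")
print(chat("What is my name?"))  # earlier turn is in `history`, so a real
                                 # model could answer "Ada"
```

The model itself is stateless; the apparent memory comes entirely from feeding the growing `history` back in on every turn, which is also why very long conversations eventually exceed a model’s context window.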

But even in its beta testing phase, ChatGPT’s abilities are already quite remarkable. Beyond amusing responses, people are already finding real-world applications and use cases for the bot.

The limitations

But while many people were in awe of the bot’s abilities, some were also quick to spot its limitations. ChatGPT is still prone to misinformation and biases, something that plagued previous versions of GPT as well. The model can give incorrect answers to, say, algebraic problems. And because it appears so confident in its super-detailed answers, people can easily be misled into believing those are true.

OpenAI understands these flaws and has noted them down on its announcement blog: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”