Generative AI Models: ChatGPT

By now you've surely heard about ChatGPT. Since its release at the end of 2022, it has taken over the world, and users from all disciplines have been putting it to work on everyday tasks.

ChatGPT is a conversational bot that lets users have human-like interactions with Artificial Intelligence (AI) in natural language. It can answer questions with mostly accurate information (depending on the topic, information can be outdated or wrong). It can also help you debug code, compose a cover letter for a job application, and perform similar tasks.

Concept and Origin

ChatGPT was created by OpenAI, a company dedicated to the research and development of AI technologies. The underlying models have been in development for several years: the first version (GPT-1) was released in 2018, and the latest release, GPT-4, went live in March 2023.

The idea behind ChatGPT was to create a Generative Artificial Intelligence capable of giving human-like responses to natural language inputs. For this purpose, it was pre-trained on large (really enormous) amounts of text data, and deep learning techniques let it learn how to write coherent sentences and use words appropriately. It was also trained to understand how humans talk (well, write), so it's able to understand whatever the user writes as a prompt.

It’s called “generative” because it’s capable of forming responses on its own. It doesn’t rely on a database to select the most appropriate answer; instead, ChatGPT generates its answers in a personalized way, depending on the user’s input. 

ChatGPT’s Architecture

This tool uses a deep learning model known as a Transformer, one of the most widely used architectures in natural language processing and computer vision. It is designed to process sequential input data, weighing the significance of each part of that data with a mechanism called "self-attention".
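To make "self-attention" a little less abstract, here is a minimal sketch of scaled dot-product self-attention written in NumPy. The projection matrices, dimensions, and variable names are toy values chosen for illustration, not anything from OpenAI's actual implementation:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of token vectors.

    x: (seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_head) projection matrices (learned in a real model)
    """
    q = x @ w_q  # queries
    k = x @ w_k  # keys
    v = x @ w_v  # values
    scores = q @ k.T / np.sqrt(k.shape[-1])  # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ v  # each output is a weighted mix of the value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings and an 8-dimensional head
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```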

In addition to the Transformer Architecture, it also includes components such as an MLP (multi-layer perceptron) classifier and a decoding mechanism. The MLP classifier is a feedforward artificial neural network used to fine-tune the model for specific natural language processing tasks.
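As a rough illustration of what a feedforward classification head looks like, here is a tiny sketch that maps a single Transformer output vector to task logits. The layer sizes and ReLU activation are assumptions made for the example, not details published by OpenAI:

```python
import numpy as np

def mlp_classifier(h, w1, b1, w2, b2):
    """Tiny feedforward head: one hidden layer with ReLU, then output logits.

    h: (d_model,) representation produced by the Transformer for one input
    """
    hidden = np.maximum(0, h @ w1 + b1)  # ReLU hidden layer
    return hidden @ w2 + b2              # logits over the task's labels

# Toy usage: map an 8-dim representation to 3 task labels
rng = np.random.default_rng(1)
h = rng.normal(size=8)
logits = mlp_classifier(h, rng.normal(size=(8, 16)), np.zeros(16),
                        rng.normal(size=(16, 3)), np.zeros(3))
print(logits.shape)  # (3,)
```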

How is ChatGPT Trained?

There are two main stages in training a ChatGPT-like model. The first is pre-training. In this stage, a huge chunk of internet text is fed to the model, and it learns to predict how a sentence continues, producing completions that are grammatically correct and semantically similar to the data it saw. After the pre-training stage, the model can complete sentences convincingly, but little more.
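To make "predict how to finish a sentence" concrete, here is a small sketch of the next-token objective: the model's scores over the vocabulary are turned into probabilities, and the loss is low when the true next token gets high probability. The toy vocabulary and made-up logits below are illustrative only:

```python
import numpy as np

def next_token_loss(logits, target_id):
    """Cross-entropy loss for predicting the next token.

    logits: (vocab_size,) unnormalized scores from the model
    target_id: index of the token that actually comes next in the training text
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()              # softmax over the vocabulary
    return -np.log(probs[target_id])  # low loss when the true next token gets high probability

# Toy vocabulary and a fragment like "the cat sat on the ___"
vocab = ["the", "cat", "sat", "on", "mat", "dog"]
logits = np.array([0.1, 0.2, 0.1, 0.0, 2.5, 0.3])   # pretend model output favoring "mat"
print(next_token_loss(logits, vocab.index("mat")))  # small loss; pre-training nudges weights this way
```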

The second stage is fine-tuning. This is a three-part process that converts the pre-trained model into one that can answer questions.

Fine-Tuning

  1. The first step is collecting training data (questions and their answers) and fine-tuning the pre-trained model on it. After seeing this data, the model can answer a question by generating an answer similar to the training data.
  2. The second is collecting even more data, this time questions with multiple answers. A reward model is then trained to rank these answers from most relevant to least relevant (a rough sketch of such a ranking loss follows this list).
  3. Finally, reinforcement learning is used to fine-tune the model so its answers are more accurate overall.
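As an illustration of step 2, reward models are commonly trained with a pairwise ranking loss: the answer humans preferred should receive a higher score than the one they rejected. The sketch below shows that idea in NumPy; it is a simplified stand-in, not OpenAI's exact setup:

```python
import numpy as np

def pairwise_ranking_loss(reward_preferred, reward_rejected):
    """Pairwise loss used to train a reward model from ranked answers.

    The loss is small when the preferred answer's reward is clearly higher
    than the rejected answer's reward (sigmoid of the difference close to 1).
    """
    return -np.log(1.0 / (1.0 + np.exp(-(reward_preferred - reward_rejected))))

# Toy usage: the reward model has scored two answers to the same question
print(pairwise_ranking_loss(reward_preferred=2.0, reward_rejected=0.5))  # low loss, ranking is correct
print(pairwise_ranking_loss(reward_preferred=0.5, reward_rejected=2.0))  # high loss, ranking is wrong
```

In step 3, the reward model's scores then serve as the reinforcement-learning signal that nudges the chat model toward answers humans rate highly.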

[Figure: ChatGPT's process to answer a user-entered prompt.]

The user accesses the platform and starts a conversation with ChatGPT. By entering a prompt in natural language, the experience begins, and ChatGPT starts its process of generating an answer for that specific prompt.

The text entered by the user is sent to a content moderation component, which enforces safety guidelines and filters out inappropriate prompts. If the input is valid, it's sent to the ChatGPT model. If not, it goes straight to template response generation.

When the ChatGPT model has a response ready, it's sent to a content moderation component again, which ensures the generated response is safe, harmless, unbiased, and so on. This whole process happens in milliseconds.

If the output passes content moderation, it’s shown to the user. If not, a template answer will be given to the user.
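Putting the flow above together, here is a hedged sketch of the request path. The helpers is_safe, generate_answer, and template_response are hypothetical stand-ins for components we can only describe from the outside:

```python
def handle_prompt(prompt, is_safe, generate_answer, template_response):
    """Sketch of the moderation flow described above (the arguments are hypothetical stand-ins)."""
    # 1. Input moderation: unsafe prompts never reach the model
    if not is_safe(prompt):
        return template_response()

    # 2. The model generates a candidate answer
    answer = generate_answer(prompt)

    # 3. Output moderation: unsafe generations are replaced by a canned reply
    if not is_safe(answer):
        return template_response()

    return answer

# Toy usage with trivial stand-ins
print(handle_prompt(
    "Explain self-attention in one sentence.",
    is_safe=lambda text: "forbidden" not in text,
    generate_answer=lambda p: "Each token weighs every other token when building its representation.",
    template_response=lambda: "I can't help with that request.",
))
```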

ChatGPT’s Capabilities and Limitations

There are a lot of things you can do with ChatGPT. We'll name some of them, along with some aspects that raise concerns.

Capabilities

One of the most common things people ask ChatGPT to do is impersonate popular figures such as presidents and artists. For example, you can ask the model to describe, in a few words or in tweet format, how Kobe Bryant would have reacted to LeBron James breaking the all-time scoring record.

It's also a helpful tool for comparing two things. For example, you can provide two texts and ask the model whether one is a ripoff of the other.

For programmers, it's an excellent debugging tool: you can paste in code and ask ChatGPT why it isn't running or why it's failing, and it can suggest fixes. It can also explain what sections of code are doing, which is useful when documenting code you've written.

When coding, you can also ask it to find vulnerabilities in your code; the language model explains the reasoning behind its answer so you can understand what was wrong. Coders can also use ChatGPT to translate a Python program into its equivalent in another programming language.

You can ask ChatGPT to simulate a terminal, so instead of deploying a virtual machine, you have the model acting and answering as a VM's terminal would. You can also ask it to simulate a virtual network environment, so you can practice commands without having to build a test network.

While it can do a lot of non-harmful things, ChatGPT doesn't have a real moral compass. It can also be used to create content that is harmful, racist, or sexist, and there will be cases in which its output is seen as insensitive by large groups of people.

It has excellent grammar, so cybercriminals can use it to compose emails that look legitimate and then lace them with harmful links. And since it can help users write code, that code can be malicious or used for cybercrime; ChatGPT won't take responsibility for it, as it's unaware of the creator's intentions.

Limitations 

Sometimes the answers given by ChatGPT look good but are incorrect, since they can be non-factual or outdated.

Sometimes ChatGPT won't answer a given input correctly, but if you rephrase the question a couple of times, the model eventually answers correctly because it finally understands what the user meant.

The model often gives answers that are "too long" instead of simpler ones. This "issue" comes from the training stage, as data scientists tend to prefer longer answers that explain the situation thoroughly. ChatGPT should ask the user to clarify when they provide an ambiguous query; instead, it tries to guess and is sometimes mistaken about the user's intentions.

The moderation component sometimes fails to detect and block certain types of unsafe content, and ChatGPT will never be perfect in this sense, as right and wrong can vary from person to person.
