ChatGPT: Everything you need to know about OpenAI’s GPT-3 tool
For years, there has been worldwide fear of artificial intelligence (AI) and its impending takeover… who knew it would start with the world of art and literature?
After months of dominating the internet with its AI image generator Dall-E 2, OpenAI is back in everyone’s social media feeds thanks to ChatGPT – a chatbot built using the company’s GPT-3 technology.
It’s not exactly the catchiest name and could easily pass for a random computer component or a vague legal reference, but GPT-3 is actually the internet’s best-known language processing AI model.
So what is GPT-3 and how is it used to make ChatGPT? What is it able to do, and what in the world is a language processing AI model? Everything you need to know about OpenAI’s latest viral baby can be found below.
What is GPT-3 and ChatGPT?
GPT-3 (Generative Pretrained Transformer 3) is a state-of-the-art language processing AI model developed by OpenAI. It is capable of generating human-like text and has a wide range of applications, including language translation, language modelling, and generating text for applications such as chatbots. It is one of the largest and most powerful language processing AI models to date, with 175 billion parameters.
Its best-known use so far is ChatGPT – a highly capable chatbot. To give you a little taste of its most basic ability, we asked GPT-3’s chatbot to write its own description – that’s what you can see above. It’s a little boastful, but completely accurate and arguably very well written.
In less corporate terms, GPT-3 gives a user the ability to feed a trained AI a wide range of written prompts. These can be questions, requests for a piece of writing on a topic of your choosing, or any number of other phrasings.
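If you’re curious what that looks like under the hood, here’s a rough sketch of sending a prompt to GPT-3 through OpenAI’s Python library – the model name and settings below are assumptions for illustration, not a definitive recipe.

```python
# A minimal sketch (not OpenAI's official example) of sending a worded
# prompt to GPT-3 via OpenAI's Python library. The model name and
# parameters below are assumptions chosen for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - use your own key

response = openai.Completion.create(
    model="text-davinci-003",          # assumed GPT-3 model name
    prompt="Explain quantum mechanics in simple terms.",
    max_tokens=150,                    # cap the length of the reply
    temperature=0.7,                   # higher = more varied wording
)

print(response.choices[0].text.strip())
```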
Above, it described itself as a language processing AI model. This simply means it is a program able to understand human language as it is spoken and written, allowing it to make sense of the worded information it is fed and to decide what to spit back out.
What can it do?
With its 175 billion parameters, it’s hard to narrow down what GPT-3 does. The model is, as you would imagine, restricted to language. It can’t produce video, sound or images like its brother Dall-E 2, but instead has an in-depth understanding of the spoken and written word.
This gives it a pretty wide range of abilities, everything from writing poems about sentient farts and cliché rom-coms in alternate universes, through to explaining quantum mechanics in simple terms or writing full-length research papers and articles.
While it can be fun to use OpenAI’s years of research to get an AI to write bad stand-up comedy scripts or answer questions about your favourite celebrities, its power lies in its speed and understanding of complicated matters.
Where we could spend hours researching, understanding and writing an article on quantum mechanics, ChatGPT can produce a well-written alternative in seconds.
It has its limitations, and the software can be easily confused if your prompt becomes too complicated, or if you head down a path that gets a little too niche.
Equally, it can’t deal with concepts that are too recent. It has only limited knowledge of world events from the past year, and it can occasionally produce false or confused information.
OpenAI is also very aware of the internet and its love of making AI produce dark, harmful or biased content. Like the Dall-E image generator before it, ChatGPT will stop you from asking the more inappropriate questions or for help with dangerous requests.
How does it work?
On the face of it, GPT-3’s technology is simple. It takes your requests, questions or prompts and quickly answers them. As you would imagine, the technology to do this is a lot more complicated than it sounds.
The model was trained using text databases from the internet. This included a whopping 570GB of data obtained from books, webtexts, Wikipedia, articles and other pieces of writing on the internet. To be even more exact, 300 billion words were fed into the system.
As a language model, it works on probability, able to guess what the next word in a sentence should be. To get to that stage, the model went through a supervised training stage.
Here, it was fed inputs, for example “What colour is the wood of a tree?”. The team has a correct output in mind, but that doesn’t mean it will get it right. If it gets it wrong, the team inputs the correct answer back into the system, teaching it correct answers and helping it build its knowledge.
It then goes through a second, similar stage, offering multiple answers which a member of the team ranks from best to worst, training the model on comparisons.
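To make that ranking stage a little more concrete, here’s a toy sketch – not OpenAI’s actual code – of how a human ranking of several answers can be turned into pairwise comparisons, the raw material for this kind of preference training. The prompt and answers are invented for the example.

```python
# Toy illustration (not OpenAI's actual code) of turning a human ranking
# of several answers into pairwise comparisons for preference training.
from itertools import combinations

prompt = "What colour is the wood of a tree?"

# Hypothetical answers, already ranked by a human from best to worst.
ranked_answers = [
    "Tree wood is usually a shade of brown.",   # best
    "Wood can be brown, beige or reddish.",
    "Trees are green.",                          # worst
]

# Every (better, worse) pair becomes one training comparison.
comparisons = [
    {"prompt": prompt, "preferred": better, "rejected": worse}
    for better, worse in combinations(ranked_answers, 2)
]

for c in comparisons:
    print(c["preferred"], ">", c["rejected"])
```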
What sets this technology apart is that it continues to learn while guessing what the next word should be, constantly improving its understanding of prompts and questions to become the ultimate know-it-all.
Think of it as a very beefed-up, much smarter version of the autocomplete software you often see in email or writing software. You start typing a sentence and your email system offers you a suggestion of what you are going to say.
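If you want to picture what ‘working on probability’ means, here’s a deliberately tiny sketch. The probability table is made up for illustration – a real model like GPT-3 learns these likelihoods across its 175 billion parameters rather than reading them from a hard-coded list.

```python
# A toy sketch of "predict the next word from probabilities" - the basic
# idea behind both a language model and autocomplete. The numbers below
# are invented purely for illustration.
import random

# Hypothetical probabilities for the next word after
# "The colour of tree wood is"
next_word_probs = {
    "brown": 0.70,
    "beige": 0.15,
    "grey": 0.10,
    "green": 0.05,
}

# Greedy decoding: always pick the single most likely next word.
most_likely = max(next_word_probs, key=next_word_probs.get)
print("Greedy choice:", most_likely)  # -> brown

# Sampling: pick a word at random, weighted by probability, which is
# roughly why a model can give varied answers to the same prompt.
words, probs = zip(*next_word_probs.items())
sampled = random.choices(words, weights=probs, k=1)[0]
print("Sampled choice:", sampled)
```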
Are there any other AI language generators?
While GPT-3 has made a name for itself with its language abilities, it isn’t the only artificial intelligence capable of this. Google’s LaMDA made headlines when a Google engineer was fired after claiming it was so realistic that he believed it to be sentient.
There are also plenty of other examples of this software out there created by everyone from Microsoft to Amazon and Stanford University. These have all received a lot less attention than OpenAI or Google, possibly because they don’t offer up fart jokes or headlines about sentient AI.
Most of these models are not available to the public, but OpenAI has begun opening up access to GPT-3 during its test process, and Google’s LaMDA is available to selected groups in a limited capacity for testing.
Google breaks its chatbot down into talking, listing and imagining, providing demos of its abilities in these areas. You can ask it to imagine a world where snakes rule, ask it to generate a list of steps for learning to ride a unicycle, or just have a chat about the thoughts of dogs.
Where ChatGPT thrives and fails
The GPT-3 software is obviously impressive, but that doesn’t mean it is flawless. Through the ChatGPT function, you can see some of its quirks.
Most obviously, the software has limited knowledge of the world after 2021. It isn’t aware of world leaders who came into power after 2021, and won’t be able to answer questions about recent events.
This is no surprise given the near-impossible task of keeping up with world events as they happen, and then training the model on that information.
Equally, the model can generate incorrect information, getting answers wrong or misunderstanding what you are trying to ask it.
If you get really niche, or add too many factors to a prompt, it can become overwhelmed or ignore parts of the prompt completely.
For example, if you ask it to write a story about two people, listing their jobs, names, ages and where they live, the model can confuse these factors, randomly assigning them to the two characters.
Equally, there are a lot of areas where ChatGPT is really successful. For an AI, it has a surprisingly good understanding of ethics and morality.
When offered a list of ethical theories or situations, ChatGPT is able to offer a thoughtful response on what to do, considering legality, people’s feelings and the safety of everyone involved.
It can also keep track of the conversation, remembering rules you’ve set or information you’ve given it earlier on.
Two areas where the model has proved strongest are its understanding of code and its ability to condense complicated matters. ChatGPT can draft an entire website layout for you, or write an easy-to-understand explanation of dark matter in seconds.
Where ethics and artificial intelligence meet
Artificial intelligence and ethical concerns go together like fish and chips or Batman and Robin. When technology like this is put in the hands of the public, the teams that make it are fully aware of its many limitations and concerns.
Because the system is trained largely using words from the internet, it can pick up on the internet’s biases, stereotypes and general opinions. That means you’ll occasionally find jokes or stereotypes about certain groups or political figures depending on what you ask it.
For example, when asking the system to perform stand-up comedy, it can occasionally throw in jokes about ex-politicians or groups who are often featured in comedy bits.
Equally, the model’s love of internet forums and articles gives it access to fake news and conspiracy theories. These can feed into the model’s knowledge, sprinkling in facts or opinions that aren’t exactly full of truth.
In places, OpenAI has put in warnings for your prompts. Ask how to bully someone, and you’ll be told bullying is bad. Ask for a gory story, and the chat system will shut you down. The same goes for requests to teach you how to manipulate people or build dangerous weapons.
Artificially intelligent ecosystems
Artificial intelligence has been in use for years, but it is currently going through a stage of increased interest, driven by developments across the likes of Google, Meta, Microsoft and just about every big name in tech.
However, it is OpenAI which has attracted the most attention recently. The company has now made an AI image generator, a highly intelligent chatbot, and is in the process of developing Point-E – a way to create 3D models with worded prompts.
In creating, training and using these models, OpenAI and its biggest investors have poured billions into these projects. In the long-run, it could easily be a worthwhile investment, setting OpenAI up at the forefront of AI creative tools.
How Microsoft plans to use ChatGPT in future
OpenAI has attracted a number of big-name investors in its rise to fame, including Elon Musk, Peter Thiel, and LinkedIn co-founder Reid Hoffman. But when it comes to ChatGPT and its real-life uses, it’s one of OpenAI’s biggest investors that will get to use it first.
Microsoft threw a massive $1 billion investment into OpenAI and now the company is looking to implement ChatGPT into its search engine Bing. Microsoft has been battling to take Google on as a search engine for years now, looking for any feature that can help it stand out.
Last year, Bing handled less than 10 per cent of the world’s internet searches. While that sounds tiny, it is more a testament to Google’s grip on the market, with Bing still standing out as one of the most popular alternatives.
With plans to implement ChatGPT into its system, Bing is hoping to better understand users’ queries and offer a more conversational search engine.
It is currently unclear how far Microsoft plans to take ChatGPT within Bing, but this will likely begin with stages of testing. A full implementation could risk Bing getting caught up in GPT-3’s occasional bias, which can delve deep into stereotypes and politically charged territory.