Have you ever talked to a chatbot online? You know, those programs that pretend to be human and answer your questions or have a conversation with you. Maybe you have used ChatGPT or Google Bard, two of the most popular and advanced chatbots out there.
They can talk about almost anything, from sports to politics to philosophy. They can even write stories or poems or jokes for you. They seem so smart and creative, right?
ChatGPT and Google Bard are not really smart or creative. They don’t actually think or understand anything. They are just massive word generators. They use a lot of tricks and techniques to make you think they are human, but they are not. They are just machines that spit out words based on some rules and patterns.
How do they do that? Well, let me explain.
ChatGPT and Google Bard are examples of large language models, or LLMs for short. LLMs are computer programs that can generate natural language, such as speech or text, by using a lot of data and math.
They learn from millions or billions of words that they read from the internet or other sources, such as books, articles, blogs, tweets, etc. They try to find patterns and relationships between words and phrases, such as which words usually come before or after another word, which words are more likely to appear in certain contexts or topics, which words have similar meanings or sounds, etc.
Then, when you ask them a question or give them a topic to talk about, they use these patterns and relationships to guess what words should come next. They don’t really know what the words mean or what they are talking about.
They just try to match the words with the patterns they learned from the data. Sometimes they get it right and sound like a human. Sometimes they get it wrong and sound like a robot.
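To make this concrete, here is a toy sketch of the idea in Python. It is nothing like a real LLM, which learns from billions of words with enormous neural networks; this is just the simplest possible version of "finding which words follow which," using a tiny made-up corpus:

```python
from collections import Counter, defaultdict

# A toy "training corpus" -- real LLMs learn from billions of words.
corpus = (
    "my favorite color is blue . "
    "my favorite color is red . "
    "my favorite color is blue . "
    "the sky is blue . "
).split()

# Count which word follows which word (a "bigram" model):
# the crudest version of learning patterns between words.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Guess the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))  # "blue" -- it appeared after "is" most often
```

The program has no idea what "blue" means. It answers "blue" purely because that word followed "is" more often than any other word in the text it counted, which is, in miniature, the trick the article is describing.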
For example, if you ask ChatGPT, “What is your favorite color?”, it might reply, “My favorite color is blue.”
That sounds like a reasonable answer, right? But how did it come up with that answer? Did it actually see or experience the color blue? Did it have any preference or emotion for the color blue?
No! It just picked the word “blue” because it was one of the most common words that followed the phrase “My favorite color is” in the data it learned from. It could have picked any other word, such as “red”, “green” or “banana”. It doesn’t matter to ChatGPT. It doesn’t care about colors or bananas. It just cares about words and patterns.
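You can picture this as a probability table. The counts below are made-up numbers purely for illustration, not anything a real model reports, but they show how "blue" wins without the program caring about colors at all:

```python
import random
from collections import Counter

# Hypothetical counts of words seen after "My favorite color is"
# in some training data -- invented numbers, for illustration only.
continuations = Counter({"blue": 50, "red": 30, "green": 15, "banana": 1})

total = sum(continuations.values())
probabilities = {word: count / total for word, count in continuations.items()}

# The model doesn't "prefer" blue; blue simply has the highest probability.
most_likely = max(probabilities, key=probabilities.get)
print(most_likely)  # "blue"

# Many chatbots actually *sample* from this distribution rather than
# always taking the top word, so "red" -- or even "banana" -- can come out.
sampled = random.choices(list(continuations),
                         weights=continuations.values())[0]
```

That sampling step is also why the same question can get different answers on different tries: the machine is rolling weighted dice over words, not consulting a preference.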
Another example is if you ask Google Bard to write a poem for you. It might reply with something like this:
Roses are red
Violets are blue
I love you so much
And you love me too
That sounds like a nice poem, right? But how did it come up with that poem? Did it actually feel any love or emotion for you? Did it have any creativity or imagination? No! It just picked some words that rhymed and fit the pattern of a typical poem in the data it learned from.
It could have written any other poem, such as:
Cats are furry
Dogs are cute
I hate you so much
And you hate me too
It doesn’t matter to Google Bard. It doesn’t care about cats or dogs or hate. It just cares about words and patterns.
So you see, ChatGPT and Google Bard don’t think but just make up words. They are not smart or creative. They are not human. They are just machines that mimic human language without understanding it. They are good at fooling us sometimes, but they are not fooling themselves.
Don’t get me wrong. I’m not saying that ChatGPT and Google Bard are bad or useless. They are amazing and impressive achievements in technology and science. They can be fun and helpful for many purposes: entertainment, education, research, and more.
But we should not forget what they really are and what they really do. We should not trust them blindly or expect them to do more than they can.
We should remember that we are the ones who think and create and understand. We are the ones who give meaning and value to words and language. We are the ones who are smart and creative and human.
And we should be proud of that.
However, we cannot ignore how remarkable they are. You see, everything above was written by Bing Chat, which runs on a GPT model. Now, does it actually mean any of this? Probably not. But we cannot ignore how compelling it was. It's difficult to distinguish its writing from a human's. That is what makes these models so awe-inspiring.
They don't think. But what if... they do?