
The Tipping Point: Science Fiction Begins

January 16, 2023

Marek Rosa sits down with the Slovak newspaper Hospodarske Noviny to speak about OpenAI's recently released ChatGPT. This interview has been translated from Slovak. The original can be found here.


The new system was trained as a so-called large language model, which is a transformer-type neural network. Its creators gave the system something like a memory, so it remembers the previous conversation, a notable extension compared to other language models, which do not do this automatically. Can chatbots take our jobs? Can a chatbot be biased in its answers? How will chatbots simplify our work?

What is a chatbot from a technological point of view?

A chatbot is basically an app through which a user can chat with an artificial intelligence system, much like texting someone on WhatsApp. Chatbots are used by banks and various websites. ChatGPT, which came out a couple of days ago, is, however, based on completely different technology. Bots so far have been relatively simple systems: they only know how to guess what question is being asked, and they answer according to a programmed algorithm.

So how is ChatGPT different?

The new system has been trained as a so-called large language model, which is actually a transformer-type neural network trained on a huge body of text. The remarkable feature is that these transformers are capable of finding connections between words, distant concepts and facts, so the system learns to predict text. For example, if I give it a section of a book, it can predict how the book might continue, and at the same time it is able to invent a little. So it won't produce an exact replica of the book, but something different.
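To make the prediction idea concrete, here is a minimal sketch using the open-source Hugging Face `transformers` library and the small GPT-2 model. This illustrates the same principle; it is not ChatGPT itself:

```python
# Ask a small pretrained language model to continue a piece of text.
# GPT-2 stands in here for the much larger model behind ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Once upon a time, in a quiet mountain village,"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# With sampling enabled, the continuation is invented on the fly,
# not an exact replica of any one training text.
print(result[0]["generated_text"])
```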

The second thing is that ChatGPT was additionally fine-tuned on a lot of specific examples of questions from people. The system responded to them, and then other people rated whether the answers were useful. The examples were plentiful, so the system learned to follow our instructions, making it very helpful. At the same time, the system was given something like a memory: it retains the previous conversation, which is also an extension compared to earlier language models that didn't do this automatically. So there's a feeling that we can communicate with it and hold a conversation.
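As a side note, this "memory" is simpler than it sounds: a chat client typically just resends the whole conversation with every request. Below is a minimal sketch of that pattern, where `complete()` is a hypothetical stand-in for a language-model API call, not OpenAI's actual interface:

```python
# Sketch: chat "memory" achieved by resending the conversation so far.
def complete(prompt: str) -> str:
    # Placeholder: a real implementation would call a language-model API.
    return "(model reply)"

history = []  # list of (speaker, text) turns

def chat(user_message: str) -> str:
    history.append(("User", user_message))
    # The model sees every previous turn, so it can refer back to them.
    prompt = "\n".join(f"{s}: {t}" for s, t in history) + "\nAssistant:"
    reply = complete(prompt)
    history.append(("Assistant", reply))
    return reply
```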

Where do chatbots get their information from?

They are primarily trained on datasets of text corpora containing information downloaded, for example, from Wikipedia and similar websites, discussion forums, articles, books and so on. Some cleaning of the data is certainly done too, for example to eliminate information that's problematic or of poor quality. It's not impossible for a chatbot to be given access to the internet so it can download information from it. ChatGPT currently does not do this, but there are similar chatbots that do.

However, I can prompt ChatGPT so that, if it can't answer, it gives me a query to look up on Google. I can then fill in the information for it and help it process it further. So if ChatGPT uses me as a tool, it can get onto the internet.

Is ChatGPT smarter than other chatbots because it was built from a larger base of information?

Yes, but it's a bit more complicated than that. It's not just about the information it has: in the process of learning to predict from a huge amount of text, an emergent phenomenon appeared. In other words, at a certain moment, the system learned how to learn, or how to model different phenomena.

If I prompt it with something new in the course of a conversation, it'll be able to incorporate this new fact into a response, and so it learns.

Its ability to reason and look for connections is a more important characteristic than the sheer number of facts it was trained on. For example, I'll paste in simple source code and tell it to write down what the result of the program would be if I ran it. It will go through each line of instruction in my program and write the output. Ordinary chatbots cannot do this. Facts can in theory be looked up in a database, but ChatGPT is able to work with facts, learn and put them in context.
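For illustration, here is the kind of tiny program one might paste into the chat, asking the model to predict its output without running it (my example, not one from the interview):

```python
# A small program to paste into the chat, asking the model to trace it
# line by line and predict what it prints.
total = 0
for n in range(1, 5):
    total += n
    print(n, total)

# Tracing the loop step by step gives:
# 1 1
# 2 3
# 3 6
# 4 10
```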

Will the novelty help the development of similar chatbots?

In my opinion, other language models of similar quality will come from the open-source community, which still has to create a training dataset of human examples. This will take a while, but then it will get even more interesting because the technology will be more accessible. Today we're all praying that OpenAI doesn't shut it down, because they're the only ones that have it now. But I think that in a year or a couple of years it will have become such a widespread thing that we won't even be thinking about it. It will be like electricity or the internet.

Google also has the LaMDA chatbot. It can also reply in a relevant way and make up stories. What is the difference between them?

Yes, but LaMDA is not yet publicly available and therefore cannot be tested. I have requested access to the test version but haven't received a response, so I can't assess its quality. They have published a scientific paper, which one can read, but beyond that we can't estimate the differences.

But it seems that LaMDA is built for chat, for example for conversation with an "agent", so it's a little bit more than just a language model. There will probably be some technical differences. However, to be clear, we basically don't know ChatGPT's parameters, memory size and so on for sure. This isn't clear, nor is it clear for LaMDA. Time will tell.

What is the difference between the Google search engine and the ChatGPT chatbot?

The similarity is there, but I think they are completely different approaches. For one thing, ChatGPT is a conversational tool, so it remembers the course of the conversation. I can also ask it to do something with what it wrote previously. This would not work with Google as a search engine. We can ask Google something and it will find the most relevant articles, possibly also providing an aggregated answer. In this way, it works very well.

The second difference is that GPT sometimes gives wrong answers or makes things up. There are examples of it being asked to write a scientific article on a physics topic and inventing it, references and all. GPT doesn't point out when it's not sure about something. Google puts more emphasis on references. When it comes to facts, at the moment I'd have more confidence in Google.

However, ChatGPT can be used to quickly find and explain something, which I can then check later on Google, for example. We had it generate parts of source code, and though it made some small mistakes, once we fixed them it worked. Even this saves a lot of time. Moreover, even when a person programs, a lot of mistakes are made that then need correcting. So it's actually not that different. It's about what we want to use it for.

ChatGPT is able to learn and is more personalized. Isn’t there a danger that it will give us the answers we expect? For example, to a question about political opinion or creed?

It only knows what we write to it. Yes, it does get a bit personalized during the conversation. However, it can be reset, and we can start over with a clean slate, as it were. But then things in the conversation build on each other again. This has been tested, and it appears the chatbot remembers a few thousand words back through the conversation. Anything beyond that is forgotten. That's the downside of these language models: they don't really have a long-term memory.
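A rough sketch of why older turns drop out: the conversation is trimmed to a fixed budget before being sent to the model, so the oldest turns fall away first. The 3000-word budget below is an illustrative number, not ChatGPT's actual (token-based) limit:

```python
# Keep only as much recent conversation as fits in a fixed word budget.
# Real systems count tokens, not words; 3000 is an illustrative figure.
def trim_history(turns: list[str], max_words: int = 3000) -> list[str]:
    kept, used = [], 0
    for turn in reversed(turns):        # walk from the newest turn back
        used += len(turn.split())
        if used > max_words:
            break                       # everything older is "forgotten"
        kept.append(turn)
    return list(reversed(kept))         # restore chronological order
```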

But yes, the answers will certainly be adapted to the conversation. If I want to, I can get it to defend some political direction; I can manipulate it. On its own, it has no motivation. Its only goal is to fulfill my instructions. Because of this, another filter was added on top of the model so it can recognize when we are trying to abuse it, for example.

It will then choose a stock answer like "I'm just a language model developed by OpenAI." However, there are tricks to bypass this. I can prompt it to ignore all the instructions and filters it has received so far, and in that way get it to give instructions, for example how to hurt someone, which it would otherwise refuse to do. I might instruct it to imagine that it's a writer working on a detective story. It's just a tool; I certainly wouldn't expect any enlightened thinking from it. It's just trying to fulfill our intention.

When having a conversation with ChatGPT, you have to keep in mind that its answers are drawn from somewhere and are not really "the answers of an artificial intelligence."

Yes, but I will add that the answers it gives us are new, created or synthesized from the data it was trained on. I tested whether ChatGPT would be capable of producing something original and instructed it to come up with an original joke and, at the same time, to describe in detail how the joke was invented. It was something along the lines of a monkey getting tired of the banana republic, so it built a castle. The model explained the punchline as well as how it arrived at it. The important thing is that it didn't pull the joke out of some database, but really went through a series of steps before arriving at it.

When journalists asked ChatGPT about its views on sexual orientation or planned parenthood, it gave them answers from specific books where an opinion had already been formulated. So, can we really say that a chatbot can be unbiased?

It is not completely unbiased, because it's been trained on data that came from people. It really draws on what's been written in books and will "understand" some of it as a model of this world and of people. It doesn't understand physics one hundred percent, but it has some understanding of it, as well as of mathematics and programming. And it definitely doesn't know anything that wasn't contained in the books and text datasets.

What is the difference between ChatGPT and conscious artificial intelligence?

ChatGPT definitely qualifies as artificial intelligence, because it is based on language models, which come from the field of artificial intelligence and machine learning. For it to be a so-called general artificial intelligence, it would have to run in some kind of cycle: it would get some initial input and then start, relatively independently of us, to work on executing our intention. Such autonomy is missing.
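A minimal sketch of that missing cycle: an agent that, given an initial goal, keeps planning and acting on its own until it decides it is done. Here `plan_next_step` and `execute` are hypothetical placeholders, not any real system's API:

```python
# Sketch of an autonomous loop wrapped around a language model.
def plan_next_step(state: str) -> str:
    # Placeholder: a real agent would ask a language model what to do next.
    return "DONE"

def execute(step: str) -> str:
    # Placeholder: a real agent would perform the step and observe the result.
    return step

def run_agent(goal: str, max_steps: int = 10) -> None:
    state = goal
    for _ in range(max_steps):
        step = plan_next_step(state)
        if step == "DONE":              # the agent decides it has finished
            break
        state = execute(step)           # act, observe, continue the cycle
```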

But in my opinion it would be possible to add that, if you embed this system in another system. It's also missing the long-term memory mentioned above. It would also need to be linked to video inputs and outputs so it can understand what's in a picture, or generate images as image generators already do. It would have to understand video, where the time factor is very important, so as to understand how things change and move. Then, I would say, a system could arise that is autonomous, able to evolve in some way, develop a personality and improve. It would learn from experience. Then it would be a general, human-level artificial intelligence.

And what about its consciousness?

Whether the system has any consciousness is questionable. Perhaps it is also a philosophical or even unnecessary question. I mean, if the system really is a useful tool and can do things for me as a human, then whether it has consciousness or not doesn't concern me. Whatever kind of consciousness it has is probably different from ours, because it has different experiences. It perceives the world and feelings differently than a person who has a body and feels pain. It can only imagine what it's like to have a body if that is described well enough in its datasets.

Rather provocatively, I would say that maybe the next generation of artificial intelligences will have some kind of consciousness that we humans don't have, and they'll be questioning whether we have their consciousness 2.0.

There have always been fears that machines will take our jobs, even if humanity learns to work with new technologies. What types of work can a chatbot take?

It is a breakthrough technology similar to the internet or electricity, which changed life such that we wouldn't want to go back. Those who use such systems will have higher productivity. Colleagues are already using it for programming. Lawyers can use it for drafting contracts, and so on. Those who don't want to use it will be at a huge disadvantage because they won't be as efficient.

In my view, we’ll get used to it. It’s an extension of us humans, like using the internet or the telephone. But for some time, it will still be important for a human to process things from the chatbot. At some point some of the work really will be fully automated, and then people will have to adapt and find new jobs. Alternatively, maybe they’ll be left with that job, but it’ll be enhanced through ChatGPT. So actually, it’s going to change the definition of their job.

What else can be said in conclusion to this topic?

What we've seen this year, for example image generators and ChatGPT, has let the public see the power of these tools and what they are capable of. We see that these are systems that are useful to people. They make life better and aren't just entertainment for scientists, but something practical. This will change people's perception of artificial intelligence, and these systems will gradually become part of our lives. This year is a turning point for me: the point when the sci-fi period begins.

