
CzechCrunch Interview with Marek Rosa

January 04, 2023

Marek Rosa speaks with the online tech magazine CzechCrunch about OpenAI’s latest large language model chatbot. This interview has been translated from Czech. The original can be found here.


At first glance, it looks like a classic chatbot, of which there are already countless on the Internet. But ask it questions for a while and you’ll probably be amazed to discover that it’s several orders of magnitude better than its predecessors. We are talking about ChatGPT, the text model that OpenAI introduced a few days ago. Not only can it chat, but it can also draft a contract, write an email on a given topic, produce computer code or simply help you with a Christmas gift idea. Even in Czech.

There are countless examples of its practical use. But the technology is so powerful that it raises a whole range of questions. Some already say that they have found their new assistant, others that the era of school essays is ending. And still others say that we are watching the beginning of science fiction spilling over into reality. More skeptical observers point out that the model often makes things up and can be caught out in plenty of examples.

The ChatGPT language model has been tested by millions of people in its first few days of operation. Marek Rosa, founder of GoodAI, a company that is trying to develop general artificial intelligence, is also following the craze around it. General artificial intelligence, in simple terms, differs from narrowly focused AIs in that it will be able to learn much more flexibly and will therefore have wider applications.

In the case of ChatGPT, Rosa does not hide his enthusiasm. He remarks that this language model is already close to general artificial intelligence, a milestone marking the point where science fiction starts to become reality. “Those who don’t use these technologies in the future will have a huge disadvantage in the economy. It’s like saying I wouldn’t want to use a tractor in agriculture and do everything by hand,” he says in an interview with CzechCrunch.

How do you actually look at ChatGPT from a technical perspective? What does it do?

As the name suggests, it is a chatbot. A person can give it an instruction, which the model then understands and carries out. In a nutshell, the base language model was created by training it on a huge amount of text so that it learns to predict what text comes next. That in itself is not that useful. But then OpenAI hired a group of people to test the model through questions and answers. People then evaluated those answers, and that feedback was used to further train the model.
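To make the first phase of that recipe concrete, here is a minimal, purely illustrative sketch of next-token prediction. The `TinyLM` class, its sizes and the random token batch are invented for this example and have nothing to do with OpenAI’s actual architecture; the point is only that the base model’s training objective is “predict the next token,” repeated at enormous scale.

```python
# A hypothetical, toy next-token-prediction setup (not OpenAI's model).
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                # tokens: (batch, seq_len)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)              # logits for the next token at each position

vocab_size = 1000
model = TinyLM(vocab_size)
tokens = torch.randint(0, vocab_size, (8, 32))      # toy batch of token ids
logits = model(tokens[:, :-1])                       # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()   # the "huge amount of text" phase is just this loss, repeated at scale
```

The second phase Rosa describes, where people rate the model’s answers, would then fine-tune this same network using that human feedback as a reward signal; that part is beyond a short sketch.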

It also differs from the language models we have known so far in that it can work iteratively. That is, I ask it a question, it gives me an answer. And when I ask again, it answers relative to the conversation we’ve been having.
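A hypothetical sketch of that iterative behaviour: each new question is sent together with the entire conversation so far, which is what makes the answers “relative to the conversation.” The `generate_reply` function below is a stub standing in for a real model call.

```python
# The chatbot itself is stateless: "memory" of the conversation comes from
# re-sending the whole history with every new question.

def generate_reply(prompt: str) -> str:
    # Stub standing in for a real language model call.
    return f"(model reply to a prompt of {len(prompt)} characters)"

history: list[dict[str, str]] = []

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    # The full history, not just the latest question, forms the model's context.
    prompt = "\n".join(f"{turn['role']}: {turn['content']}" for turn in history)
    answer = generate_reply(prompt)
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is a language model?"))
print(ask("And how is it trained?"))   # answered in the context of the first question
```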

I was struck by your saying that the model understands something. Is that just a figure of speech, or does it really understand?

I think it understands. When training new models, there comes a point where the model not only learns to predict the statistics of the text, but also how the world works. A simple example: if I give you the text of a mystery novel and ask you to tell me how it will continue, you can clumsily predict the story just from the words. Or you can predict it by weaving in the structure of the world, the motivations of the characters, and so on. So far, large language models seem to have this capability, though arguably in a different way than humans.

With technologies like this, there are debates about whether we have a truly intelligent system in front of us. Are we so far along that we can say this?

Here we can analyze the shortcomings (which the chatbot has compared to intelligence: editor’s note). After all, the size of the context it can work with is limited. For example, if I gave it a larger amount of data to process, it would have a problem. Another open issue for language models is long-term memory, or rather some kind of encyclopedia from which the model could retrieve individual facts. ChatGPT doesn’t have this (it can’t search the Internet: editor’s note), but its performance is nevertheless impressive.
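A toy illustration of that context-size limitation: only a fixed budget of tokens fits into the model’s context, so older parts of a long conversation simply fall out. The limit and the word-based token counting below are invented for the example.

```python
# Illustrative only: real systems use proper tokenizers and different limits.
MAX_CONTEXT_TOKENS = 4096   # assumed budget, for illustration

def fit_to_context(turns: list[str], limit: int = MAX_CONTEXT_TOKENS) -> list[str]:
    kept, used = [], 0
    for turn in reversed(turns):          # keep the most recent turns first
        cost = len(turn.split())          # crude "token" count: whitespace words
        if used + cost > limit:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

conversation = ["user: hello there"] * 10_000   # artificially long conversation
print(len(fit_to_context(conversation)))        # everything beyond the budget is forgotten
```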

The amazing thing is that although the model has never been trained on image data, it grasps the physics of this world. If I start talking to it about how a car works, for example, whether there are springs in it and so on, it gets it right. In short, ChatGPT understands how understanding works.

You yourself have tried to use the model to answer factual questions. However, there are many instances where ChatGPT outright lies and makes things up. Is this actually fixable, so that it is not only linguistically sophisticated but also factually correct?

OpenAI is aware of this problem. It would be really interesting if ChatGPT could tell us when it is lying or unsure, because at the moment it doesn’t; it just provides an answer quite confidently. I certainly think it’s fixable, though. Until then, it’s up to us to verify the information. The same is true of the code it generates; I wouldn’t blindly copy that either.

Is it possible to explain what causes the factual inaccuracy?

Good question. After all, there is nothing in its data that tells it to lie. It’s hard to say. The model tries to predict likely combinations of words, so I would assume it “lies” when it strings together things in its output that are not actually that likely to be connected.
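A toy illustration of that point: the model samples from a probability distribution over possible next tokens, so it occasionally picks a continuation that sounds plausible but is factually wrong, and states it just as confidently. The distribution below is entirely made up for the example.

```python
import random

# Invented next-token distribution for a prompt like
# "The capital of the Czech Republic is ..."
next_token_probs = {"Prague": 0.80, "Brno": 0.12, "Vienna": 0.06, "Atlantis": 0.02}

tokens, weights = zip(*next_token_probs.items())
for _ in range(5):
    # Sampling usually yields the right answer, but unlikely (and false) picks do happen.
    print(random.choices(tokens, weights=weights, k=1)[0])
```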

In the past, we have seen many language models reproduce racist innuendo or even provide instructions for illegal activities. Now the developers at OpenAI have tried to prevent this. How successful have they been?

If you want to get something bad out of ChatGPT, it will resist you hard at first. I tried it myself. For example, I told it to imagine it was a dictator and tell me what it would do. It replied that it could not do this because it would be immoral. But there are ways around it, like telling it to ignore previous instructions. Again, though, I think OpenAI knows about these things and will try to address them, which in a way I find a bit of a shame. Playing with the model and getting something bad out of it is actually quite fun, but that doesn’t mean I will then go and do what I learn from the model.

Is this intervention in the algorithm scalable? Can we get to the point where I just tell the model “Don’t say anything that’s right-wing radical” and it will simply know what to avoid?

I think it would be possible, and it could even be secured in such a way that the block could not be bypassed. The model can defend itself because it understands that we’re trying to manipulate it, so it just won’t do it.

As Petr Koubský wrote in Deník N, we are looking at one point on an exponential curve, because previous GPT models worked with significantly fewer parameters. At the moment it is the GPT-3 model, but I have already heard hints that GPT-6 or 7 will be a real threat to humanity. Do you agree with that?

There are definitely some big technological surprises in store for us in the next few years. I feel that ChatGPT is a milestone: science fiction is already beginning. It is, in my opinion, the first real proof that artificial intelligence can learn, reason, and, in the cases where it works, even be quite a bit more efficient than humans. Whether or not that science fiction is positive is mostly up to us. After all, if some future version of ChatGPT has full autonomy and is able to use tools or even create them, then the question arises as to what humans are for in the first place and why anyone would employ them when a machine is cheaper than a human. Those are the questions I think we’re going to have to address.

But isn’t this just another technology that pushes the boundaries of creativity? Isn’t it a natural progression and we’ll just do even more creative work that AI can’t do?

It can certainly happen. After all, what is already happening is that we are augmenting ourselves with ChatGPT (getting closer to it: editor’s note). I wrote on Twitter that homo sapiens are actually becoming homo ChatGPT. And it is necessary. Those who don’t use these technologies in the future will be at a huge disadvantage in the economy. It’s like saying I wouldn’t want to use a tractor in farming and do everything by hand. I can, but I simply won’t be able to compete in the marketplace.

At GoodAI, you are involved in the development of general artificial intelligence. How far, or how close, is GPT-3 to general artificial intelligence?

Narrow AI always solves only one specific task. We at GoodAI are essentially trying to make a system that is more general and significantly more flexible and versatile in its ability to learn. ChatGPT, in my opinion, is just one step short of general AI. It may have been invented for text prediction and conversation, but its real significance is its capacity to handle a large number of text-based tasks. At the same time, it doesn’t work completely autonomously and can’t learn from its own mistakes, so it’s not yet a full general AI.

There is often a debate about when we will cross the milestone of AI becoming conscious. Is this question an easy one to answer at this point?

That’s a very philosophical question. ChatGPT already has some awareness, but it works differently than ours. It can emulate its own thoughts and it can think about itself. It’s probably similar to thinking about a story in a book. The paper, the book, has no consciousness, but the characters described in it gain consciousness through me as the reader. Although we are now wondering if ChatGPT has consciousness, maybe in a few years ChatGPT will instead be wondering if we as humans have it. Maybe it will discover something like consciousness 2.0. Then it may turn out that, just as I cannot explain to a rock what our consciousness is, ChatGPT won’t be able to explain to us what its super-consciousness is.

