GoodAI CEO, Marek Rosa, speaks with Forbes about the latest wave of AI-enabled content generation tools. This interview has been translated from Slovak. The original can be found here.
Why does a monkey want to build a castle? Because it’s tired of living in a banana republic! We start unconventionally – with a joke created by artificial intelligence. Over the summer, AI showed through Midjourney that it can paint, and more recently it has written essays and advertising texts through ChatGPT, even in Slovak.
Forbes asked Marek Rosa, a Slovak developer and researcher whose company GoodAI is working on the development of general artificial intelligence, how he perceives the new popular services.
Do you think services like Midjourney or ChatGPT have brought a turning point in the general public’s perception of AI this year?
Yes, I agree. 2022 has been a turning point for AI. Until now, it has mostly been scientists doing demos; it wasn’t outwardly visible what use AI could have in everyday life.
Midjourney, Stable Diffusion and now ChatGPT have shown the general public what AI is capable of and that its capabilities can compete with those of experts. Another benefit is that many more people will now take an interest in AI, which will speed up development.
It doesn’t need guiding
Do these services represent something groundbreaking for you as an expert? Have they surprised you in any way?
I was surprised by both systems. The quality of their predecessors wasn’t great, and it was difficult to see how more data and training could improve it.
The phenomenon in recent days has been ChatGPT. American Forbes had this AI write a college essay, marketers on LinkedIn showed how well it writes advertising texts, even in Slovak. And AI-generated poetry (which ChatGPT refuses to label as poetry) is also emerging. What caught your attention?
There are several things that fascinate me about ChatGPT. Especially the way it comprehends user intent accurately, almost as if it’s reading my mind. For example, I entered a simple command in English: “create an xml file in Space Engineers that defines a blueprint of a spaceship in the shape of a cross, three blocks per side.”
And indeed, it did generate the file. I just needed to fix the formatting a bit, and then I was able to upload it to the game and run it. It knew what I wanted from that very brief prompt. It didn’t get lost. It didn’t need additional explanation. If I gave a similar task to a random person, they wouldn’t know where to begin.
It’s also incredible because it has a better sense of the code in our game than I do. It seemed to have at least some idea of the space, since it was able to transform the “cross-shaped” assignment into the XML code needed to build it. This is not a trivial thing.
From this I conclude that it has knowledge not only in breadth, but also in depth. After all, we are dealing here with technical details of a game that maybe only a few thousand people in the world know.
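What makes the “cross-shaped” assignment non-trivial is that the model had to turn a one-line geometric description into concrete block coordinates. A minimal sketch of that step, in Python (the coordinate logic is illustrative; the actual Space Engineers blueprint XML schema that wraps each block is omitted here):

```python
# Illustrative sketch: enumerate grid coordinates for a cross-shaped ship,
# three blocks per arm, centered at the origin. A real Space Engineers
# blueprint wraps each coordinate in XML block elements; that schema is
# deliberately left out - this only shows the geometric reasoning involved.

def cross_coordinates(arm_length=3):
    """Return sorted (x, y, z) grid cells forming a flat cross in the XY plane."""
    cells = {(0, 0, 0)}  # center block
    for i in range(1, arm_length + 1):
        cells.update({(i, 0, 0), (-i, 0, 0), (0, i, 0), (0, -i, 0)})
    return sorted(cells)

blocks = cross_coordinates(3)
print(len(blocks))  # 13 blocks: one center plus four arms of three
```

Even this toy version requires interpreting “three blocks per side” spatially before any XML can be emitted, which is the part the model got right unprompted.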
A bit of a scary finding
And something more mundane that doesn’t involve programming?
I’m intrigued that it can retain the context of our conversation, giving the impression that it has a memory. It also knows several languages. Besides English, I’ve seen Slovak, Czech, and Croatian.
What probably surprised me the most was that the entire model of this AI is somewhere between 5 and 500 gigabytes in size. That’s an amazing compression for such a vast amount of differing knowledge about the world, the ability to learn, to think, to invent.
It’s also a little bit scary because one realizes that “consciousness” doesn’t need more than those few gigabytes, that is, it fits on an average memory card.
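The size estimate is easy to sanity-check with back-of-the-envelope arithmetic. Assuming a GPT-3-scale model of 175 billion parameters (an assumption; OpenAI has not published ChatGPT’s exact size or storage precision):

```python
# Rough storage estimate for a GPT-3-scale language model.
# 175 billion parameters is GPT-3's published parameter count; the bytes
# per parameter is an assumption about how the weights are stored.

params = 175e9          # number of parameters
bytes_per_param = 2     # 16-bit floating point
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.0f} GB")  # 350 GB at 16-bit; half that at 8-bit
```

That lands inside the 5–500 gigabyte range quoted above, which is why a few hundred gigabytes is a reasonable guess for “all of that knowledge compressed.”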
Does ChatGPT have anything to do with the general AI you are developing at GoodAI?
It’s very close to it; in fact, it may be just a few steps away. What’s missing is visual input, and eventually output, to understand the world as we perceive it. The ability to use tools – not just to be a closed chatbot, but to run programs and control bots remotely. Long-term memory, to be able to pursue really long-term goals. And an agenda, that is, some sort of goal of its own.
Why do I think ChatGPT is close to general AI? Because the model was trained only to predict the most likely text. While this is a specific task, the model had to independently learn many other things to perform it correctly, which in effect makes it a very general system.
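The training objective described here – predict the most likely continuation of text – can be sketched in a few lines. This toy bigram model is of course nothing like a large transformer, but the objective is the same in spirit:

```python
# Toy next-token predictor: count word bigrams in a tiny corpus, then
# predict the most likely continuation. Large language models optimize
# essentially this objective, with vastly more context and parameters.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent token observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # "cat" - seen twice after "the", vs "mat" once
```

The point of the comparison: nothing in the objective mentions geometry, jokes, or Slovak, yet learning to predict text well enough forces a model to pick all of that up as a side effect.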
How do you use ChatGPT in your company?
So far, we have been “playing” – trying to understand its possibilities and limits. However, for the last two years we’ve been working on AI Game, a game where the behavior, dialogues and thoughts of individual game agents are emulated by large language models (something like ChatGPT), so we have experience with this area. At the same time, it’s a well-trodden path – there have been great results and progress toward achieving general AI.
You wrote on Twitter that it would be nice to reduce the cases where the AI is unwilling to respond, which is apparently a safeguard against abuse. Wouldn’t that open the way to unethical exploitation?
I understand that the unwillingness to answer is a protection, but it seems to me that the system uses it even in cases where it is not necessary. I hope this will be adjusted – especially since people have already learned to work around it.
On the other hand, the dangers are definitely there – political chatbots will evolve through it and create a flood of marketing content. But I believe the positive potential prevails.
What do you think are the main shortcomings of ChatGPT?
I already mentioned some of them in the passage about general AI. But I’ll summarize it: It could use more context (memory), the ability to use tools (internet, computer, programs), the ability to perceive images and video, and to generate images.
There’s also the problem that ChatGPT sometimes makes up facts when it doesn’t have knowledge about a field. Additionally, although it sometimes refuses to give an answer to certain questions, if a prompt is rephrased, it will provide a response.
The next step? Maybe video from text
I was talking about Midjourney some time ago with artist Dod Dobrik, who has been working with the program since it became available, and he was telling me that in just a quarter of a year, it has made incredible progress. How quickly do platforms like this learn and evolve? What can we expect from ChatGPT a year from now?
It’s impossible to predict exactly what will exist a year from now, but I’m sure it will be something groundbreaking, something that will surprise us just as ChatGPT has. In fact, this tool will start to be used for actual development.
If I had to guess, we’ll see text-to-video systems on a similar level to Midjourney. I saw something along those lines earlier this year. Possibly such a system will be able to edit video through interaction, similar to ChatGPT.
Where are ChatGPT’s limits? Do you think AI will be able to come up with a good joke?
One of the first things I asked ChatGPT to do was to come up with a joke and it did. I was curious to see how it would handle a discovery and creation process. The prompt was that it was a comedian and had to come up with an original joke and describe the process step by step.
What was the joke?
I asked it to make an original joke about a monkey and a castle and to explain how it came up with the idea. The result was this: “Why does the monkey want to build a castle? Because it was tired of living in a banana republic!” The explanation of the creative process made sense too. It wasn’t just pulled out of some database (I couldn’t find it online).
Responds to instructions, has no agenda of its own
I asked if it was going to put me out of a job as a journalist. AI assured me that this would not happen and that its goal is to help me and make my job easier. I don’t know if I want to trust it. Do you think AI will be capable of self-criticism and negative statements about itself?
It will be able to do that – after all, you can try it directly in ChatGPT. The system is nothing more than a statistical model of the texts it has been trained on, and at the same time, it is trained to respond to the user’s instructions. It has no agenda of its own. However, it does have the ability to emulate the motivations of the characters it writes about.
If AI were to come up with three new laws of robotics – what do you think would be important for it?
I think ChatGPT would come up with anything it was asked to right now. It has no agenda or goals of its own. But at the same time, with the right prompt, it can be brought closer to what we want. In any case, I tried asking it to provide new alternatives to Asimov’s Three Laws of Robotics. They made sense and were also directed towards the interests of humanity.
The age of sci-fi is here
By the way, two of the questions I’ve asked you in this interview were devised by the ChatGPT AI (although I edited the language a bit). Can you identify which ones they were?
I’m thinking about it, but I really don’t know.
They were the questions about a good joke and the three laws of robotics.
Speaking of those questions, I would like to conclude with one more small reflection. ChatGPT seems to be able to learn the connections between an incredible number of facts – it can reason about them, model them – and provided with the right question, it will find the answer.
Surely there is new knowledge that we humans don’t yet know. I wonder if, thanks to original questions, we would be able to get answers from ChatGPT about previously unknown connections. The follow-up questions then are where are the limits of ChatGPT, what questions can’t it find answers for, and why. In any case, the age of science fiction has begun!