
HALLM: An Agent that Observes and Acts through a Python Terminal

August 24, 2023

At GoodAI, we are deeply committed to the advancement of safe AGI. Large language models (LLMs) undoubtedly offer significant power, but on their own, they have limitations — notably, the inability to learn new skills post-deployment. It’s here that our innovative approach shines. We’ve designed agents that not only harness the foundational capabilities of LLMs but also significantly expand upon them. Through our unique architecture and novel methods, our agents give LLMs the capacity for continual learning, enabling them to understand complex instructions, adapt over time, and excel at intricate reasoning and problem-solving tasks.

HALLM can reach out to the user to ask for more information, or when it thinks the user can help it with something, like installing a Python package or rebooting the system. In the video above, HALLM uses the built-in function `input` to ask the user to suggest a topic for a haiku it is writing.

In this video, we show a scenario in which HALLM is presented with an unknown class object to interact with and a task to accomplish afterward. The initial query from the user is disguised as a comment (green). After that, the LLM starts interacting with the Python terminal with that goal in “mind”. In yellow, we see the lines that have been executed by HALLM in the terminal, and blue text represents the output of those instructions. The agent quickly reacts when it finds unexpected information or errors, and works its way toward the goal set by the user. When calling the LLM, all yellow text is tagged as “assistant” and all blue text as “user”, reinforcing the illusion that the LLM is interacting with a real Python terminal.
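To make this concrete, here is a minimal sketch of how such a transcript could be packed into chat messages, with executed code tagged as “assistant” and terminal output tagged as “user”. The function and message structure below are illustrative assumptions, not the exact code from the repository.

```python
# Illustrative sketch only: the real HALLM message format may differ.
def build_messages(system_prompt: str, turns: list[tuple[str, str]]) -> list[dict]:
    """Pack (executed_code, terminal_output) pairs into chat messages.

    Executed code (yellow in the video) is tagged "assistant";
    terminal output (blue) is tagged "user".
    """
    messages = [{"role": "system", "content": system_prompt}]
    for executed_code, terminal_output in turns:
        messages.append({"role": "assistant", "content": executed_code})
        if terminal_output:
            messages.append({"role": "user", "content": terminal_output})
    return messages
```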

Code: GitHub (open-source)

We envisioned an agent with the prowess to execute code, interact seamlessly with various system components, and augment its abilities through existing software, other online resources, and external references.

However, the initial test stages were marked by a series of challenges stemming from inherent aspects of LLMs:

  • One-shot thinking
    • Having been trained to be helpful assistants, current LLMs always try to provide the user with an answer right away, and they often fail miserably when the answer requires elaboration or the instructions are unclear.
  • Lack of initiative
    • When presented with incomplete instructions, or when the agent hits a dead end, LLMs are usually very reluctant to identify the missing pieces of information, ask the user for help, or investigate the issue. Instead, their preferred tool is…
  • Hallucinations
    • We found that LLMs tended to invent results or user responses whenever they had to work with unknown information or wait for a result to be ready. This was a direct consequence of the innate longing of LLMs to generate immediate and full solutions.

We nevertheless had an idea for getting around this and steering the LLM towards a more exploratory and dynamic state of mind: make it believe that it is interacting with a Python terminal. We expected the terminal environment to incentivize the LLM to think and act the way people usually do with terminals, and Python seemed like a good candidate: a language that is both well represented in training data and easy to work with.

This is how we ended up developing this prototype, which became the baseline for our agents from that moment onwards.

In the prototype, the LLM is prompted to generate Python code that looks exactly like what you would see in a Python terminal (in which every line starts with “>>>” or “...”).
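For illustration, a model response in this format might look like the following (a made-up sample — the variable name is an assumption, not actual HALLM output):

```python
>>> # Goal: inspect the unknown object given by the user
>>> type(mystery_object)
>>> for name in dir(mystery_object):
...     print(name)
...
```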

We then take those lines of code and execute them line by line in a simulated but feature-complete Python terminal. Every time the terminal responds with any kind of output (printed text, an unhandled exception, a syntax error, etc.), the line-by-line execution is interrupted and the output is forwarded to the LLM.

Importantly, we also discard the non-executed lines from the LLM’s response. The result is a conversation between the LLM and the terminal, in which the LLM generates code and the terminal runs it, and which sustains a strong illusion of line-by-line interaction from the LLM’s point of view.
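A minimal sketch of one way to implement this loop, using Python’s standard `code.InteractiveConsole` as the simulated terminal; helper names like `strip_prompt` and `run_step` are our own for illustration and do not necessarily match the open-source implementation:

```python
import code
import io
from contextlib import redirect_stdout, redirect_stderr

def strip_prompt(line: str) -> str:
    """Remove the '>>> ' / '... ' prefix the LLM was asked to generate."""
    for prefix in (">>> ", "... ", ">>>", "..."):
        if line.startswith(prefix):
            return line[len(prefix):]
    return line

def run_step(console: code.InteractiveConsole, generated: str):
    """Execute generated lines one by one until the terminal produces output.

    Returns (executed_lines, output, discarded_lines). Any printed text,
    syntax error, or unhandled exception interrupts execution; the remaining
    lines are discarded and only the output is sent back to the LLM.
    """
    lines = generated.splitlines()
    executed = []
    for i, raw in enumerate(lines):
        buffer = io.StringIO()
        with redirect_stdout(buffer), redirect_stderr(buffer):
            console.push(strip_prompt(raw))
        executed.append(raw)
        output = buffer.getvalue()
        if output:
            return executed, output, lines[i + 1:]
    return executed, "", []
```

In the real agent, the captured output would then be appended to the conversation as a “user” message, as described above, and the LLM would be called again to continue from there.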

Thanks to this illusion, the agent is made aware of its mistakes and incentivized to learn from them, at least as far as the LLM’s context can hold. Apart from this, the agent can communicate with the user to request additional information, using the built-in `input` function, and it can signal that it has achieved the goal of the task by calling `done()`. At this point, control is handed back to the user, whose messages are shown to the agent as comments.
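The sketch below shows one way the simulated terminal could expose `done()` and a user-facing `input` inside the agent’s namespace; the class and callback names are assumptions for illustration, not the actual API of the open-source code:

```python
import code

class SimulatedTerminal:
    """Wraps the interactive console and injects `input` and `done()` (illustrative)."""

    def __init__(self, ask_user):
        # `ask_user` is an assumed callback that shows the agent's question
        # to the real user and returns their reply as a string.
        self.finished = False
        namespace = {"input": ask_user, "done": self._done}
        self.console = code.InteractiveConsole(locals=namespace)

    def _done(self) -> None:
        # The agent calls done() in the terminal when it believes the task is
        # complete; control then returns to the user, whose next message is
        # shown to the agent as a '# comment' in the transcript.
        self.finished = True
```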
