
What kind of AGI do we want?

November 22, 2020

By Olga Afanasjeva
GoodAI 

Interior of the Sagrada Família basilica by Antoni Gaudí in Barcelona, Spain. Photo credit: SBA73 on Flickr.


To me, this question means: what do humans value, and what do we want the world to be like in the millennia to come? What kind of universe do we want to create, and to create it, do we need an AGI that is a successor to the human mind, or something qualitatively different?

Next to the notions of mathematical perfection, equilibrium, and optimality, the human mind comes across as somewhat imperfect. But the mind’s erratic nature is exactly what makes us dream and sparks our progress: we’re impulsive, curiosity-driven, looking for excitement and surprise, thrilled by our ability to discover, create and influence things in unexpected ways.

In contrast, let’s imagine a universe where everything is perfectly optimal and rational. What would an “optimal” universe look like? An orderly world where everything is fully predictable and perfectly balanced would simply be boring. It recalls the notion of the heat death of the Universe: entropy has reached its maximum, the system is stable, and nothing new ever happens. We probably don’t want an AGI that steers every task toward the most stable, most controllable, “safest” solution.

This is, of course, an extreme example. What we see today is a range of approaches to AI that naturally absorb and reflect the values (preferences, objectives) of their designers. For instance, the optimization framework has been associated with economic value and performance; statistical inference with objective knowledge and evidence; symbolic frameworks with logic and implication. But the values themselves haven’t necessarily been a conscious driver of the design choices.

If we want future advanced AI to perpetuate what we as humans value deeply (curiosity, creativity, a need to be surprised and to marvel), then let’s proactively think about a framework that includes these values and allows us to instill them from the outset.

In AGI development, we can set two key metrics. The first is how well an agent can adapt to new tasks. The second is how novel and creative the agent’s behaviors are*. The second metric is much fuzzier and harder to measure than the first, but it is equally important. To solve a complicated problem, you need to be able to redefine it, and that is where an AI lacking creativity would fall short.
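Fuzzy as it is, the novelty metric can be made somewhat concrete. Below is a minimal sketch in the spirit of the novelty search idea championed by Kenneth Stanley (see the footnote), which scores a behavior by its average distance to the most similar behaviors seen so far. It is an illustration only, not GoodAI’s method: the two-dimensional behavior descriptors, the novelty threshold, and the archive policy are all assumptions made for the example.

```python
import numpy as np

def novelty_score(behavior, archive, k=15):
    """Mean distance from a behavior descriptor to its k nearest
    neighbors in an archive of previously seen behaviors.
    Higher scores mean the behavior is more novel."""
    if len(archive) == 0:
        return float("inf")  # the very first behavior is maximally novel
    dists = np.linalg.norm(np.asarray(archive) - behavior, axis=1)
    nearest = np.sort(dists)[: min(k, len(dists))]
    return float(nearest.mean())

# Hypothetical usage: each agent rollout is summarized as a small
# vector (a "behavior characterization"), e.g. the agent's final
# position in an environment, and sufficiently novel behaviors are
# added to the archive.
archive = []
for rollout in range(100):
    behavior = np.random.rand(2)   # stand-in for a real rollout summary
    score = novelty_score(behavior, archive)
    if score > 0.1:                # novelty threshold, arbitrary here
        archive.append(behavior)
```

The key design choice in such a scheme is the behavior characterization itself: the score only rewards the kind of novelty that the descriptor can express, which is one reason the metric remains fuzzy in practice.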

One can also look at this as a proactive versus a reactive approach. What we humans value is setting goals for ourselves, taking action, and venturing into the unknown, where a measurable task isn’t a starting point but something we have yet to uncover.

Naturally, we would want our future minds to inherit this virtue. I imagine an AGI that, as an extension of our own minds, bears both traits: bold problem-seeking (creativity, curiosity) and calculated problem-solving (adaptation to novel tasks). With such an AGI at hand, we humans will keep creating an increasingly exciting and challenging universe to marvel at.

Footnotes

* Credit to scientists Kenneth Stanley and Jeff Clune, who champion creativity- and curiosity-driven AI. For some of our top picks on this and related topics, check out the Recommended Literature section here.
