Self-improving artificial intelligence that can learn new tasks from small amounts of data is a crucial step for the advancement of strong artificial intelligence.
Current artificial intelligence is limited in scope and far from human-level intelligence. One of the key missing components is the ability to pursue multiple goals that change dynamically and depend on knowledge acquired from previous tasks.
If we are to develop artificial intelligence (AI) capable of learning as humans do, it needs to be tested in complex environments, just as humans are.
The idea that mass extinctions allow many new types of species to evolve is a central concept in evolution, but a new study using artificial intelligence to examine the fossil record finds this is rarely true.
AI agents often operate in partially observable environments, where only part of the environment state is visible at any given time. An agent in such an environment needs memory in order to compute effective actions from the history of its observations and actions.
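As a rough illustration of this idea, here is a minimal sketch in Python of an agent that keeps a short history of past observations and actions and uses it to act; the class name `MemoryAgent`, the buffer length, and the toy parity rule standing in for a real learned policy are all assumptions made for the example, not part of any particular agent implementation.

```python
# Minimal sketch (illustrative only): an agent that keeps an internal memory
# of its observation-action history so it can act in a partially observable
# environment. A real agent would replace the toy rule with a recurrent
# policy network.
from collections import deque


class MemoryAgent:
    def __init__(self, history_len=8):
        # Fixed-length buffer of (observation, action) pairs acts as memory.
        self.history = deque(maxlen=history_len)

    def act(self, observation):
        # Toy rule: choose the action from the parity of recent observations.
        recent = [obs for obs, _ in self.history] + [observation]
        action = sum(recent) % 2
        self.history.append((observation, action))
        return action


if __name__ == "__main__":
    agent = MemoryAgent()
    for obs in [1, 0, 1, 1, 0]:
        print(agent.act(obs))
```

The point of the sketch is only that the action depends on the stored history, not just the current observation, which is exactly what a memoryless agent in a partially observable environment cannot do.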
One of the properties of the Badger architecture is modularity: instead of one big neural network, Badger should be composed of many small Experts that solve the whole task collaboratively.
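As a rough sketch of this kind of modularity, the example below sets up many identical small `Expert` objects that each hold a local state and collaborate by reading a shared message in repeated communication rounds; the class names, state dimension, and update rule are illustrative assumptions for the example, not the actual Badger implementation.

```python
# Minimal sketch (illustrative, not GoodAI's implementation): many small,
# identical Experts collaborate via message passing instead of one big
# monolithic network.
import numpy as np


class Expert:
    def __init__(self, rng, dim=4):
        self.state = rng.standard_normal(dim)

    def step(self, message):
        # Update the local state from the shared message.
        self.state = np.tanh(self.state + message)
        return self.state


def communicate(experts):
    # One collaboration round: every expert reads the mean state of the group.
    mean_state = np.mean([e.state for e in experts], axis=0)
    return [e.step(mean_state) for e in experts]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    experts = [Expert(rng) for _ in range(8)]   # many small experts
    for _ in range(3):                          # a few communication rounds
        states = communicate(experts)
    print(np.round(states[0], 3))
```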
GoodAI recently hosted a virtual workshop with a number of external collaborators in order to address some of the crucial open questions related to our Badger Architecture.
To understand why Badger is hard to train, we first need to understand how Badger learns, using a toy task. We examine the plateaus in its learning: what happens during these periods and why the plateaus occur.
Research in Artificial Intelligence (AI) has focused mostly on two extremes: either small improvements in narrow AI domains, or universal theoretical frameworks that are often uncomputable or lack practical implementations.