Nature article – Artificial intelligence finds surprising patterns in Earth’s biological mass extinctions
The idea that mass extinctions allow many new types of species to evolve is a central concept in evolution, but a new study using artificial intelligence to examine the fossil record finds this is rarely true.
Learning in Badger experts improves with episodic memory
AI agents often operate in partially observable environments, where only part of the environment state is visible at any given time. An agent in such an environment needs memory to compute effective actions from the history of its observations and actions.
Benefits of modular approach – generalization
One of the properties of the Badger architecture is modularity: instead of using one big neural network, a Badger agent should be composed of many small Experts that solve the whole task collaboratively.
Workshop on Collective Meta-Learning and the Benefits of Deliberation
GoodAI recently hosted a virtual workshop with a number of external collaborators to address some of the crucial open questions related to our Badger architecture.
Trainability of Badger – Why is Badger so hard to train?
To understand why Badger is hard to train, we first need to understand how Badger learns, using a toy task. We try to understand the plateaus in training: what happens during these periods, and why they occur.
GoodAI’s ToyArchitecture published in PLOS ONE
Research in Artificial Intelligence (AI) has focused mostly on two extremes: either on small improvements in narrow AI domains, or on universal theoretical frameworks that are often uncomputable or lack practical implementations.
Internal Badger Workshop – Summary
We recently organized an internal workshop with a number of external collaborators to advance progress on various challenging topics related to the Badger architecture. In this post, we would like to share the outcomes of the sessions.
Distributed Evolutionary Computation on Deep Reinforcement Learning Tasks
We are currently experimenting with the setup proposed in our Badger paper. One area of exploration is evaluating the suitability of various training settings: supervised learning, deep reinforcement learning (RL), and evolutionary optimization.
Neural Networks in Unity using Native Libraries
This guide shows how to run neural networks in Unity using PyTorch's C++ API. We can reuse existing Python-based models by freezing their execution trace into a binary file that is loaded by the library at runtime.
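On the Python side, "freezing the execution trace" corresponds to TorchScript tracing. A minimal sketch (assuming PyTorch is installed; `TinyNet` and the file name are illustrative, not from the guide):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy model standing in for whatever Python-based model you have."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example_input = torch.rand(1, 4)

# Freeze the execution trace into a self-contained TorchScript module.
traced = torch.jit.trace(model, example_input)
traced.save("tinynet.pt")

# The saved binary can be loaded at runtime by the C++ API,
# e.g. torch::jit::load("tinynet.pt") inside a Unity native plugin.
reloaded = torch.jit.load("tinynet.pt")
assert torch.allclose(model(example_input), reloaded(example_input))
```

Tracing records the operations executed for the example input, so models with data-dependent control flow may need `torch.jit.script` instead.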
Implementation of Generative Teaching Networks for PyTorch
At GoodAI, we’re interested in multi-agent architectures that can learn to rapidly adapt to new and unseen environments. We expect this behavior and adaptation to be learned through communication among homogeneous units inside a single agent, allowing for better generalization.