We ran the first Meta-Learning & Multi-Agent Learning Workshop (MLMA) in 2020 to bring together world-renowned experts and advance the fields of meta-learning and multi-agent learning. It was held in hybrid form, at the GoodAI headquarters in Prague and online. Below you can find information about the workshop and watch the videos.
The MLMA 2020 workshop aimed to answer some critical questions:
- Can we build a general AI agent that is composed of many (homogeneous) units that were meta-learned to communicate in order for the agent to learn new tasks in a continuous and open-ended manner?
- Can these units be meta-learned to communicate learned credit, form new topologies, modulate other units, grow new units, learn new motivations, and more?
- Can this lead to an agent that restructures itself internally in order to adapt better to future challenges?
The workshop took an interdisciplinary approach, drawing on the fields of machine learning, meta-learning, artificial life, network science, dynamical systems, complexity science, collective computation, social intelligence, creativity and communication, and more. The idea for this workshop came from the type of questions we are tackling while working on our Badger Architecture.
Core areas of focus
- Can we frame policy search as a multi-agent learning problem? (That is, how can units coordinate to learn new tasks together?)
- Can we frame it as a meta-learning problem?
- Can we frame it as just another deep learning architecture, e.g., an RNN with shared weights?
- What is the Minimum Viable Environment, i.e., the most minimalistic environment that will support learning a general learning algorithm capable of generalizing to more complex environments that are interesting for humans?
- How can we add intrinsic motivation, so the agent drives itself, without explicit goals, towards open-ended development?
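To make the "RNN with shared weights" framing above concrete, here is a minimal sketch of many homogeneous units that all share one set of weights and communicate through a pooled message each step. All names, sizes, and the mean-pooling communication scheme are illustrative assumptions, not the actual Badger Architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration
N_UNITS, HIDDEN, MSG = 4, 8, 3

# A single set of weights shared by every unit ("homogeneous" units)
W_h = rng.normal(scale=0.1, size=(HIDDEN + MSG, HIDDEN))  # state update
W_m = rng.normal(scale=0.1, size=(HIDDEN, MSG))           # message head

def step(states):
    """One communication round: every unit emits a message, reads the
    mean message of all units, and updates its hidden state with the
    same shared weights (an assumed pooling scheme, not GoodAI's)."""
    messages = np.tanh(states @ W_m)                 # (N_UNITS, MSG)
    pooled = messages.mean(axis=0)                   # shared "blackboard"
    inp = np.concatenate(
        [states, np.tile(pooled, (N_UNITS, 1))], axis=1
    )                                                # (N_UNITS, HIDDEN+MSG)
    return np.tanh(inp @ W_h)                        # new hidden states

states = np.zeros((N_UNITS, HIDDEN))
for _ in range(5):                                   # unrolled like an RNN
    states = step(states)
print(states.shape)
```

Because every unit applies the same `W_h` and `W_m`, meta-learning here would mean optimizing that one shared weight set so the resulting communication dynamics solve new tasks, rather than training each unit separately.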