MLMA Workshop 2020

We ran the first Meta-Learning & Multi-Agent Learning Workshop (MLMA) in 2020 to bring together world-renowned experts and advance the fields of meta-learning and multi-agent learning. It was held in hybrid form, at the GoodAI headquarters in Prague and online. Below you can find information about the workshop and watch the videos.

The GoodAI team taking part in the Meta-learning & Multi-agent Learning Workshop

The MLMA 2020 workshop aimed to answer some critical questions:

  • Can we build a general AI agent that is composed of many (homogeneous) units that were meta-learned to communicate in order for the agent to learn new tasks in a continuous and open-ended manner?
  • Can these units be meta-learned to communicate learned credit, form new topologies, modulate other units, grow new units, learn new motivations, and more?
  • Can it lead to an agent that adapts its internal structures in order to better adapt to future challenges?

The workshop took an interdisciplinary approach, drawing on machine learning, meta-learning, artificial life, network science, dynamical systems, complexity science, collective computation, social intelligence, creativity and communication, and more. The idea for this workshop came from the kinds of questions we are solving while working on our Badger Architecture.

Core areas of focus

  1. Can we frame policy search as a multi-agent learning problem? (Learn how units can coordinate to learn new tasks together.)
  2. Can we frame it as a meta-learning problem?
  3. Can we frame it as just another Deep Learning architecture, e.g. RNN with shared weights?
  4. Minimum Viable Environment: what is the most minimal environment that will support learning a general learning algorithm, one that can generalize to more complex environments of interest to humans?
  5. How can we add intrinsic motivation, so that the agent drives itself, without explicit goals, towards open-ended development?
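To make the framing in points 1–3 concrete, here is a minimal, purely illustrative sketch of an agent built from many homogeneous units: every unit runs the same recurrent update (shared weights) on its own private state and coordinates with the others only through exchanged messages. All names, sizes, and the broadcast communication channel are our own illustrative assumptions, not the actual Badger implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_UNITS, OBS, HIDDEN, MSG = 8, 6, 16, 4

# One shared weight set used by EVERY unit (the shared policy).
W_x = rng.standard_normal((OBS, HIDDEN)) * 0.1     # observation -> hidden
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # hidden -> hidden
W_m = rng.standard_normal((MSG, HIDDEN)) * 0.1     # incoming message -> hidden
W_out = rng.standard_normal((HIDDEN, MSG)) * 0.1   # hidden -> outgoing message

hidden = np.zeros((N_UNITS, HIDDEN))   # each unit keeps its OWN state...
messages = np.zeros((N_UNITS, MSG))    # ...and emits its own message

def step(hidden, messages, obs):
    """One communication round: all units apply the same update rule."""
    # Toy broadcast channel: each unit reads the mean of all outgoing messages.
    inbox = messages.mean(axis=0, keepdims=True)              # (1, MSG)
    hidden = np.tanh(obs @ W_x + hidden @ W_h + inbox @ W_m)  # shared weights
    messages = hidden @ W_out                                 # new messages
    return hidden, messages

# Each unit sees a different slice of the input, so states diverge
# even though the update rule is identical everywhere.
obs = rng.standard_normal((N_UNITS, OBS))
for _ in range(5):
    hidden, messages = step(hidden, messages, obs)

print(hidden.shape, messages.shape)  # (8, 16) (8, 4)
```

Framed this way, "learning" would mean meta-learning the shared weights so that the units' communication implements an inner learning algorithm, which is exactly the multi-agent / meta-learning / shared-weight-RNN trichotomy in the questions above.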

Full list of speakers and talks

  • Jan Feyereisl and Marek Rosa (GoodAI). Meta-Learning and Multi-Agent Learning Workshop and Badger Introduction.
  • Ettore Randazzo and Alexander Mordvintsev (Google Research). End-to-End Differentiable Self-Organising Systems
  • Junhyuk Oh (DeepMind). Discovering Reinforcement Learning Algorithms 
  • Deepak Pathak (CMU). Intelligence Without a Brain 
  • Anuj Mahajan (University of Oxford). A multi-agent perspective to AI 
  • Jun Wang (UCL). Multi-Agent Learning
  • Angeliki Lazaridou (DeepMind). Learned Communication 
  • Jakob Foerster (The Vector Institute). Self-Play and Zero-Shot Human AI Coordination in Hanabi
  • Wendelin Boehmer (TU Delft). Relative Overgeneralization in Distributed Control
  • Ferran Alet (MIT CSAIL). Modular Meta-Learning: Learning to build up knowledge through modularity 
  • Clemens Rosenbaum (ASAPP Inc). Modular & Compositional Computation
  • Melanie Mitchell (The Santa Fe Institute). Complexity: Concepts, Abstraction, and Analogy in Natural and Artificial Intelligence
  • Tomas Mikolov (CTU). Measuring growth of complexity
  • Nicholas Guttenberg (GoodAI & Cross Compass). Why Think? 
  • Julian Togelius (New York University). AI Learning Environments and PCG and Open-endedness and…the Extended Mind?
  • Kenneth Stanley (OpenAI). The Importance of Open-Endedness in AI and Machine Learning
  • Stanislav Fort (Stanford). Towards the Science of Deep Learning – The Loss Landscape Geometry
  • Luke Metz (Google Brain). Learned Learning Algorithms 
  • Jonathan Frankle (MIT). Understanding Neural Networks via Pruning 

Videos from the workshop

Jan Feyereisl and Marek Rosa (GoodAI). Meta-Learning and Multi-Agent Learning Workshop and Badger Introduction.

Luke Metz (Google Brain). Learned Learning Algorithms 

Deepak Pathak (CMU). Intelligence Without a Brain 

Anuj Mahajan (University of Oxford). A multi-agent perspective to AI 

Jakob Foerster (The Vector Institute). Self-Play and Zero-Shot Human AI Coordination in Hanabi

Wendelin Boehmer (TU Delft). Relative Overgeneralization in Distributed Control

Ferran Alet (MIT CSAIL). Modular Meta-Learning: Learning to build up knowledge through modularity 

Clemens Rosenbaum (ASAPP Inc). Modular & Compositional Computation

Melanie Mitchell (The Santa Fe Institute). Complexity: Concepts, Abstraction, and Analogy in Natural and Artificial Intelligence

Nicholas Guttenberg (GoodAI & Cross Compass). Why Think? 

Jonathan Frankle (MIT). Understanding Neural Networks via Pruning

Kenneth Stanley (OpenAI). The Importance of Open-Endedness in AI and Machine Learning

Stanislav Fort (Stanford). Towards the Science of Deep Learning – The Loss Landscape Geometry

When and where?

The workshop was split into two parts:

  • First part, August 10–14: one week for all participants, with presentations, working groups, and discussions
  • Second part, August 17 – September 4: three extra weeks of focused work (experiments) for those who could stay

Full list of talks and videos

Day 1

  • Workshop & Badger Introduction and Principia Badgerica (VIDEO) (Jan Feyereisl, GoodAI & Marek Rosa, GoodAI)
  • Related Research 1 – End-to-End Differentiable Self-Organising Systems (Ettore Randazzo, Google Research & Alexander Mordvintsev, Google Research)
  • Related Research 2 – Discovering Reinforcement Learning Algorithms (Junhyuk Oh, DeepMind)
  • Related Research 3 – Compositional Control: Intelligence Without a Brain (VIDEO) (Deepak Pathak, CMU)

Day 2

  • A multi-agent perspective to AI (VIDEO) (Anuj Mahajan, University of Oxford)
  • Multi-Agent Learning 2 (Jun Wang, UCL)
  • Learned Communication (Angeliki Lazaridou, DeepMind)
  • Big Picture Discussion: Benefits of Communication & Multi-Agentness
  • Self-Play and Zero-Shot Human AI Coordination in Hanabi (VIDEO) (Jakob Foerster, The Vector Institute)

Day 3

  • Relative Overgeneralization in Distributed Control (VIDEO) (Wendelin Boehmer, TU Delft)
  • Modular Meta-Learning: Learning to build up knowledge through modularity (VIDEO) (Ferran Alet, MIT)
  • Big Picture Discussion: Benefits of Modularity & Internal Structure
  • Modular & Compositional Computation (VIDEO) (Clemens Rosenbaum, ASAPP Inc)
  • Complexity: Concepts, Abstraction, and Analogy in Natural and Artificial Intelligence (VIDEO) (Melanie Mitchell, The Santa Fe Institute)

Day 4

  • Open-Endedness 1: Measuring growth of complexity (Tomas Mikolov, CIIRC)
  • Why Think? (VIDEO) (Nicholas Guttenberg, GoodAI & Cross Compass)
  • Minimum Viable Environments (VIDEO) (Julian Togelius, New York University)
  • Open-Endedness 2: The Importance of Open-Endedness in AI and Machine Learning (VIDEO) (Kenneth Stanley, OpenAI)
  • Big Picture Discussion: Open-Endedness, Deliberation, and Discovery of Algorithms

Day 5

  • The Science of Deep Learning 1 (VIDEO) (Stanislav Fort, Stanford)
  • Learned Learning Algorithms (VIDEO) (Luke Metz, Google Brain)
  • The Science of Deep Learning 2, Understanding Neural Networks via Pruning (Jonathan Frankle, MIT)
  • Big Picture Discussion: Generalization & Scalability

Code of conduct for the workshop

All attendees and speakers must abide by GoodAI’s Code of Conduct during the Workshop.

Recommended reading

The following is a list of recommended reading for the workshop. Items highlighted in bold are the most representative related work within each topic. Items marked with an asterisk (*) denote articles whose author presented at or participated in the workshop. Items in italics provide an informative summary of a particular topic or a comprehensive, unified view of the corresponding sub-area. Some items appear in more than one section because they are relevant to multiple sub-topics.

Badger
Meta-learning
Multi-Agent Learning & Emergent Communication
Modularity & Generalized Deep Learning
Gradual and Continual Learning
Inductive Biases, Symmetries and Invariances
Meta-control and Deliberation
Neural Message Passing & Graph Neural Networks
Neural Optimizers
Optimization, Adaptation and Generalization
Program Induction & Synthesis
Curriculum learning, open-endedness & Minimum Viable Environments
Further reading