Meta-Learning & Multi-Agent Learning Workshop 2020

This interdisciplinary workshop will bring world-renowned experts in the AI field to Prague in order to advance the field of meta-learning and multi-agent learning. Due to the format of the workshop (focused working groups), capacity is limited and the workshop is invitation-only, but please contact us (below) if you would be interested in joining.

Oranžérie: the office of GoodAI and location of the workshop

Workshop description

In this workshop, we aim to answer some critical questions:

  • Can we build a general AI agent composed of many (homogeneous) units that are meta-learned to communicate, so that the agent can learn new tasks in a continuous and open-ended manner?
  • Can these units be meta-learned to communicate learned credit, form new topologies, modulate other units, grow new units, learn new motivations, and more?
  • Can it lead to an agent that adapts its internal structures in order to better adapt to future challenges?

The workshop will take an interdisciplinary approach, drawing on machine learning, meta-learning, artificial life, network science, dynamical systems, complexity science, collective computation, social intelligence, creativity and communication, and more. The idea for this workshop came from the kinds of questions we encounter while working on our Badger Architecture.

Core areas of focus

  1. Can we frame policy search as a multi-agent learning problem? (Learn how units can coordinate to learn new tasks together.)
  2. Can we frame it as a meta-learning problem?
  3. Can we frame it as just another Deep Learning architecture, e.g. RNN with shared weights?
  4. Minimum Viable Environment: what is the most minimal environment that will support learning a general learning algorithm, one that generalizes to more complex environments interesting for humans?
  5. How can we add intrinsic motivation, so the agent drives itself, without explicit goals, towards open-ended development?
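To make focus areas 1–3 concrete, here is a minimal sketch of the "RNN with shared weights" framing: many homogeneous units share one set of recurrent weights and differ only in their hidden state, coordinating purely through exchanged messages. The unit counts, sizes, weight names, and mean-pooled communication below are illustrative assumptions for this sketch, not the actual Badger Architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_UNITS, HIDDEN, MSG = 8, 16, 4          # illustrative sizes (assumptions)

# A single shared parameter set for ALL units ("RNN with shared weights").
W_h   = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # recurrent update
W_in  = rng.standard_normal((HIDDEN, MSG)) * 0.1     # reads the incoming message
W_out = rng.standard_normal((MSG, HIDDEN)) * 0.1     # writes the outgoing message

# Per-unit hidden state is the ONLY thing that differs between units.
hidden = rng.standard_normal((N_UNITS, HIDDEN)) * 0.1

def step(hidden):
    """One communication round: every unit broadcasts a message, receives
    the mean of all messages, and updates its state with the shared weights."""
    messages = np.tanh(hidden @ W_out.T)   # (N_UNITS, MSG)
    inbox = messages.mean(axis=0)          # naive all-to-all pooling (assumption)
    return np.tanh(hidden @ W_h.T + inbox @ W_in.T)

for _ in range(10):                        # ten communication rounds
    hidden = step(hidden)

print(hidden.shape)  # (8, 16)
```

Because all units share one parameter set, adding or removing units leaves the number of learned parameters unchanged; in this framing, meta-learning would search over the shared weights while per-unit hidden states carry task-specific adaptation.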

When and where?

We’re preparing to hold the workshop in a “hybrid mode” with remote participation or physical attendance at the GoodAI headquarters in Prague (pictured above), depending on participants’ preference and the COVID situation. We’re fully prepared to run the workshop 100% remotely if pandemic conditions demand it.

The workshop will be split into two parts:

  • First part, August 10–14: one week for all participants, with presentations, working groups, and discussions
  • Second part, August 17 – September 4: three additional weeks of focused work (experiments) for those who can stay

Invited participants will be reimbursed for travel and accommodation expenses during their visit.



PDF Schedule

Monday 10 August

(All times are relative to Prague – CEST) 

16:30 – 17:45: Workshop & Badger Introduction and Principia Badgerica (VIDEO) (Jan Feyereisl, GoodAI & Marek Rosa, GoodAI)

18:00 – 18:45: Related Research 1 – End-to-End Differentiable Self-Organising Systems (Ettore Randazzo, Google Research & Alexander Mordvintsev, Google Research)
18:45 – 19:15: Discussion on presented topic

20:00 – 20:45: Related Research 2 – Discovering Reinforcement Learning Algorithms (Junhyuk Oh, DeepMind)
20:45 – 21:15: Discussion on presented topic
21:15 – 22:00: Breakout Session

22:15 – 23:00: Related Research 3 – Compositional Control: Intelligence Without a Brain (VIDEO) (Deepak Pathak, CMU)
23:00 – 23:30: Discussion on presented topic


Tuesday 11 August

(All times are relative to Prague – CEST) 

10:30 – 11:15: A multi-agent perspective to AI (VIDEO) (Anuj Mahajan, University of Oxford)
11:15 – 11:45: Discussion on Multi-Agent learning

12:00 – 12:45: Multi-Agent Learning 2 (Jun Wang, UCL)
12:45 – 13:15: Discussion on Multi-Agent Learning
13:15 – 14:00: Breakout Session

15:00 – 15:45: Learned Communication (Angeliki Lazaridou, DeepMind)
15:45 – 16:15: Discussion on Communication 

16:30 – 17:45: Big Picture Discussion: Benefits of Communication & Multi-Agentness

18:00 – 18:45: Self-Play and Zero-Shot Human AI Coordination in Hanabi (VIDEO) (Jakob Foerster, The Vector Institute)
18:45 – 19:15: Discussion on Multi-Agent Learning


Wednesday 12 August

(All times are relative to Prague – CEST) 

15:00 – 15:45: Relative Overgeneralization in Distributed Control (VIDEO) (Wendelin Boehmer, University of Oxford)
15:45 – 16:15: Discussion on NMP/Graph NNs

16:30 – 17:15: Modular Meta-Learning: Learning to build up knowledge through modularity (VIDEO) (Ferran Alet, MIT)
17:15 – 17:45: Discussion on Modular Meta-Learning

18:00 – 19:15: Big Picture Discussion: Benefits of Modularity & Internal Structure

20:00 – 20:45: Modular & Compositional Computation (VIDEO) (Clemens Rosenbaum, ASAPP Inc)
20:45 – 21:15: Discussion on Modular & Compositional Computation
21:15 – 22:00: Breakout Session

22:15 – 23:00: Complexity: Concepts, Abstraction, and Analogy in Natural and Artificial Intelligence (VIDEO) (Melanie Mitchell, The Santa Fe Institute)
23:00 – 23:30: Discussion on Complexity



Thursday 13 August

(All times are relative to Prague – CEST)

10:30 – 11:15: Open-Endedness 1: Measuring growth of complexity (Tomas Mikolov, CIIRC)
11:15 – 11:45: Discussion on Open-Endedness

12:00 – 12:45: Why Think? (Nicholas Guttenberg, GoodAI & Cross Compass)
12:45 – 13:15: Discussion on Deliberation & Accessible Information
13:15 – 14:00: Breakout Session

15:00 – 15:45: Minimum Viable Environments (Julian Togelius, New York University)
15:45 – 16:15: Discussion on Minimum Viable Environments

16:30 – 17:15: Open-Endedness 2: The Importance of Open-Endedness in AI and Machine Learning (Kenneth Stanley, OpenAI)
17:15 – 17:45: Discussion on Open-Endedness

18:00 – 19:15: Big Picture Discussion: Open-Endedness, Deliberation, and Discovery of Algorithms



Friday 14 August

(All times are relative to Prague – CEST)

12:00 – 12:45: The Science of Deep Learning 1 (Stanislav Fort, Stanford)
12:45 – 13:15: Discussion on the Science of Deep Learning
13:15 – 14:00: Breakout Session

15:00 – 15:45: Learned Learning Algorithms (Luke Metz, Google Brain)
15:45 – 16:15: Discussion on Meta-Learning

16:30 – 17:15: The Science of Deep Learning 2, Understanding Neural Networks via Pruning (Jonathan Frankle, MIT)
17:15 – 17:45: Discussion on the Science of Deep Learning

18:00 – 19:15: Big Picture Discussion: Generalization & Scalability

20:00 – 20:45: Breakout Session
20:45 – 21:15: Working Groups – Experiments & Hypotheses
21:15 – 22:00: Workshop Conclusion

Code of conduct for the workshop

All attendees and speakers must abide by GoodAI’s Code of Conduct during the Workshop.

Recommended reading

The following is a list of recommended reading for the workshop. Items in bold represent the most representative related work within each topic. Items marked with an asterisk (*) are articles whose author will be presenting at or participating in the workshop. Items in italics provide an informative summary of a particular topic, or a comprehensive, unified view of the corresponding sub-area. Some items appear in more than one section because they are relevant to more than one sub-topic.



Multi-Agent Learning & Emergent Communication

Modularity & Generalized Deep Learning

Gradual and Continual Learning

Inductive Biases, Symmetries and Invariances

Meta-control and Deliberation

Neural Message Passing & Graph Neural Networks

Neural Optimizers

Optimization, Adaptation and Generalization

Program Induction & Synthesis

Curriculum Learning, Open-Endedness & Minimum Viable Environments

Further reading