This interdisciplinary workshop will bring world-renowned AI experts to Prague to advance the fields of meta-learning and multi-agent learning. Because the workshop is built around focused working groups, capacity is limited and attendance is invitation-only, but please contact us (below) if you are interested in joining.
In this workshop, we aim to answer some critical questions:
- Can we build a general AI agent composed of many (homogeneous) units, meta-learned to communicate so that the agent can learn new tasks in a continuous and open-ended manner?
- Can these units be meta-learned to communicate learned credit, form new topologies, modulate other units, grow new units, learn new motivations, and more?
- Can this lead to an agent that reshapes its internal structure to better meet future challenges?
The workshop will take an interdisciplinary approach, drawing on machine learning, meta-learning, artificial life, network science, dynamical systems, complexity science, collective computation, social intelligence, creativity and communication, and more. The idea for this workshop grew out of the questions we are tackling while working on our Badger Architecture.
Core areas of focus
- Can we frame policy search as a multi-agent learning problem? (That is, learn how units can coordinate to learn new tasks together.)
- Can we frame it as a meta-learning problem?
- Can we frame it simply as another deep learning architecture, e.g., an RNN with shared weights?
- Minimum Viable Environment: what is the most minimalistic environment that will support learning a general learning algorithm, one that generalizes to more complex environments of interest to humans?
- How can we add intrinsic motivation so that the agent drives itself, without explicit goals, towards open-ended development?
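One way to make the "RNN with shared weights" framing above concrete is a toy sketch (our own illustrative code, not the actual Badger implementation; all names and sizes here are hypothetical): N homogeneous units run the same recurrent update with one shared set of weights and interact only through aggregated messages, so the number of units can change without retraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: an agent made of N homogeneous units. Every unit runs
# the SAME update (shared weights); units differ only in their hidden state
# and in the messages they receive.
N, H, M = 4, 8, 8                   # number of units, hidden size, message size

W_h = rng.normal(0, 0.1, (H, H))    # shared recurrent weights
W_m = rng.normal(0, 0.1, (M, H))    # shared incoming-message -> hidden weights
W_out = rng.normal(0, 0.1, (H, M))  # shared hidden -> outgoing-message weights

def step(hidden):
    """One communication round: each unit broadcasts a message, reads the
    mean of all messages, and applies the shared recurrent update."""
    messages = np.tanh(hidden @ W_out)             # (N, M) outgoing messages
    inbox = messages.mean(axis=0, keepdims=True)   # simple mean aggregation
    return np.tanh(hidden @ W_h + inbox @ W_m)     # identical update for all units

hidden = rng.normal(0, 1, (N, H))   # per-unit internal state
for _ in range(5):
    hidden = step(hidden)

# Because the weights are shared, changing the unit count changes only the
# state shape -- the same step() works for 10 units as for 4:
hidden_bigger = rng.normal(0, 1, (10, H))
for _ in range(5):
    hidden_bigger = step(hidden_bigger)
print(hidden.shape, hidden_bigger.shape)  # (4, 8) (10, 8)
```

In this framing, meta-learning would search over the shared weights so that the communication protocol itself implements a learning algorithm; the sketch only shows the structural point that one weight set serves any number of units.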
When and where?
We’re preparing to hold the workshop in a “hybrid mode”, with remote participation or physical attendance at the GoodAI headquarters in Prague (pictured above), depending on participants’ preferences and the COVID situation. We’re fully prepared to run the workshop 100% remotely if pandemic conditions require it.
The workshop will be split into two parts:
- First part, August 10–15: one week for all participants; presentations, working groups, and discussions.
- Second part, August 16 – September 5: an extra three weeks of focused work (experiments), for people who can stay.
Invited participants will be reimbursed for travel and accommodation expenses during their visit. Click the button below to reach out to us if you would be interested in attending.
Recommended reading
- Badger paper: BADGER: Learning to (Learn [Learning Algorithms] through Multi-Agent Communication)
- Recent badger blog posts: here and here.
- Doing more with less: https://cocosci.princeton.edu/papers/doing-more-with-less.pdf
- Meta-learners’ learning dynamics: https://arxiv.org/abs/1905.01320
- Understanding and correcting pathologies: https://arxiv.org/abs/1810.10180
- Meta-learning of Sequential Strategies: https://arxiv.org/abs/1905.03030
- Learning to Learn without Gradient Descent by Gradient Descent: https://arxiv.org/abs/1611.03824
- Improving Generalization in Meta Reinforcement Learning using Learned Objectives: https://arxiv.org/abs/1910.04098
- Emergence of language: https://arxiv.org/abs/1703.04908
- MADDPG: https://arxiv.org/abs/1706.02275
- CommNet: https://arxiv.org/pdf/1605.07736.pdf
- ATOC: https://arxiv.org/abs/1805.07733
- Decentralized Multi-Agent Actor-Critic with Generative Inference: https://arxiv.org/abs/1910.03058
- MAVEN: Multi-Agent Variational Exploration: https://deepai.org/publication/maven-multi-agent-variational-exploration
- Recurrent Independent Mechanisms: https://arxiv.org/abs/1909.10893
- Multi-agent actor centralized-critic with communication: https://www.sciencedirect.com/science/article/abs/pii/S0925231220301314
- Learning attentional communication for multi-agent cooperation: https://papers.nips.cc/paper/7956-learning-attentional-communication-for-multi-agent-cooperation.pdf
- Learning to communicate with deep multi-agent reinforcement learning: http://papers.nips.cc/paper/6042-learning-to-communicate-with-deep-multi-agent-reinforcement-learning.pdf
- Learning with opponent-learning awareness: https://dl.acm.org/ft_gateway.cfm?id=3237408&type=pdf
- Network of Evolvable Neural Units: Evolving to Learn at a Synaptic Level: https://arxiv.org/abs/1912.07589
Curriculum learning, open-endedness
- Object-Oriented Curriculum Generation for Reinforcement Learning: https://www.researchgate.net/publication/323738119_Object-Oriented_Curriculum_Generation_for_Reinforcement_Learning
- Evolving Structures in Complex Systems: https://arxiv.org/abs/1911.01086
- Enhanced POET: https://arxiv.org/abs/2003.08536
- GTN: https://arxiv.org/abs/1912.07768
- CommAI: Evaluating the first steps towards a useful general AI: https://arxiv.org/abs/1701.08954
- Pommerman: A multi-agent playground: https://arxiv.org/abs/1809.07124
- The Early Phase of Neural Network Training: https://arxiv.org/abs/2002.10365
- AI-GAs: https://arxiv.org/abs/1905.10985
- BiCNET: https://arxiv.org/abs/1703.10069
- LSC: https://arxiv.org/pdf/2002.04235.pdf
- Graph Convolutional RL: https://arxiv.org/pdf/1810.09202.pdf
- TarMAC: https://arxiv.org/pdf/1810.11187.pdf