By Petr Hlubuček
At GoodAI, we’re interested in multi-agent architectures that can learn to rapidly adapt to new and unseen environments. In our proposed architecture (for more, see Badger), we expect behavior and adaptation to be learned through communication among homogeneous units inside a single agent, allowing for better generalization.
We believe that the right curriculum is key to the search for an effective learning policy inside the agent’s brain. For this reason, we were curious to experiment with automated curricula generated in a differentiable way, and we partially implemented Generative Teaching Networks (GTN) by Such et al. (Uber AI Labs, 2019).
We were motivated to:
- create tasks and environments for our agent with gradually increasing difficulty,
- analyze existing GTN tasks/environments for graduality properties – i.e., whether learning one task makes it easier to learn another.
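To give a flavor of what “curricula generated in a differentiable way” means, here is a minimal, hypothetical PyTorch toy sketch of the GTN idea (not our actual implementation): a generator holds trainable synthetic data, a fresh learner takes a few inner SGD steps on that data, and the learner’s loss on real data is backpropagated through the inner loop into the generator. All names, sizes, and hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Generator: trainable synthetic (x, y) pairs (illustrative; real GTNs
# use a neural network conditioned on noise to produce training data).
gen_x = torch.randn(8, 1, requires_grad=True)
gen_y = torch.randn(8, 1, requires_grad=True)

# Real task the learner is ultimately evaluated on: y = 2x.
real_x = torch.randn(32, 1)
real_y = 2.0 * real_x

meta_opt = torch.optim.Adam([gen_x, gen_y], lr=0.05)
inner_lr = 0.1
losses = []

for meta_step in range(200):
    # Fresh learner each meta-iteration (a single linear weight, for clarity).
    w = torch.zeros(1, 1, requires_grad=True)

    # Inner loop: functional SGD with create_graph=True so gradients
    # flow back through the updates into gen_x / gen_y.
    for _ in range(5):
        inner_loss = F.mse_loss(gen_x @ w, gen_y)
        (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w = w - inner_lr * g

    # Outer loop: evaluate the trained learner on real data and
    # update the generator's synthetic data to make training work better.
    meta_loss = F.mse_loss(real_x @ w, real_y)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    losses.append(meta_loss.item())
```

Over the meta-steps, `losses` typically decreases: the generator learns synthetic data whose few inner steps leave the learner close to the real task’s solution. This second-order backpropagation through learner updates is the core mechanism that makes the curriculum itself differentiable.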
I would like to thank my colleague Martin Poliak for helping me with this project. This is the first PyTorch implementation of GTN, and we are happy to contribute it to the AI community’s efforts.
We’d love to hear your feedback. If you’re interested in discussing curricula that facilitate gradual and continuous learning, or in working with us on general AI, get in touch!
Read the full blog and get our GTN implementation here.