# GoodAI Grants

294,000

## Bayesian Online Meta-Learning (BOML) for continual & gradual learning

Pauching Yap, PhD candidate at University College London, has received a grant to work on inserting Badger principles into an existing state-of-the-art continual meta-learning framework.

## Meta-learning and combinatorial generalization

Ferran Alet and the team at MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have been awarded funding from GoodAI to develop AI that can extrapolate to novel inputs, allow for broader generalization, and scale from simple to complex problems by composing simple solutions.

## Creating a new framework for multi-agent AI systems

Martin Biehl and Araya have been awarded a grant which will be used to produce a new framework for multi-agent AI systems that aims to advance the field of multi-agent learning and inform the development of GoodAI’s Badger architecture.

## A new kind of open-ended environment for a new kind of artificial intelligence

Lisa Soros from Cross Labs in Japan has received a grant from GoodAI in order to build a complex open-ended environment that will encourage AI to learn in a similar way to humans.

## How novel behaviors arise in AI

GoodAI awarded a grant to Tomáš Mikolov and his team from the Czech Institute of Informatics, Robotics, and Cybernetics (CTU) for exploring how novel behaviors arise in artificial intelligence.

### Suggested topics

The grant money is being awarded to researchers or research groups for work that builds on and improves the Badger architecture. Below is a list of suggested topics; however, money may also be awarded for “wild card” submissions that don’t fit within these topics. Project proposals must demonstrate that they will push forward ideas related to the Badger architecture.

• Lifelong Learning
• Modular Meta-Learning
• Open-Endedness
• Meta-Learning
• Learned Optimizers
• Program Composition and Synthesis
• Optimization in Modular Systems
• Multi-Agent Reinforcement Learning
• Graph Neural Networks
• Learned Communication

### Research ideas

• How to prove that the learning/adaptation to a novel task is happening inside the inner loop, and not thanks to the outer loop?
• How to prove that the learned learning capabilities can expand in an open-ended manner (not converge), making recursive self-improvement possible?
• How to prove that the learned learning policy did not overfit to the training tasks, and will generalize to novel testing tasks?
• Research that investigates the potential benefits of multi-agent or communication-based approaches even when the task or environment can be solved by a single agent. When is taking a multi-agent approach to cognition inherently beneficial, and are there things which can be done with multiple agents which are hard to do with single agents? For example, systems that may be optimizing multiple conflicting loss functions.
• Research into dynamic goals, where the agent or agents in an environment do not receive an explicit external reward but must generate rewards for themselves or each other. How can such systems be evaluated, how can goals be discovered that satisfy some external intention even without providing explicit supervision with regards to that intention, what do the dynamics of goal-negotiation look like in artificial societies?
• Research into generalization outside of the bounds of the training set: how to make architectures or approaches which extrapolate, rather than interpolate? In particular, research into the generalization properties of learned learning algorithms – how to learn task-independent improvements, even far from the task distribution?
• Research into dynamically scalable architectures: how to make architectures which can take advantage of new computational or informational resources as they become available; how to make meta-learning approaches which scale in an open-ended way with additional evidence or contextual information; architectures which can take advantage of expanded computational budgets at inference time.
• Research into ‘opportunistic’ AI in reinforcement settings: what determines whether an approach or learned policy can take advantage of new unseen beneficial elements injected into their environment without needing to be retrained? What determines the flexibility of a policy to take advantage of new, easier paths or approaches to the goal? How to make AI that knows ‘why’ it does something a certain way and can adapt accordingly when those reasons change?
• Research into dual inheritance in the context of artificial intelligence: methods by which trained models (or models which learn in an online fashion) can exchange cultural information in order to obtain synergistic improvements or keep skills or knowledge alive; studies of sets of communicating or knowledge-exchanging agents to understand how cultural transmission might work in artificial contexts; studies into how cultural transmission can be improved or optimized as part of a training procedure.
• How to leverage the benefits of modular systems in a meta-learning setting? That is, is it possible to make a modular system that has better properties for meta-learning than a monolithic one (e.g. addressing catastrophic forgetting, or efficient re-combination of current knowledge in new domains)? How?

### How to apply for a grant

Applicants will apply via the GoodAI application form. In their proposals, applicants should:

• Suggest a concrete project that would contribute to the advancement of the Badger architecture
• Explain how the idea will be developed into a prototype or proof of concept
• Provide a simple budget and timetable, including how much time and money the project will need
• Attach a curriculum vitae for the researcher or the principal investigator
• Write the application in English

### Who is eligible for grants?

Grants are open to individuals and groups of individuals based in any country. Applicants can be independent researchers or affiliated with universities, non-profits, or businesses. However, successful applicants will have to sign a contract confirming that working on the project and accepting the grant do not conflict with their legal obligations to any third party.

### Duration, reporting, and payments

The expected duration of a project is between 12 and 24 months. Successful proposals will include a project timetable, and the money will be awarded in semi-annual installments over the duration of the project. Successful applicants will be required to report twice per year on the progress of their research, demonstrating that the project is progressing appropriately and remains consistent with the original proposal or with updates approved by GoodAI.