Grant money is awarded to researchers or research groups for work that builds on and improves the Badger architecture. Below is a list of suggested topics; however, money may also be awarded for “wild card” submissions that don’t fit within these topics. Project proposals must demonstrate that they will contribute to pushing forward the ideas behind the Badger architecture.
- Agent as a Society of Sub-Agents
- Cultural Evolution
- Artificial Life
- Lifelong Learning
- Modular Meta-Learning
- Recursive Self-Improvement
- Multi-agent Learning
- Multi-Agent Reinforcement Learning
- Learned Communication
- Emergent Languages
- Decentralized Learning
- Learned Optimizers
- Optimization in Modular Systems
Below are some of the published papers that have resulted from GoodAI Grants funding:
Interpreting systems as solving POMDPs: a step towards a formal understanding of agency
Emergence of Novelty in Evolutionary Algorithms
Noether Networks: Meta-Learning Useful Conserved Quantities
Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning
Functional Regularization for Reinforcement Learning via Learned Fourier Features
A large-scale benchmark for few-shot program induction and synthesis
Discovering and Achieving Goals via World Models
Surrogate Infeasible Fitness Acquirement FI-2Pop for Procedural Content Generation
Examples of open research questions:
- How can we prove that learning/adaptation to a novel task happens inside the inner loop, rather than being supplied by the outer loop?
- How can we prove that learned learning capabilities can expand in an open-ended manner (rather than converge), making recursive self-improvement possible?
- How can we prove that a learned learning policy has not overfit to the training tasks and will generalize to novel test tasks?
- Research that investigates the potential benefits of multi-agent or communication-based approaches even when the task or environment can be solved by a single agent. When is taking a multi-agent approach to cognition inherently beneficial, and are there things that can be done with multiple agents that are hard to do with single agents? One example is systems that optimize multiple conflicting loss functions.
- Research into dynamic goals, where the agent or agents in an environment do not receive an explicit external reward but must generate rewards for themselves or each other. How can such systems be evaluated? How can goals be discovered that satisfy some external intention, even without explicit supervision regarding that intention? What do the dynamics of goal negotiation look like in artificial societies?
- Research into generalization outside the bounds of the training set: how to make architectures or approaches that extrapolate rather than interpolate? In particular, research into the generalization properties of learned learning algorithms: how can task-independent improvements be learned, even far from the training task distribution?
- Research into dynamically scalable architectures: how to make architectures that can take advantage of new computational or informational resources as they become available; meta-learning approaches that scale in an open-ended way with additional evidence or contextual information; architectures that can exploit expanded computational budgets at inference time.
- Research into ‘opportunistic’ AI in reinforcement learning settings: what determines whether an approach or learned policy can take advantage of new, unseen beneficial elements injected into its environment without being retrained? What determines the flexibility of a policy to exploit new, easier paths or approaches to the goal? How can we build AI that knows ‘why’ it does something a certain way and can adapt accordingly when those reasons change?
- Research into dual inheritance in the context of artificial intelligence: methods by which trained models (or models which learn in an online fashion) can exchange cultural information in order to obtain synergistic improvements or keep skills or knowledge alive; studies of sets of communicating or knowledge-exchanging agents to understand how cultural transmission might work in artificial contexts; studies into how cultural transmission can be improved or optimized as part of a training procedure.
- How can the benefits of modular systems be leveraged in a meta-learning setting? That is, is it possible to build a modular system with better properties for meta-learning than a monolithic one (e.g. mitigating catastrophic forgetting, or efficiently recombining existing knowledge in new domains)? If so, how?
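To make the inner-loop/outer-loop distinction in the questions above concrete, here is a minimal illustrative sketch (not part of the Badger architecture itself) of a Reptile-style meta-learning setup on a hypothetical toy task family of linear regressions. The function names and the task family are our own assumptions for illustration: the inner loop adapts a weight to one task by plain gradient descent, while the outer loop only nudges the shared initialization toward each task's adapted weight.

```python
import random

def make_task():
    # Hypothetical toy task family: fit y = a * x for a random slope a.
    a = random.uniform(-2.0, 2.0)
    data = [(x, a * x) for x in (random.uniform(-1.0, 1.0) for _ in range(10))]
    return a, data

def inner_loop(w, data, lr=0.1, steps=5):
    # Inner loop: ordinary gradient descent on a single task's data,
    # starting from the (possibly meta-learned) initialization w.
    for _ in range(steps):
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def outer_loop(meta_w=0.0, meta_lr=0.5, iterations=200):
    # Outer loop (Reptile-style meta-update): nudge the shared
    # initialization toward each task's inner-loop-adapted weight.
    for _ in range(iterations):
        _, data = make_task()
        adapted = inner_loop(meta_w, data)
        meta_w += meta_lr * (adapted - meta_w)
    return meta_w
```

In this toy setting, "learning happens in the inner loop" if a frozen `meta_w` still adapts to a fresh task via `inner_loop` alone; testing on slopes outside the training range would probe the generalization question above.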
How to apply for a grant
Applicants will apply via the GoodAI application form. Applicants will:
- Propose a concrete project that would contribute to the advancement of the Badger architecture
- Explain how the idea will be developed into a prototype or proof of concept
- Include a simple budget and timetable, including how much time and money will be needed for the project
- Attach a Curriculum Vitae for the researcher or Principal Investigator
- Send their application in English
**Please note that we are currently not providing new grants.** However, you can send speculative applications and subscribe to our newsletter to be notified once we reopen for submissions.
Who is eligible for grants?
Grants are open to individuals and groups of individuals based in any country. Applicants can be independent researchers or affiliated with universities, non-profits, or businesses. However, successful applicants will have to sign a contract confirming that working on the project, and accepting the grant, do not conflict with their legal obligations to any third party. Successful applicants must also ensure that GoodAI acquires a non-exclusive license to exploit, modify, and further develop all outputs of the research project without limitation.
Duration, reporting, and payments
The expected duration of a project is between 12 and 24 months. Successful proposals will include a project timetable, and the money will be awarded in semi-annual installments over the duration of the project. Successful applicants will be required to report twice per year on the progress of their research and must demonstrate that the project is progressing appropriately and remains consistent with the original proposal, or with updates approved by GoodAI.
Terms and conditions
Please read the full Terms and Conditions before sending an application.
To apply, please download the form below and return it with a copy of the Principal Researcher’s CV.
GoodAI Grants Application Form