- Researchers at MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have received funding from research and development company GoodAI to work on the problem of how AI can solve new tasks efficiently by leveraging previous knowledge. The research will be led by PhD student Ferran Alet.
- The researchers intend to combine self-improving modular meta-learning with a polynomial-time inner loop, allowing AI models to generalize to new tasks from small amounts of experience by mastering combinatorial generalization.
- The aim is to develop AI that can extrapolate to novel inputs, allow for broader generalization, and scale from simple problems with simple solutions to complex problems with compositions of simple solutions.
Self-improving artificial intelligence that can learn new tasks from small amounts of data is a crucial step for the advancement of strong artificial intelligence. Researchers at MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) are being funded by research and development company GoodAI to tackle this problem.
The grant is part of the ongoing GoodAI Grants initiative, which has awarded over $600,000 so far. The initiative supports research groups across the world that are solving problems related to GoodAI’s Badger architecture. The vision is that each research grant, along with the work of the GoodAI research team, will contribute to basic AI research and together fill in some of the gaps in the roadmap to advanced, increasingly human-like AI.
Marek Rosa, CEO and CTO of GoodAI, said: “Through these grants, we are aiming to build a community of researchers across the world who are working towards a common goal. We are very pleased to be collaborating with the team at MIT-CSAIL and believe their work on modular meta-learning, few-shot learning, and generalization has close links to our Badger architecture and will be of great benefit to the advancement of artificial intelligence.”
The researchers intend to combine self-improving modular meta-learning with a polynomial-time inner loop, which will allow AI models to generalize to new tasks from small amounts of experience by mastering combinatorial generalization. Most approaches to this problem use a single monolithic neural network whose architecture remains fixed and adapts slowly. Modular meta-learning will instead allow the team to train a collection of neural modules that can adapt and reorganize to solve novel problems, while keeping the search over module connectivity tractable.
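To make the idea concrete, here is a minimal sketch of the modular approach, not the actual method or code from the MIT-CSAIL project: toy functions stand in for meta-learned neural modules, and the inner loop searches over compositions of modules to fit a new task from just a few examples. All names (`MODULES`, `fit_structure`, the module set) are hypothetical.

```python
import itertools

# Hypothetical library of simple "modules"; in modular meta-learning these
# would be small neural networks whose weights are shared across tasks.
MODULES = {
    "add1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def compose(names):
    """Chain the named modules left to right into a single function."""
    def f(x):
        for n in names:
            x = MODULES[n](x)
        return x
    return f

def fit_structure(examples, max_depth=3):
    """Inner loop: search over module compositions (the 'structure') and
    return the one with the lowest squared error on a few examples."""
    best, best_err = None, float("inf")
    for depth in range(1, max_depth + 1):
        for names in itertools.product(MODULES, repeat=depth):
            f = compose(names)
            err = sum((f(x) - y) ** 2 for x, y in examples)
            if err < best_err:
                best, best_err = names, err
    return best, best_err

# A "new task", y = 2*(x + 1), specified by only three examples.
examples = [(1, 4), (2, 6), (3, 8)]
structure, err = fit_structure(examples)  # finds ("add1", "double")
```

Note that this exhaustive search grows exponentially with composition depth, which is exactly the tractability problem the project’s polynomial-time inner loop is meant to address.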
This ability to adapt is expected to bring two main research advantages. First, it should allow for broader generalization, much as human language can express an infinite number of ideas with a relatively limited vocabulary. Second, it should provide a path to scalable continual learning, where the model can scale from simple problems with simple solutions to complex problems with compositions of simple solutions.
The research will be led by MIT-CSAIL Professors Leslie Pack Kaelbling and Tomas Lozano-Perez, and PhD student Ferran Alet.
Ferran Alet said: “There is an exponential number of ways of combining known concepts in novel ways, which allows for very broad, flexible generalization. At the same time, this exponential increase in complexity makes it hard to search through all the space of possibilities. Our project aims to find new ways of doing this inference effectively to build systems that generalize both broadly and quickly.”