
Bridging the Continual Learning Gap with Deep Artificial Neurons

October 26, 2021

(Left) Rolando Estrada, assistant professor of computer science and neuroscience, and (right) Blake Camp, Ph.D. student at Georgia State University.

Summary

  • Rolando Estrada and Blake Camp of Georgia State University receive GoodAI funding for their research on a continual learning framework with modular DANs (Deep Artificial Neurons).
  • Integrating GoodAI’s Badger architecture and principles, Estrada and his team aim to develop a continual semi-supervised learning system capable of joint supervised and unsupervised learning.

Georgia State’s Rolando Estrada and Blake Camp join GoodAI’s grant program with research on Deep Artificial Neurons [1] toward the development of a continual learning framework. Addressing a pressing gap in continual learning (CL), namely the inability to exploit unlabeled data without human guidance, they set out to develop a semi-supervised system capable of joint supervised and unsupervised lifelong learning.

While humans and most other organisms cope with an analogous data imbalance in the physical world (labeled feedback is scarce, raw sensory experience is abundant) by efficiently combining supervised and unsupervised learning, nearly all current CL systems are purely supervised. Taking direction from Badger principles and properties, Camp and Estrada propose to create a policy capable of switching between the two learning modes by replacing the scalar neurons of a standard artificial neural network (ANN) with Deep Artificial Neurons (DANs): small sub-networks, also called motifs, whose parameters are shared across neurons.
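To make this concrete, below is a minimal PyTorch sketch of one such layer. Everything here (the name DANLayer, the motif width, the two-linear-layer motif) is an illustrative assumption rather than the authors’ implementation; the point is only that every neuron runs the same shared sub-network and differs from its neighbors solely in its incoming and outgoing weights.

    import torch
    import torch.nn as nn

    class DANLayer(nn.Module):
        """Illustrative layer of Deep Artificial Neurons (hypothetical sketch).

        Every neuron is the same small sub-network (the shared "motif");
        neurons differ only in their incoming and outgoing weights.
        """
        def __init__(self, in_features, n_neurons, motif_dim=4):
            super().__init__()
            # Per-neuron incoming weights: project the layer input into each
            # neuron's private motif input (one weight slab per neuron).
            self.incoming = nn.Parameter(0.1 * torch.randn(n_neurons, in_features, motif_dim))
            # One motif shared by *all* neurons in the layer.
            self.motif = nn.Sequential(
                nn.Linear(motif_dim, motif_dim), nn.Tanh(),
                nn.Linear(motif_dim, motif_dim), nn.Tanh(),
            )
            # Per-neuron outgoing weights: collapse each motif output to a scalar.
            self.outgoing = nn.Parameter(0.1 * torch.randn(n_neurons, motif_dim))

        def forward(self, x):  # x: (batch, in_features)
            pre = torch.einsum('bi,nim->bnm', x, self.incoming)  # per-neuron inputs
            post = self.motif(pre)                               # shared motif, applied per neuron
            return torch.einsum('bnm,nm->bn', post, self.outgoing)

Stacking several such layers gives the multi-layer network of DANs referred to below, with the shared motif parameters as the natural target for meta-training.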

The training regime for the DAN parameters will focus on two main phases: (1) unsupervised learning only and (2) joint supervised + unsupervised learning (see the sketch after the list below). The investigation will experimentally verify whether a multi-layer network of DANs, meta-trained across both supervised and unsupervised tasks, can:

  1. Stabilize features during deployment across phases.
  2. Ensure that supervised and unsupervised features remain in sync with one another without retraining on old data.
  3. Mitigate catastrophic forgetting while using only standard backpropagation and off-the-shelf optimizers (e.g., SGD or Adam).
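As a rough illustration of that regime, the following PyTorch sketch runs an unsupervised-only phase followed by a joint phase, using only a stock optimizer as item 3 requires. The model, decoder, head, and both loss choices (reconstruction as the unsupervised signal, cross-entropy as the supervised one) are hypothetical placeholders, not the authors’ actual objectives.

    import torch
    import torch.nn.functional as F

    def train_two_phases(model, decoder, head, unsup_loader, joint_loader, steps=1000):
        # model: e.g. a stack of DAN layers; decoder and head are hypothetical helpers.
        params = list(model.parameters()) + list(decoder.parameters()) + list(head.parameters())
        opt = torch.optim.Adam(params, lr=1e-3)  # off-the-shelf optimizer, as in item 3

        # Phase 1: unsupervised only (reconstruction stands in for the unlabeled signal).
        for _, (x, _unused) in zip(range(steps), unsup_loader):
            loss = F.mse_loss(decoder(model(x)), x)
            opt.zero_grad(); loss.backward(); opt.step()

        # Phase 2: supervised + unsupervised, computed on the same features.
        for _, (x, y) in zip(range(steps), joint_loader):
            z = model(x)
            loss = F.cross_entropy(head(z), y) + F.mse_loss(decoder(z), x)
            opt.zero_grad(); loss.backward(); opt.step()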

Badger Architecture 

Closely aligned with Camp and Estrada’s research thesis is Badger’s architectural framework. Rather than relying on a single monolithic neural network, Badger is composed of many small modules (experts) that collaborate to solve whole tasks. This distributed design offers the researchers a means to investigate different meta-learning variants, with the potential to yield best practices for joint supervised and unsupervised continual learning within a single architecture.

As Camp and Estrada note, DAN motifs share many properties with Badger experts, such as distributed computation and considerable weight sharing: DAN motifs differ from one another only in their incoming and outgoing weights, roughly akin to how Badger experts differ only in their internal memory states. A primary difference lies in the learning paradigm, however: DANs learn via backpropagation with gradient descent, whereas Badger experts learn via activation-based updates.
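To make that contrast tangible, here is a deliberately simplified sketch of the two update styles applied to a single weight matrix. The Hebbian-style outer-product rule below is only a stand-in for activation-based learning; Badger’s actual mechanism is considerably richer.

    import torch

    # Gradient-based update (the DAN side): weights move along the loss gradient.
    def gradient_step(w, x, y, lr=0.01):
        loss = ((x @ w - y) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            w -= lr * w.grad
            w.grad.zero_()

    # Activation-based update (a crude Badger-flavored stand-in): weights change
    # as a function of local pre/post activations, with no loss and no gradient.
    def activation_step(w, x, lr=0.01):
        with torch.no_grad():
            post = torch.tanh(x @ w)
            w += lr * (x.t() @ post)  # Hebbian outer-product rule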

Implementing modular DANs on Badger will test ways to combine these two update mechanisms, yielding critical insights into their relative strengths and weaknesses.

As with joint supervised and unsupervised continual learning, Estrada and Camp’s explorations with modular DANs are relevant for the Badger research community as a whole. Balancing the supervised and unsupervised signals, and determining how those signals are distributed among the modules (Badger experts), is key to exploiting unlabeled data in the continual learning setting; a minimal sketch of such per-module balancing follows the list below. Their work speaks directly to ongoing inquiries into enabling Badger to solve tasks that standard ANNs cannot. More specifically, deeper investigation into the application of DANs will support the development of the Badger architecture in three key ways:

  1. Shed light on the relative merits of gradient vs. activation-based learning.
  2. Demonstrate best practices for implementing joint supervised and unsupervised continual learning in a single architecture.
  3. Provide an opportunity for exploring the impact of different meta-learning algorithms.
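As one hypothetical way to picture the balancing question raised above, the sketch below gives every expert its own learnable mixing weight between the supervised and unsupervised loss terms, so the balance is decided per module rather than globally. The class and its parameterization are our illustration, not part of Badger or the DAN paper.

    import torch
    import torch.nn as nn

    class SignalBalancer(nn.Module):
        """Per-expert mix of supervised and unsupervised losses (hypothetical)."""
        def __init__(self, n_experts):
            super().__init__()
            self.logit = nn.Parameter(torch.zeros(n_experts))  # one balance knob per expert

        def forward(self, sup_losses, unsup_losses):
            # sup_losses, unsup_losses: (n_experts,) per-module loss terms
            alpha = torch.sigmoid(self.logit)                  # in (0, 1) per expert
            return (alpha * sup_losses + (1 - alpha) * unsup_losses).sum()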

The GoodAI Grants initiative has awarded over $700,000 to support research groups worldwide working on problems related to the Badger architecture. We are proud to welcome Rolando Estrada and Blake Camp into the GoodAI community.


To date, most of what we consider general AI research has been done in academia and inside big corporations. We believe that humanity’s ultimate frontier, the creation of general AI, calls for a novel research paradigm fit for a mission-driven moonshot endeavor. GoodAI Grants is part of our effort to combine the best of both cultures: academic rigor and fast-paced innovation. We aim to create the right conditions to collaborate and cooperate across boundaries. Our goal is to accelerate progress towards general AI in a safe manner by putting emphasis on community-driven research, which in the future might play a key role in preventing the monopolization of AI technology (see AI race).

If you are interested in Badger Architecture and the work GoodAI does and would like to collaborate, check out our GoodAI Grants opportunities or our Jobs page for open positions!

For the latest from our blog sign up for our newsletter.


References

[1] Camp, B.; Mandivarapu, J. K.; Estrada, R. 2020. Continual Learning with Deep Artificial Neurons. arXiv:2011.07035.

