Nicholas Soures Ph.D. Dissertation Defense: Lifelong Learning in Spiking Networks Through Neural Plasticity

Advisor: Dr. Dhireesha Kudithipudi

Ph.D. in Electrical and Computer Engineering

Rochester Institute of Technology

Tuesday, November 14, 2023, 3:00 p.m. EST

Via Zoom Link: https://rit.zoom.us/j/93805146626

Lifelong learning, the ability to learn in real time from continually changing data distributions, is a significant challenge in artificial intelligence. The central issue is that new learning tends to interfere with previously acquired memories, a phenomenon known as catastrophic forgetting. Several related challenges accompany it, such as knowledge transfer and adaptation, few-shot learning, and processing noisy data. Since humans do not appear to suffer from this problem, researchers have applied biologically inspired techniques to address it, including metaplasticity, synaptic consolidation, and memory replay. Although these approaches have seen some success in traditional neural networks, there has been limited exploration of how to support lifelong learning in spiking networks.
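
As an illustration of the memory-replay idea mentioned above, the sketch below keeps a small reservoir-sampled buffer of past samples that can be interleaved with new training data. The ReplayBuffer class, its capacity, and the sampling scheme are illustrative assumptions for this announcement, not details from the dissertation.

```python
import random

class ReplayBuffer:
    """Tiny reservoir-sampled memory for interleaving old and new samples."""
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        """Keep every sample seen so far with equal probability."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)  # reservoir sampling step
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        """Draw k stored samples to mix into the current training batch."""
        return random.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer()
for x in range(1000):      # stand-in for a stream of training samples
    buf.add(x)
print(buf.sample(8))       # replayed samples to interleave with new data
```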

In general, spiking neural networks are efficient in resource-constrained environments due to their inherently sparse, asynchronous, and low-precision computation. However, spiking networks require a different set of learning rules from traditional rate-based models. A few works highlight that simple Hebbian rules can support aspects of lifelong learning, but there is a significant gap in understanding how spiking networks can solve the complex tasks encountered in a lifelong learning setting. We propose to address this gap with compositional biological mechanisms that allow these networks to overcome the limitations of simple Hebbian models.
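
To make the kind of Hebbian learning rule referenced above concrete, here is a minimal leaky integrate-and-fire layer with a pair-based STDP update. All parameter values and the network size are illustrative assumptions rather than anything from the defense abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and constants (not taken from the dissertation)
n_in, n_out = 10, 1            # input and output neuron counts
T, dt = 200, 1.0               # simulation steps and step size (ms)
tau_m = 20.0                   # membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0   # spike threshold and reset potential
tau_trace = 20.0               # STDP trace time constant (ms)
a_plus, a_minus = 0.01, 0.012  # potentiation / depression magnitudes

w = rng.uniform(0.0, 0.5, size=(n_out, n_in))  # synaptic weights
v = np.zeros(n_out)            # membrane potentials
x_pre = np.zeros(n_in)         # presynaptic spike traces
x_post = np.zeros(n_out)       # postsynaptic spike traces

for t in range(T):
    pre_spikes = (rng.random(n_in) < 0.05).astype(float)  # Poisson-like input

    # Leaky integrate-and-fire dynamics: leak, integrate input, spike, reset
    v += dt / tau_m * (-v) + w @ pre_spikes
    post_spikes = (v >= v_thresh).astype(float)
    v = np.where(post_spikes > 0, v_reset, v)

    # Exponentially decaying traces record recent spike activity
    x_pre += -dt / tau_trace * x_pre + pre_spikes
    x_post += -dt / tau_trace * x_post + post_spikes

    # Pair-based STDP: potentiate when a post-spike follows recent pre-activity,
    # depress when a pre-spike follows recent post-activity
    w += a_plus * np.outer(post_spikes, x_pre) - a_minus * np.outer(x_post, pre_spikes)
    w = np.clip(w, 0.0, 1.0)   # keep weights bounded

print("final weights:", w.round(3))
```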

In this work, we present NACHOS, a model that integrates multiple biologically inspired mechanisms to promote lifelong learning at both the synapse level and the network level. At the synapse level, NACHOS uses regularization and homeostatic mechanisms to protect information learned over time. At the network level, NACHOS introduces heterogeneous learning rules and a dynamic architecture to distribute processing across tasks without impeding learning. These mechanisms work in tandem to boost the performance of baseline spiking networks by roughly 3x in a lifelong learning scenario. Unlike Hebbian approaches, they also scale beyond a single layer. The key features of NACHOS are that (a) it operates without task knowledge, (b) it is evaluated on online continual learning, (c) it does not grow over time, and (d) it has better energy-accuracy trade-offs than existing rate-based models. This enables the model to be deployed in the wild, where the AI system is not aware of task switching. NACHOS is demonstrated on several lifelong learning scenarios, where it matches the performance of state-of-the-art non-spiking lifelong learning models. In summary, we explore the role of spiking networks in lifelong learning and provide a blueprint for their adoption in resource-constrained environments.
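
The synapse-level protection described above is in the spirit of consolidation-style regularization, where updates to important synapses are damped. Below is a minimal, hypothetical sketch of that general idea; the consolidated_update function, the importance proxy, and the lam parameter are assumptions for illustration, not the actual NACHOS rule.

```python
import numpy as np

def consolidated_update(w, dw, importance, lam=1.0):
    """Scale a proposed weight change by per-synapse importance.

    Synapses with high accumulated importance receive a smaller
    effective learning rate, protecting previously learned
    information. Hypothetical sketch, not the actual NACHOS rule.
    """
    return w + dw / (1.0 + lam * importance)

rng = np.random.default_rng(0)
w = np.zeros(5)
importance = np.zeros(5)
for step in range(100):
    dw = rng.normal(0.0, 0.1, size=5)  # stand-in for a learning-rule update
    w = consolidated_update(w, dw, importance)
    importance += np.abs(dw)           # crude proxy for synaptic importance
print("weights:", w.round(3))
```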


Contact
Cami Konecky
585-475-6990
Event Snapshot
When and Where
November 14, 2023
3:00 pm - 6:00 pm
Room/Location: https://rit.zoom.us/j/93805146626
Who

Open to the Public

Interpreter Requested?

No

Topics
research