Lock those weights.

Once a neural network has been trained on a task, training it on a second task tends to overwrite the weights that encoded the first task, erasing what was learned. This is what we call catastrophic forgetting.

Thankfully, DeepMind figured out a way to fix it.

The trick is to lock the weights that were important for solving the first task while training on a new task. Here is an animation from DeepMind's website. DeepMind was heavily inspired by the synaptic consolidation happening in our brain. In the brain, the plasticity (modification ability) of synapses that are essential to previous tasks is reduced as we learn.

By locking those weights, neural networks are able to learn a new task without forgetting the previous one(s).
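To make the idea concrete: in EWC the "locking" is soft rather than absolute. The new task's loss gets an extra quadratic penalty that makes it expensive to move weights that were important for the old task (importance is estimated with the diagonal of the Fisher information), while unimportant weights stay free to change. Here is a minimal NumPy sketch of that penalty; the function and variable names are illustrative, not from DeepMind's code.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current weights while training on the new task
    theta_star -- weights after training on the old task
    fisher     -- per-weight importance (diagonal Fisher estimate)
    lam        -- how strongly the old task is protected
    """
    theta = np.asarray(theta, dtype=float)
    theta_star = np.asarray(theta_star, dtype=float)
    fisher = np.asarray(fisher, dtype=float)
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy example: two weights; the first was important for the old task.
theta_star = np.array([1.0, -0.5])   # weights learned on task A
fisher     = np.array([10.0, 0.01])  # importance of each weight

# Moving the important weight by 0.5 costs far more
# than moving the unimportant one by the same amount:
cost_important   = ewc_penalty([1.5, -0.5], theta_star, fisher)  # 1.25
cost_unimportant = ewc_penalty([1.0,  0.0], theta_star, fisher)  # 0.00125
```

During training, this penalty is simply added to the new task's loss, so gradient descent is steered away from the old task's important weights without freezing them outright.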

From https://deepmind.com/blog/enabling-continual-learning-in-neural-networks/ Scores are normalized so that human performance equals 1.

Results

Elastic Weight Consolidation (EWC) works! As you can see in the graph, EWC performs significantly better than a standard learning algorithm (the "no penalty" curve).