Fast Parametric Learning with Activation Memorization

Jack Rae, Chris Dyer, Peter Dayan, Timothy Lillicrap
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4228-4237, 2018.

Abstract

Neural networks trained with backpropagation often struggle to identify classes that have been observed a small number of times. In applications where most class labels are rare, such as language modelling, this can become a performance bottleneck. One potential remedy is to augment the network with a fast-learning non-parametric model which stores recent activations and class labels into an external memory. We explore a simplified architecture where we treat a subset of the model parameters as fast memory stores. This can help retain information over longer time intervals than a traditional memory, and does not require additional space or compute. In the case of image classification, we display faster binding of novel classes on an Omniglot image curriculum task. We also show improved performance for word-based language models on news reports (GigaWord), books (Project Gutenberg) and Wikipedia articles (WikiText-103) - the latter achieving a state-of-the-art perplexity of 29.2.
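As a rough illustration of the mechanism the abstract describes (treating the softmax output-weight rows as fast memory slots that are written with recent activations, most aggressively for rarely seen classes), here is a minimal numpy sketch. The class name, the count-based mixing schedule, and all hyperparameters are illustrative assumptions, not the paper's exact update rule.

import numpy as np

class HebbianSoftmaxSketch:
    """Softmax output layer whose weight rows double as fast memory slots."""

    def __init__(self, hidden_dim, num_classes, lr=0.1, min_mix=0.02, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((num_classes, hidden_dim))  # output embeddings
        self.counts = np.zeros(num_classes)   # how often each class has been observed
        self.lr = lr                          # slow (parametric) learning rate
        self.min_mix = min_mix                # floor on the fast-write strength

    def update(self, h, y):
        """One training step for a single (hidden activation h, label y) pair."""
        z = self.W @ h
        p = np.exp(z - z.max())
        p /= p.sum()                          # softmax probabilities

        grad = np.outer(p, h)
        grad[y] -= h                          # cross-entropy gradient w.r.t. W
        W_sgd = self.W - self.lr * grad       # usual slow, gradient-based update

        # Fast non-parametric write: pull the observed class's row towards the
        # current activation, strongly when the class is rare and weakly once
        # it has been seen many times.
        self.counts[y] += 1
        mix = max(1.0 / self.counts[y], self.min_mix)
        self.W = W_sgd
        self.W[y] = mix * h + (1.0 - mix) * W_sgd[y]
        return p

Because the fast write reuses the existing output parameters rather than a separate key-value store, it adds no extra memory or compute, which is the simplification the abstract highlights.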

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-rae18a,
  title     = {Fast Parametric Learning with Activation Memorization},
  author    = {Rae, Jack and Dyer, Chris and Dayan, Peter and Lillicrap, Timothy},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4228--4237},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/rae18a/rae18a.pdf},
  url       = {https://proceedings.mlr.press/v80/rae18a.html},
  abstract  = {Neural networks trained with backpropagation often struggle to identify classes that have been observed a small number of times. In applications where most class labels are rare, such as language modelling, this can become a performance bottleneck. One potential remedy is to augment the network with a fast-learning non-parametric model which stores recent activations and class labels into an external memory. We explore a simplified architecture where we treat a subset of the model parameters as fast memory stores. This can help retain information over longer time intervals than a traditional memory, and does not require additional space or compute. In the case of image classification, we display faster binding of novel classes on an Omniglot image curriculum task. We also show improved performance for word-based language models on news reports (GigaWord), books (Project Gutenberg) and Wikipedia articles (WikiText-103) - the latter achieving a state-of-the-art perplexity of 29.2.}
}
Endnote
%0 Conference Paper
%T Fast Parametric Learning with Activation Memorization
%A Jack Rae
%A Chris Dyer
%A Peter Dayan
%A Timothy Lillicrap
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-rae18a
%I PMLR
%P 4228--4237
%U https://proceedings.mlr.press/v80/rae18a.html
%V 80
%X Neural networks trained with backpropagation often struggle to identify classes that have been observed a small number of times. In applications where most class labels are rare, such as language modelling, this can become a performance bottleneck. One potential remedy is to augment the network with a fast-learning non-parametric model which stores recent activations and class labels into an external memory. We explore a simplified architecture where we treat a subset of the model parameters as fast memory stores. This can help retain information over longer time intervals than a traditional memory, and does not require additional space or compute. In the case of image classification, we display faster binding of novel classes on an Omniglot image curriculum task. We also show improved performance for word-based language models on news reports (GigaWord), books (Project Gutenberg) and Wikipedia articles (WikiText-103) - the latter achieving a state-of-the-art perplexity of 29.2.
APA
Rae, J., Dyer, C., Dayan, P. & Lillicrap, T. (2018). Fast Parametric Learning with Activation Memorization. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4228-4237. Available from https://proceedings.mlr.press/v80/rae18a.html.