Noisin: Unbiased Regularization for Recurrent Neural Networks

Adji Bousso Dieng, Rajesh Ranganath, Jaan Altosaar, David Blei
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1252-1261, 2018.

Abstract

Recurrent neural networks (RNNs) are powerful models of sequential data. They have been successfully used in domains such as text and speech. However, RNNs are susceptible to overfitting; regularization is important. In this paper we develop Noisin, a new method for regularizing RNNs. Noisin injects random noise into the hidden states of the RNN and then maximizes the corresponding marginal likelihood of the data. We show how Noisin applies to any RNN and we study many different types of noise. Noisin is unbiased: it preserves the underlying RNN on average. We characterize how Noisin regularizes its RNN both theoretically and empirically. On language modeling benchmarks, Noisin improves over dropout by as much as 12.2% on the Penn Treebank and 9.4% on the Wikitext-2 dataset. We also compare the state-of-the-art language model of Yang et al. (2017), both with and without Noisin. On the Penn Treebank, the method with Noisin reaches state-of-the-art performance more quickly.
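
To make the mechanism concrete, below is a minimal sketch, not the authors' released code, of Noisin-style unbiased noise injection, assuming additive zero-mean Gaussian noise on the hidden state of a standard PyTorch RNN cell. The class name NoisinRNNCell and the sigma hyperparameter are illustrative. Because the injected noise has mean zero, the noisy hidden state equals the deterministic one in expectation, which is the unbiasedness property the abstract describes.

import torch
import torch.nn as nn

class NoisinRNNCell(nn.Module):
    # Illustrative sketch: wrap a standard RNN cell and inject
    # unbiased (zero-mean) additive noise during training only.
    def __init__(self, input_size, hidden_size, sigma=0.5):
        super().__init__()
        self.cell = nn.RNNCell(input_size, hidden_size)  # underlying RNN
        self.sigma = sigma  # noise scale (hypothetical hyperparameter)

    def forward(self, x_t, h_prev):
        h_t = self.cell(x_t, h_prev)
        if self.training:
            # E[noise] = 0, so E[h_t + noise] = h_t: the underlying
            # RNN is preserved on average (Noisin's unbiasedness).
            h_t = h_t + self.sigma * torch.randn_like(h_t)
        return h_t

# Unroll over a toy sequence; training on the noisy hidden states
# approximates maximizing the marginal likelihood by sampling the
# noise once per step.
cell = NoisinRNNCell(input_size=10, hidden_size=20)
x = torch.randn(5, 3, 10)   # (seq_len, batch, input_size)
h = torch.zeros(3, 20)
for t in range(x.size(0)):
    h = cell(x[t], h)

A multiplicative variant (noise with mean one, multiplied elementwise into the hidden state) satisfies the same unbiasedness condition; the paper studies several such noise distributions.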

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-dieng18a,
  title     = {Noisin: Unbiased Regularization for Recurrent Neural Networks},
  author    = {Dieng, Adji Bousso and Ranganath, Rajesh and Altosaar, Jaan and Blei, David},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1252--1261},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/dieng18a/dieng18a.pdf},
  url       = {https://proceedings.mlr.press/v80/dieng18a.html},
  abstract  = {Recurrent neural networks (RNNs) are powerful models of sequential data. They have been successfully used in domains such as text and speech. However, RNNs are susceptible to overfitting; regularization is important. In this paper we develop Noisin, a new method for regularizing RNNs. Noisin injects random noise into the hidden states of the RNN and then maximizes the corresponding marginal likelihood of the data. We show how Noisin applies to any RNN and we study many different types of noise. Noisin is unbiased: it preserves the underlying RNN on average. We characterize how Noisin regularizes its RNN both theoretically and empirically. On language modeling benchmarks, Noisin improves over dropout by as much as 12.2% on the Penn Treebank and 9.4% on the Wikitext-2 dataset. We also compare the state-of-the-art language model of Yang et al. (2017), both with and without Noisin. On the Penn Treebank, the method with Noisin reaches state-of-the-art performance more quickly.}
}
Endnote
%0 Conference Paper
%T Noisin: Unbiased Regularization for Recurrent Neural Networks
%A Adji Bousso Dieng
%A Rajesh Ranganath
%A Jaan Altosaar
%A David Blei
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-dieng18a
%I PMLR
%P 1252--1261
%U https://proceedings.mlr.press/v80/dieng18a.html
%V 80
%X Recurrent neural networks (RNNs) are powerful models of sequential data. They have been successfully used in domains such as text and speech. However, RNNs are susceptible to overfitting; regularization is important. In this paper we develop Noisin, a new method for regularizing RNNs. Noisin injects random noise into the hidden states of the RNN and then maximizes the corresponding marginal likelihood of the data. We show how Noisin applies to any RNN and we study many different types of noise. Noisin is unbiased: it preserves the underlying RNN on average. We characterize how Noisin regularizes its RNN both theoretically and empirically. On language modeling benchmarks, Noisin improves over dropout by as much as 12.2% on the Penn Treebank and 9.4% on the Wikitext-2 dataset. We also compare the state-of-the-art language model of Yang et al. (2017), both with and without Noisin. On the Penn Treebank, the method with Noisin reaches state-of-the-art performance more quickly.
APA
Dieng, A.B., Ranganath, R., Altosaar, J. & Blei, D. (2018). Noisin: Unbiased Regularization for Recurrent Neural Networks. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1252-1261. Available from https://proceedings.mlr.press/v80/dieng18a.html.
