Quick Training of Probabilistic Neural Nets by Importance Sampling
Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, PMLR R4:17-24, 2003.
Abstract
Our previous work on statistical language modeling introduced the use of probabilistic feedforward neural networks to help deal with the curse of dimensionality. Training this model by maximum likelihood, however, requires, for each example, as many network passes as there are words in the vocabulary. Inspired by the contrastive divergence model, we propose and evaluate sampling-based methods which require network passes only for the observed "positive example" and a few sampled negative example words. A very significant speed-up is obtained with an adaptive importance sampling.
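To make the core idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of estimating the softmax normalization term by self-normalized importance sampling. It assumes a user-supplied scoring function `score(context, word)` giving an unnormalized log-probability (one network pass per word) and a proposal distribution `q` over the vocabulary (e.g. unigram frequencies); the names `score`, `q`, and `n_samples` are illustrative, and the adaptive proposal and sample-size adjustment described in the paper are not shown.

```python
import numpy as np

def is_softmax_grad_weights(score, context, target, q, n_samples=25, rng=None):
    """Approximate the model expectation in the softmax gradient.

    The exact gradient of -log P(target | context) requires a score for every
    vocabulary word; here the normalizing expectation is estimated from a few
    words drawn from the proposal q, re-weighted by exp(score)/q and
    normalized to sum to one (biased, self-normalized importance sampling).
    """
    rng = rng or np.random.default_rng()
    vocab = np.arange(len(q))
    samples = rng.choice(vocab, size=n_samples, p=q)   # sampled negative words
    log_w = np.array([score(context, w) for w in samples]) - np.log(q[samples])
    w = np.exp(log_w - log_w.max())                    # numerical stabilization
    w /= w.sum()                                       # self-normalize weights
    # The gradient estimate combines one pass for the positive example with
    # n_samples passes for the sampled negatives:
    #   -d/dtheta score(context, target)
    #   + sum_i w[i] * d/dtheta score(context, samples[i])
    # instead of |vocabulary| passes for the exact softmax gradient.
    return samples, w
```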