Online Incremental Feature Learning with Denoising Autoencoders
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:1453-1461, 2012.
Abstract
While determining model complexity is an important problem in machine learning, many feature learning algorithms rely on cross-validation to choose an optimal number of features, which is usually infeasible for online learning from a massive stream of data. In this paper, we propose an incremental feature learning algorithm to determine the optimal model complexity for large-scale, online datasets based on the denoising autoencoder. This algorithm is composed of two processes: adding features and merging features. Specifically, it adds new features to minimize the objective function’s residual and merges similar features to obtain a compact feature representation and prevent over-fitting. Our experiments show that the model quickly converges to the optimal number of features in a large-scale online setting, and outperforms the (non-incremental) denoising autoencoder, as well as deep belief networks and stacked denoising autoencoders, for classification tasks. Further, the algorithm is particularly effective in recognizing new patterns when the data distribution changes over time in a massive online data stream.
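To make the two processes named above concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of an incremental one-hidden-layer denoising autoencoder that can grow its feature set and merge highly similar features. All names here (`IncrementalDAE`, `add_features`, `merge_features`, `similarity_threshold`, the masking-noise corruption level, and the cosine-similarity merging criterion) are assumptions chosen for illustration; the paper's actual add/merge rules and objective details are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IncrementalDAE:
    """Toy denoising autoencoder with tied weights that can add and merge hidden features."""

    def __init__(self, n_visible, n_hidden, corruption=0.3, lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(0.0, 0.01, size=(n_hidden, n_visible))  # tied encoder/decoder weights
        self.b_h = np.zeros(n_hidden)   # hidden (feature) biases
        self.b_v = np.zeros(n_visible)  # visible (reconstruction) biases
        self.corruption, self.lr = corruption, lr

    def _forward(self, x):
        # Masking noise: randomly zero out a fraction of the inputs, then encode/decode.
        x_tilde = x * (self.rng.random(x.shape) > self.corruption)
        h = sigmoid(x_tilde @ self.W.T + self.b_h)
        x_hat = sigmoid(h @ self.W + self.b_v)
        return x_tilde, h, x_hat

    def partial_fit(self, x):
        """One SGD step on a mini-batch; returns per-example reconstruction residuals."""
        x_tilde, h, x_hat = self._forward(x)
        d_out = x_hat - x                        # gradient at the output pre-activation
        d_h = (d_out @ self.W.T) * h * (1 - h)   # back-propagated to the hidden pre-activation
        grad_W = h.T @ d_out + d_h.T @ x_tilde   # decoder + encoder contributions (tied weights)
        self.W -= self.lr * grad_W / len(x)
        self.b_v -= self.lr * d_out.mean(axis=0)
        self.b_h -= self.lr * d_h.mean(axis=0)
        return np.square(x_hat - x).sum(axis=1)

    def add_features(self, n_new):
        """Grow the hidden layer with freshly initialized features."""
        W_new = self.rng.normal(0.0, 0.01, size=(n_new, self.W.shape[1]))
        self.W = np.vstack([self.W, W_new])
        self.b_h = np.concatenate([self.b_h, np.zeros(n_new)])

    def merge_features(self, similarity_threshold=0.95):
        """Average pairs of features whose weight vectors are nearly parallel (cosine similarity)."""
        W_norm = self.W / (np.linalg.norm(self.W, axis=1, keepdims=True) + 1e-8)
        sim = W_norm @ W_norm.T
        merged = set()
        for i in range(len(self.W)):
            if i in merged:
                continue
            for j in range(i + 1, len(self.W)):
                if j not in merged and sim[i, j] > similarity_threshold:
                    self.W[i] = 0.5 * (self.W[i] + self.W[j])
                    merged.add(j)
        keep = [k for k in range(len(self.W)) if k not in merged]
        self.W, self.b_h = self.W[keep], self.b_h[keep]
```

In this sketch, the residuals returned by `partial_fit` could be monitored on the incoming stream, with `add_features` invoked when they stay high and `merge_features` invoked periodically to keep the representation compact; the precise triggering rules used in the paper differ and are described in its method section.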