Robust Forward Algorithms via PAC-Bayes and Laplace Distributions
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:678-686, 2014.
Abstract
Laplace random variables are commonly used to model extreme noise in many fields, and systems trained to handle such noise are often characterized by robustness properties. We introduce new learning algorithms that minimize objectives derived directly from PAC-Bayes bounds incorporating Laplace distributions. The resulting algorithms are governed by the Huber loss function and are robust to noise, as the Laplace distribution integrates over large deviations of the parameters. We analyze the convexity properties of the objective and propose several bounds that are fully convex, two of which are jointly convex in the mean and standard deviation under certain conditions. We derive new forward algorithms analogous to recent boosting algorithms, providing novel relations between boosting and PAC-Bayes analysis. Experiments show that our algorithms outperform AdaBoost, L1-LogBoost, and RobustBoost across a wide range of input noise levels.
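For reference, a minimal sketch of the standard Huber loss that the abstract refers to; the threshold parameter \(\delta\) below is generic, and the paper's exact parameterization may differ:

\[
H_{\delta}(a) =
\begin{cases}
\frac{1}{2}\,a^{2}, & |a| \le \delta, \\[4pt]
\delta\left(|a| - \frac{\delta}{2}\right), & |a| > \delta.
\end{cases}
\]

Its quadratic behavior near zero combined with linear growth in the tails is what makes objectives built on it robust: large deviations are penalized only linearly, echoing the heavy tails of the Laplace distribution.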