Combining Neural Network Regression Estimates Using Principal Components
Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics, PMLR R1:363-370, 1997.
Abstract
Combining a set of learned models to improve classification and regression estimates has been an area of much research in machine learning and neural networks ([Wolpert92, Merz95, PerroneCooper92, LeblancTibshirani93, Breiman92, Meir95, Krogh95, Tresp95, ChanStolfo95]). The challenge of this problem is to decide which models to rely on for prediction and how much weight to give each. The goal of combining learned models is to obtain a more accurate prediction than can be obtained from any single source alone. One major issue in combining a set of learned models is redundancy. Redundancy refers to the amount of agreement or linear dependence between models when making a set of predictions. The more the set agrees, the more redundancy is present; in statistical terms, this is referred to as the multicollinearity problem. The focus of this paper is to describe and evaluate an approach for combining regression estimates based on principal components regression. The method, called PCR*, is evaluated on several real-world domains to demonstrate its robustness against a collection of existing techniques.
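To make the idea concrete, the following is a minimal sketch of combining redundant regression estimates via principal components regression: the correlated predictions are projected onto their leading principal components, the target is regressed on those components, and the fit is mapped back to per-model weights. This illustrates the general PCR idea only; it is not the paper's PCR* algorithm, which additionally chooses the number of components to retain. The data, function name, and component count here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated target and three highly correlated (redundant) model estimates,
# standing in for the outputs of separately trained regressors.
n = 200
y = rng.normal(size=n)
preds = np.column_stack([y + 0.3 * rng.normal(size=n) for _ in range(3)])

def pcr_combine(preds, y, n_components):
    """Regress y on the leading principal components of the prediction
    matrix, then express the fit as weights on the original models.
    (Illustrative sketch; PCR* also selects n_components automatically.)"""
    mean = preds.mean(axis=0)
    X = preds - mean
    # Principal directions via SVD of the centered prediction matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:n_components].T            # scores on the leading components
    gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    weights = Vt[:n_components].T @ gamma  # map back to per-model weights
    intercept = y.mean() - mean @ weights
    return weights, intercept

w, b = pcr_combine(preds, y, n_components=1)
combined = preds @ w + b
```

Because the three estimates are nearly collinear, ordinary least squares on all of them would produce unstable weights; restricting the regression to the first principal component here effectively averages the redundant models, which is exactly the multicollinearity problem the abstract describes.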