How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances?

Senjian An, Farid Boussaid, Mohammed Bennamoun
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:514-523, 2015.

Abstract

This paper investigates how the hidden layers of deep rectifier networks can transform two or more pattern sets so that they become linearly separable while distances are preserved to a guaranteed degree, and proves the universal classification power of such distance-preserving rectifier networks. Because the hidden layers implement a nearly isometric nonlinear transformation, the margin of the linear separating plane in the output layer is closely related to the margin of the nonlinear separating boundary in the original data space, so maximum margin classification in the input space can be achieved approximately via maximum margin linear classifiers in the output layer. The generalization performance of such distance-preserving deep rectifier networks is thus justified by the distance-preserving properties of their hidden layers together with the maximum margin property of the linear classifiers in the output layer.
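To make the two claims concrete, the following is a minimal numerical sketch, not the authors' construction: the mirrored layer phi(x) = [max(0, Wx + b); max(0, -(Wx + b))], the random orthogonal weights, the concentric-circles data, and the use of scikit-learn's LinearSVC as the max-margin linear classifier are all illustrative assumptions. With orthogonal W and zero bias, this mirrored ReLU layer provably distorts each squared pairwise distance by a factor between 1/2 and 1 (if both pre-activations share a sign, the distance is preserved exactly; if they differ in sign, at most half the squared distance is lost). A wide random version of the same layer then renders two concentric circles, which no linear classifier can separate in the input plane, linearly separable in the feature space.

# Sketch of distance preservation and induced linear separability
# for a mirrored ReLU layer (illustrative assumptions, see above).
import numpy as np
from sklearn.svm import LinearSVC  # stand-in max-margin linear classifier

rng = np.random.default_rng(0)

def mirrored_relu_layer(X, W, b):
    """phi(x) = [relu(Wx + b); relu(-(Wx + b))], applied row-wise to X."""
    Z = X @ W.T + b
    return np.hstack([np.maximum(Z, 0.0), np.maximum(-Z, 0.0)])

def pdist2(A):
    """Matrix of squared pairwise Euclidean distances between rows of A."""
    sq = np.sum(A**2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * (A @ A.T)

# Part 1: distance preservation with orthogonal W and zero bias.
d = 10
X = rng.standard_normal((200, d))
W, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal weights
H = mirrored_relu_layer(X, W, np.zeros(d))

mask = ~np.eye(len(X), dtype=bool)                # ignore zero diagonal
ratio = pdist2(H)[mask] / pdist2(X)[mask]
print(f"squared-distance ratio: min={ratio.min():.3f}, max={ratio.max():.3f}")
# Expected: min >= 0.5 and max <= 1.0 (up to floating-point rounding).

# Part 2: linear separability after the hidden layer.
# Two concentric circles: not linearly separable in the input space.
n = 400
theta = rng.uniform(0.0, 2.0 * np.pi, n)
radius = np.where(np.arange(n) < n // 2, 1.0, 3.0)
X2 = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
y2 = (radius > 2.0).astype(int)

# Wide random ReLU layer (an illustrative stand-in for the paper's
# constructed weights), then a max-margin linear classifier on top.
m = 200
W2 = rng.standard_normal((m, 2))
b2 = rng.uniform(-3.0, 3.0, m)
H2 = mirrored_relu_layer(X2, W2, b2)

clf = LinearSVC(C=10.0, max_iter=10000).fit(H2, y2)
print("train accuracy on ReLU features:", clf.score(H2, y2))

The first printout should land inside [0.5, 1.0], matching the layer's distortion bound; the second should be at or near 1.0, illustrating that the hidden layer makes the two circles linearly separable even though no linear classifier can separate them in the input plane.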

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-an15,
  title     = {How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances?},
  author    = {An, Senjian and Boussaid, Farid and Bennamoun, Mohammed},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {514--523},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/an15.pdf},
  url       = {https://proceedings.mlr.press/v37/an15.html},
  abstract  = {This paper investigates how hidden layers of deep rectifier networks are capable of transforming two or more pattern sets to be linearly separable while preserving the distances with a guaranteed degree, and proves the universal classification power of such distance preserving rectifier networks. Through the nearly isometric nonlinear transformation in the hidden layers, the margin of the linear separating plane in the output layer and the margin of the nonlinear separating boundary in the original data space can be closely related so that the maximum margin classification in the input data space can be achieved approximately via the maximum margin linear classifiers in the output layer. The generalization performance of such distance preserving deep rectifier neural networks can be well justified by the distance-preserving properties of their hidden layers and the maximum margin property of the linear classifiers in the output layer.}
}
Endnote
%0 Conference Paper
%T How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances?
%A Senjian An
%A Farid Boussaid
%A Mohammed Bennamoun
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-an15
%I PMLR
%P 514--523
%U https://proceedings.mlr.press/v37/an15.html
%V 37
%X This paper investigates how hidden layers of deep rectifier networks are capable of transforming two or more pattern sets to be linearly separable while preserving the distances with a guaranteed degree, and proves the universal classification power of such distance preserving rectifier networks. Through the nearly isometric nonlinear transformation in the hidden layers, the margin of the linear separating plane in the output layer and the margin of the nonlinear separating boundary in the original data space can be closely related so that the maximum margin classification in the input data space can be achieved approximately via the maximum margin linear classifiers in the output layer. The generalization performance of such distance preserving deep rectifier neural networks can be well justified by the distance-preserving properties of their hidden layers and the maximum margin property of the linear classifiers in the output layer.
RIS
TY - CPAPER
TI - How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances?
AU - Senjian An
AU - Farid Boussaid
AU - Mohammed Bennamoun
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-an15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 514
EP - 523
L1 - http://proceedings.mlr.press/v37/an15.pdf
UR - https://proceedings.mlr.press/v37/an15.html
AB - This paper investigates how hidden layers of deep rectifier networks are capable of transforming two or more pattern sets to be linearly separable while preserving the distances with a guaranteed degree, and proves the universal classification power of such distance preserving rectifier networks. Through the nearly isometric nonlinear transformation in the hidden layers, the margin of the linear separating plane in the output layer and the margin of the nonlinear separating boundary in the original data space can be closely related so that the maximum margin classification in the input data space can be achieved approximately via the maximum margin linear classifiers in the output layer. The generalization performance of such distance preserving deep rectifier neural networks can be well justified by the distance-preserving properties of their hidden layers and the maximum margin property of the linear classifiers in the output layer.
ER -
APA
An, S., Boussaid, F. & Bennamoun, M. (2015). How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances?. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:514-523. Available from https://proceedings.mlr.press/v37/an15.html.