The Most Generative Maximum Margin Bayesian Networks

Robert Peharz, Sebastian Tschiatschek, Franz Pernkopf
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):235-243, 2013.

Abstract

Although discriminative learning in graphical models generally improves classification results, the generative semantics of the model are compromised. In this paper, we introduce a novel approach to hybrid generative-discriminative learning for Bayesian networks. We use an SVM-type large margin formulation for discriminative training, introducing a likelihood-weighted \ell^1-norm for the SVM-norm penalization. This simultaneously optimizes the data likelihood and therefore partly maintains the generative character of the model. For many network structures, our method can be formulated as a convex problem, guaranteeing a globally optimal solution. In terms of classification, the resulting models outperform state-of-the-art generative and discriminative learning methods for Bayesian networks, and are comparable with linear and kernelized SVMs. Furthermore, the models achieve likelihoods close to the maximum likelihood solution and show robust behavior in classification experiments with missing features.
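As a rough illustration of the kind of objective the abstract refers to (a hinge-loss large-margin criterion combined with an \ell^1-style penalty), the sketch below trains a generic L1-regularized linear SVM by subgradient descent on toy data. This is only a minimal, generic example: the paper's actual method applies a likelihood-weighted \ell^1-norm to Bayesian network parameters, which is not reproduced here, and all names and data in the snippet are illustrative.

```python
import numpy as np

# Generic sketch: hinge loss + plain l1 penalty, trained by subgradient
# descent. NOT the paper's method (which weights the l1-norm by the data
# likelihood over Bayesian-network parameters); this only shows the
# objective family the paper builds on.
rng = np.random.default_rng(0)

# Toy binary data: two Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),
               rng.normal(+1.0, 1.0, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
lam = 0.01   # strength of the l1 penalty
lr = 0.1     # subgradient step size

for _ in range(500):
    margins = y * (X @ w + b)
    active = margins < 1                     # margin-violating points
    grad_w = -(y[active, None] * X[active]).sum(0) / len(y) + lam * np.sign(w)
    grad_b = -y[active].sum() / len(y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Because hinge loss plus an \ell^1 penalty is convex in (w, b), this kind of problem has a global optimum; the paper's contribution is to obtain an analogous convexity guarantee for many Bayesian network structures while retaining generative semantics.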

Cite this Paper
BibTeX
@InProceedings{pmlr-v28-peharz13,
  title     = {The Most Generative Maximum Margin Bayesian Networks},
  author    = {Robert Peharz and Sebastian Tschiatschek and Franz Pernkopf},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {235--243},
  year      = {2013},
  editor    = {Sanjoy Dasgupta and David McAllester},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/peharz13.pdf},
  url       = {http://proceedings.mlr.press/v28/peharz13.html},
  abstract  = {Although discriminative learning in graphical models generally improves classification results, the generative semantics of the model are compromised. In this paper, we introduce a novel approach to hybrid generative-discriminative learning for Bayesian networks. We use an SVM-type large margin formulation for discriminative training, introducing a likelihood-weighted \ell^1-norm for the SVM-norm penalization. This simultaneously optimizes the data likelihood and therefore partly maintains the generative character of the model. For many network structures, our method can be formulated as a convex problem, guaranteeing a globally optimal solution. In terms of classification, the resulting models outperform state-of-the-art generative and discriminative learning methods for Bayesian networks, and are comparable with linear and kernelized SVMs. Furthermore, the models achieve likelihoods close to the maximum likelihood solution and show robust behavior in classification experiments with missing features.}
}
Endnote
%0 Conference Paper
%T The Most Generative Maximum Margin Bayesian Networks
%A Robert Peharz
%A Sebastian Tschiatschek
%A Franz Pernkopf
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-peharz13
%I PMLR
%J Proceedings of Machine Learning Research
%P 235--243
%U http://proceedings.mlr.press
%V 28
%N 3
%W PMLR
%X Although discriminative learning in graphical models generally improves classification results, the generative semantics of the model are compromised. In this paper, we introduce a novel approach to hybrid generative-discriminative learning for Bayesian networks. We use an SVM-type large margin formulation for discriminative training, introducing a likelihood-weighted \ell^1-norm for the SVM-norm penalization. This simultaneously optimizes the data likelihood and therefore partly maintains the generative character of the model. For many network structures, our method can be formulated as a convex problem, guaranteeing a globally optimal solution. In terms of classification, the resulting models outperform state-of-the-art generative and discriminative learning methods for Bayesian networks, and are comparable with linear and kernelized SVMs. Furthermore, the models achieve likelihoods close to the maximum likelihood solution and show robust behavior in classification experiments with missing features.
RIS
TY  - CPAPER
TI  - The Most Generative Maximum Margin Bayesian Networks
AU  - Robert Peharz
AU  - Sebastian Tschiatschek
AU  - Franz Pernkopf
BT  - Proceedings of the 30th International Conference on Machine Learning
PY  - 2013/02/13
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-peharz13
PB  - PMLR
SP  - 235
EP  - 243
DP  - PMLR
L1  - http://proceedings.mlr.press/v28/peharz13.pdf
UR  - http://proceedings.mlr.press/v28/peharz13.html
AB  - Although discriminative learning in graphical models generally improves classification results, the generative semantics of the model are compromised. In this paper, we introduce a novel approach to hybrid generative-discriminative learning for Bayesian networks. We use an SVM-type large margin formulation for discriminative training, introducing a likelihood-weighted \ell^1-norm for the SVM-norm penalization. This simultaneously optimizes the data likelihood and therefore partly maintains the generative character of the model. For many network structures, our method can be formulated as a convex problem, guaranteeing a globally optimal solution. In terms of classification, the resulting models outperform state-of-the-art generative and discriminative learning methods for Bayesian networks, and are comparable with linear and kernelized SVMs. Furthermore, the models achieve likelihoods close to the maximum likelihood solution and show robust behavior in classification experiments with missing features.
ER  -
APA
Peharz, R., Tschiatschek, S. & Pernkopf, F. (2013). The Most Generative Maximum Margin Bayesian Networks. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(3):235-243.

Related Material