QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization

Hartmut Neven, Vasil S. Denchev, Geordie Rose, William G. Macready
Proceedings of the Asian Conference on Machine Learning, PMLR 25:333-348, 2012.

Abstract

We introduce a novel discrete optimization method for training in the context of a boosting framework for large-scale binary classifiers. The motivation is to cast the training problem into the format required by existing adiabatic quantum hardware. First we provide theoretical arguments concerning the transformation of an originally continuous optimization problem into one with discrete variables of low bit depth. Next we propose QBoost as an iterative training algorithm in which a subset of weak classifiers is selected by solving a hard optimization problem in each iteration. A strong classifier is incrementally constructed by concatenating the subsets of weak classifiers. We supplement the findings with experiments on one synthetic and two natural data sets and compare against the performance of existing boosting algorithms. Finally, by conducting a quantum Monte Carlo simulation we gather evidence that adiabatic quantum optimization is able to handle the discrete optimization problems generated by QBoost.
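To make the selection step concrete, the sketch below shows the kind of quadratic unconstrained binary optimization (QUBO) problem that a QBoost-style iteration generates, assuming the commonly cited squared-loss objective with an L0 penalty: minimize over w in {0,1}^N the quantity sum_s (sum_i w_i h_i(x_s) - y_s)^2 + lam * sum_i w_i. The brute-force solver is a classical stand-in for the adiabatic quantum hardware; the function names and the lam scaling are illustrative assumptions, not the paper's exact formulation.

```python
import itertools
import numpy as np

def qboost_qubo(H, y, lam):
    """Build the QUBO matrix for one weak-classifier selection step.

    H   : (S, N) array, H[s, i] = h_i(x_s) in {-1, +1}
    y   : (S,) labels in {-1, +1}
    lam : regularization strength penalizing the number of
          selected weak classifiers (assumed scaling).

    Expanding sum_s (sum_i w_i H[s, i] - y_s)^2 + lam * sum_i w_i
    and using w_i^2 = w_i for binary variables gives w^T Q w
    (up to a constant) with the Q returned here.
    """
    Q = H.T @ H                    # quadratic couplings between weak classifiers
    correlation = H.T @ y          # alignment of each weak classifier with the labels
    # Fold the linear terms onto the diagonal (valid since w_i^2 = w_i).
    Q[np.diag_indices(Q.shape[0])] += lam - 2.0 * correlation
    return Q

def solve_qubo_brute_force(Q):
    """Classical stand-in for the quantum optimizer: exhaustive search
    over w in {0,1}^N, feasible only for small N."""
    N = Q.shape[0]
    best_w, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=N):
        w = np.array(bits)
        e = w @ Q @ w
        if e < best_e:
            best_w, best_e = w, e
    return best_w
```

In the iterative algorithm, the bit string returned by the solver picks out the subset of weak classifiers to concatenate onto the strong classifier before the next round.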

Cite this Paper


BibTeX
@InProceedings{pmlr-v25-neven12,
  title     = {QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization},
  author    = {Neven, Hartmut and Denchev, Vasil S. and Rose, Geordie and Macready, William G.},
  booktitle = {Proceedings of the Asian Conference on Machine Learning},
  pages     = {333--348},
  year      = {2012},
  editor    = {Hoi, Steven C. H. and Buntine, Wray},
  volume    = {25},
  series    = {Proceedings of Machine Learning Research},
  address   = {Singapore Management University, Singapore},
  month     = {04--06 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v25/neven12/neven12.pdf},
  url       = {https://proceedings.mlr.press/v25/neven12.html}
}
Endnote
%0 Conference Paper
%T QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization
%A Hartmut Neven
%A Vasil S. Denchev
%A Geordie Rose
%A William G. Macready
%B Proceedings of the Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2012
%E Steven C. H. Hoi
%E Wray Buntine
%F pmlr-v25-neven12
%I PMLR
%P 333--348
%U https://proceedings.mlr.press/v25/neven12.html
%V 25
RIS
TY  - CPAPER
TI  - QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization
AU  - Hartmut Neven
AU  - Vasil S. Denchev
AU  - Geordie Rose
AU  - William G. Macready
BT  - Proceedings of the Asian Conference on Machine Learning
DA  - 2012/11/17
ED  - Steven C. H. Hoi
ED  - Wray Buntine
ID  - pmlr-v25-neven12
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 25
SP  - 333
EP  - 348
L1  - http://proceedings.mlr.press/v25/neven12/neven12.pdf
UR  - https://proceedings.mlr.press/v25/neven12.html
ER  -
APA
Neven, H., Denchev, V.S., Rose, G. & Macready, W.G. (2012). QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization. Proceedings of the Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 25:333-348. Available from https://proceedings.mlr.press/v25/neven12.html.