Online Stochastic Linear Optimization under One-bit Feedback

Lijun Zhang, Tianbao Yang, Rong Jin, Yichi Xiao, Zhi-hua Zhou
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:392-401, 2016.

Abstract

In this paper, we study a special bandit setting of online stochastic linear optimization, where only one bit of information is revealed to the learner at each round. This problem has found many applications including online advertisement and online recommendation. We assume the binary feedback is a random variable generated from the logit model, and aim to minimize the regret defined by the unknown linear function. Although the existing method for generalized linear bandit can be applied to our problem, the high computational cost makes it impractical for real-world applications. To address this challenge, we develop an efficient online learning algorithm by exploiting particular structures of the observation model. Specifically, we adopt online Newton step to estimate the unknown parameter and derive a tight confidence region based on the exponential concavity of the logistic loss. Our analysis shows that the proposed algorithm achieves a regret bound of O(d√T), which matches the optimal result of stochastic linear bandits.
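The core update the abstract describes, an online Newton step on the logistic loss, can be sketched as follows. This is a minimal illustration under our own assumptions (a fixed step size tied to the exp-concavity constant and a plain rank-one matrix update); the function names are ours, and the paper's actual algorithm additionally maintains a confidence region for arm selection, which is omitted here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ons_logistic_update(theta, A, x, y, gamma=0.5):
    """One online Newton step for the logistic loss.

    theta: current parameter estimate, shape (d,)
    A:     running second-order matrix, shape (d, d)
    x:     feature vector of the chosen arm, shape (d,)
    y:     one-bit feedback in {0, 1}
    gamma: step size, tied to the exp-concavity of the logistic loss
    """
    # Gradient of the logistic loss -y*log(p) - (1-y)*log(1-p) at theta
    p = sigmoid(theta @ x)
    grad = (p - y) * x
    # Rank-one update of the second-order matrix
    A = A + np.outer(x, x)
    # Newton-style step: theta <- theta - (1/gamma) * A^{-1} grad
    theta = theta - (1.0 / gamma) * np.linalg.solve(A, grad)
    return theta, A
```

Because each round only adds a rank-one term to `A` and solves one d-dimensional linear system, the per-round cost stays polynomial in d and independent of the number of arms, which is the efficiency gain the abstract emphasizes.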

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-zhangb16,
  title     = {Online Stochastic Linear Optimization under One-bit Feedback},
  author    = {Zhang, Lijun and Yang, Tianbao and Jin, Rong and Xiao, Yichi and Zhou, Zhi-hua},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {392--401},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/zhangb16.pdf},
  url       = {https://proceedings.mlr.press/v48/zhangb16.html},
  abstract  = {In this paper, we study a special bandit setting of online stochastic linear optimization, where only one bit of information is revealed to the learner at each round. This problem has found many applications including online advertisement and online recommendation. We assume the binary feedback is a random variable generated from the logit model, and aim to minimize the regret defined by the unknown linear function. Although the existing method for generalized linear bandit can be applied to our problem, the high computational cost makes it impractical for real-world applications. To address this challenge, we develop an efficient online learning algorithm by exploiting particular structures of the observation model. Specifically, we adopt online Newton step to estimate the unknown parameter and derive a tight confidence region based on the exponential concavity of the logistic loss. Our analysis shows that the proposed algorithm achieves a regret bound of $O(d\sqrt{T})$, which matches the optimal result of stochastic linear bandits.}
}
Endnote
%0 Conference Paper
%T Online Stochastic Linear Optimization under One-bit Feedback
%A Lijun Zhang
%A Tianbao Yang
%A Rong Jin
%A Yichi Xiao
%A Zhi-hua Zhou
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-zhangb16
%I PMLR
%P 392--401
%U https://proceedings.mlr.press/v48/zhangb16.html
%V 48
%X In this paper, we study a special bandit setting of online stochastic linear optimization, where only one bit of information is revealed to the learner at each round. This problem has found many applications including online advertisement and online recommendation. We assume the binary feedback is a random variable generated from the logit model, and aim to minimize the regret defined by the unknown linear function. Although the existing method for generalized linear bandit can be applied to our problem, the high computational cost makes it impractical for real-world applications. To address this challenge, we develop an efficient online learning algorithm by exploiting particular structures of the observation model. Specifically, we adopt online Newton step to estimate the unknown parameter and derive a tight confidence region based on the exponential concavity of the logistic loss. Our analysis shows that the proposed algorithm achieves a regret bound of O(d√T), which matches the optimal result of stochastic linear bandits.
RIS
TY - CPAPER
TI - Online Stochastic Linear Optimization under One-bit Feedback
AU - Lijun Zhang
AU - Tianbao Yang
AU - Rong Jin
AU - Yichi Xiao
AU - Zhi-hua Zhou
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-zhangb16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 392
EP - 401
L1 - http://proceedings.mlr.press/v48/zhangb16.pdf
UR - https://proceedings.mlr.press/v48/zhangb16.html
AB - In this paper, we study a special bandit setting of online stochastic linear optimization, where only one bit of information is revealed to the learner at each round. This problem has found many applications including online advertisement and online recommendation. We assume the binary feedback is a random variable generated from the logit model, and aim to minimize the regret defined by the unknown linear function. Although the existing method for generalized linear bandit can be applied to our problem, the high computational cost makes it impractical for real-world applications. To address this challenge, we develop an efficient online learning algorithm by exploiting particular structures of the observation model. Specifically, we adopt online Newton step to estimate the unknown parameter and derive a tight confidence region based on the exponential concavity of the logistic loss. Our analysis shows that the proposed algorithm achieves a regret bound of O(d√T), which matches the optimal result of stochastic linear bandits.
ER -
APA
Zhang, L., Yang, T., Jin, R., Xiao, Y. & Zhou, Z. (2016). Online Stochastic Linear Optimization under One-bit Feedback. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:392-401. Available from https://proceedings.mlr.press/v48/zhangb16.html.
