Expert advice problem with noisy low rank loss

Yaxiong Liu, Xuanke Jiang, Kohei Hatano, Eiji Takimoto
Proceedings of The 13th Asian Conference on Machine Learning, PMLR 157:1097-1112, 2021.

Abstract

We consider the expert advice problem with a low rank but noisy loss sequence, where the loss vector $l_{t} \in [-1,1]^N$ in each round $t$ is of the form $l_{t} = U v_{t} + \epsilon_{t}$ for some fixed but unknown $N \times d$ matrix $U$ called the kernel, some $d$-dimensional seed vector $v_{t} \in \mathbb{R}^{d}$, and some additional noise term $\epsilon_t \in \mathbb{R}^{N}$ whose norm is bounded by $\epsilon$. This generalizes the works of Hazan et al. and Barman et al., where the former treats only noiseless losses and the latter assumes that the kernel is known in advance. In this paper, we propose an algorithm that reconstructs the kernel under the assumptions that the low rank loss is noisy and no prior information about the kernel is available. The algorithm approximates the kernel by choosing a set of loss vectors with a high degree of mutual independence, and we prove a regret bound of $O(d\sqrt{T}+d^{4/3}(N\epsilon)^{1/3}\sqrt{T})$. Moreover, in experiments the proposed algorithm outperforms Hazan's algorithm and the Hedge algorithm.
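As an illustration of the loss model and the kernel-approximation idea sketched above, the following Python snippet generates losses $l_t = U v_t + \epsilon_t$ and greedily keeps loss vectors that are nearly independent of those already kept. The dimensions, the residual threshold, and the selection rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of the noisy low-rank loss model from the abstract, plus a
# greedy "keep nearly independent loss vectors" step to approximate the span
# of the kernel. The residual-based selection rule below is an illustrative
# stand-in for the paper's criterion, not a faithful reimplementation.
import numpy as np

rng = np.random.default_rng(0)
N, d, T, eps = 50, 3, 200, 0.05      # experts, rank, rounds, noise bound (assumed values)

U = rng.uniform(-1.0, 1.0, size=(N, d))   # fixed, unknown kernel

def loss_vector():
    """Return l_t = U v_t + eps_t with ||eps_t|| <= eps, clipped to [-1, 1]^N."""
    v_t = rng.uniform(-1.0, 1.0, size=d)  # seed vector
    noise = rng.normal(size=N)
    noise *= eps / max(np.linalg.norm(noise), 1e-12)
    return np.clip(U @ v_t + noise, -1.0, 1.0)

# Greedy kernel approximation: keep a loss vector if it is far from the span
# of the vectors kept so far, i.e. highly independent of them.
basis = []                                # selected loss vectors
threshold = 0.5                           # illustrative independence threshold
for t in range(T):
    l_t = loss_vector()
    if basis:
        B = np.column_stack(basis)
        proj = B @ np.linalg.lstsq(B, l_t, rcond=None)[0]
        residual = np.linalg.norm(l_t - proj)
    else:
        residual = np.linalg.norm(l_t)
    if residual > threshold and len(basis) < d:
        basis.append(l_t)

U_hat = np.column_stack(basis) if basis else np.zeros((N, 1))
print("approximate kernel rank:", np.linalg.matrix_rank(U_hat))
```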

Cite this Paper


BibTeX
@InProceedings{pmlr-v157-liu21d,
  title     = {Expert advice problem with noisy low rank loss},
  author    = {Liu, Yaxiong and Jiang, Xuanke and Hatano, Kohei and Takimoto, Eiji},
  booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
  pages     = {1097--1112},
  year      = {2021},
  editor    = {Balasubramanian, Vineeth N. and Tsang, Ivor},
  volume    = {157},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v157/liu21d/liu21d.pdf},
  url       = {https://proceedings.mlr.press/v157/liu21d.html}
}
Endnote
%0 Conference Paper
%T Expert advice problem with noisy low rank loss
%A Yaxiong Liu
%A Xuanke Jiang
%A Kohei Hatano
%A Eiji Takimoto
%B Proceedings of The 13th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Vineeth N. Balasubramanian
%E Ivor Tsang
%F pmlr-v157-liu21d
%I PMLR
%P 1097--1112
%U https://proceedings.mlr.press/v157/liu21d.html
%V 157
APA
Liu, Y., Jiang, X., Hatano, K. & Takimoto, E. (2021). Expert advice problem with noisy low rank loss. Proceedings of The 13th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 157:1097-1112. Available from https://proceedings.mlr.press/v157/liu21d.html.
