Near-Optimal Task Selection for Meta-Learning with Mutual Information and Online Variational Bayesian Unlearning

Yizhou Chen, Shizhuo Zhang, Bryan Kian Hsiang Low
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:9091-9113, 2022.

Abstract

This paper addresses active task selection, the problem of choosing the most informative tasks for meta-learning. We propose a novel active task selection criterion based on the mutual information between latent task vectors. Unfortunately, such a criterion scales poorly in the number of candidate tasks when optimized. To resolve this issue, we exploit the submodularity of our new criterion to devise the first active task selection algorithm for meta-learning with a near-optimal performance guarantee. To further improve efficiency, we propose an online variant of Stein variational gradient descent that performs fast belief updates of the meta-parameters by maintaining a set of forward (and backward) particles when learning (or unlearning) from each selected task. We empirically demonstrate the performance of our proposed algorithm on real-world datasets.
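The near-optimal guarantee rests on a classical result: greedily maximizing a monotone submodular set function achieves at least (1 - 1/e) of the optimal value (Nemhauser et al., 1978). The sketch below illustrates this greedy scheme on a toy weighted-coverage objective; the coverage function and the candidate "tasks" are stand-ins for illustration only, not the paper's mutual-information criterion.

```python
def coverage(selected, candidate_sets):
    """Toy monotone submodular objective: number of distinct elements
    covered by the selected candidates (a stand-in for the paper's
    mutual-information criterion over latent task vectors)."""
    covered = set()
    for i in selected:
        covered |= candidate_sets[i]
    return len(covered)


def greedy_select(candidate_sets, budget):
    """Greedily pick the candidate with the largest marginal gain at each
    step. For monotone submodular objectives this is guaranteed to reach
    at least (1 - 1/e) of the optimal value for the given budget."""
    selected = []
    for _ in range(budget):
        base = coverage(selected, candidate_sets)
        best_idx, best_gain = None, 0
        for i in range(len(candidate_sets)):
            if i in selected:
                continue
            gain = coverage(selected + [i], candidate_sets) - base
            if gain > best_gain:
                best_idx, best_gain = i, gain
        if best_idx is None:  # no candidate adds value; stop early
            break
        selected.append(best_idx)
    return selected


if __name__ == "__main__":
    # Three hypothetical candidate tasks, each "covering" some information.
    tasks = [{1, 2, 3}, {4, 5}, {1, 4}]
    chosen = greedy_select(tasks, budget=2)
    print(chosen, coverage(chosen, tasks))  # greedy picks tasks 0 then 1
```

The same greedy loop applies with any monotone submodular gain in place of `coverage`; the paper's contribution lies in showing the mutual-information criterion admits such a guarantee while keeping each evaluation cheap via online variational updates.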

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-chen22h,
  title     = {Near-Optimal Task Selection for Meta-Learning with Mutual Information and Online Variational Bayesian Unlearning},
  author    = {Chen, Yizhou and Zhang, Shizhuo and Kian Hsiang Low, Bryan},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {9091--9113},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/chen22h/chen22h.pdf},
  url       = {https://proceedings.mlr.press/v151/chen22h.html},
  abstract  = {This paper addresses the problem of active task selection which involves selecting the most informative tasks for meta-learning. We propose a novel active task selection criterion based on the mutual information between latent task vectors. Unfortunately, such a criterion scales poorly in the number of candidate tasks when optimized. To resolve this issue, we exploit the submodularity property of our new criterion for devising the first active task selection algorithm for meta-learning with a near-optimal performance guarantee. To further improve our efficiency, we propose an online variant of the Stein variational gradient descent to perform fast belief updates of the meta-parameters via maintaining a set of forward (and backward) particles when learning (or unlearning) from each selected task. We empirically demonstrate the performance of our proposed algorithm on real-world datasets.}
}
Endnote
%0 Conference Paper
%T Near-Optimal Task Selection for Meta-Learning with Mutual Information and Online Variational Bayesian Unlearning
%A Yizhou Chen
%A Shizhuo Zhang
%A Bryan Kian Hsiang Low
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-chen22h
%I PMLR
%P 9091--9113
%U https://proceedings.mlr.press/v151/chen22h.html
%V 151
%X This paper addresses the problem of active task selection which involves selecting the most informative tasks for meta-learning. We propose a novel active task selection criterion based on the mutual information between latent task vectors. Unfortunately, such a criterion scales poorly in the number of candidate tasks when optimized. To resolve this issue, we exploit the submodularity property of our new criterion for devising the first active task selection algorithm for meta-learning with a near-optimal performance guarantee. To further improve our efficiency, we propose an online variant of the Stein variational gradient descent to perform fast belief updates of the meta-parameters via maintaining a set of forward (and backward) particles when learning (or unlearning) from each selected task. We empirically demonstrate the performance of our proposed algorithm on real-world datasets.
APA
Chen, Y., Zhang, S. & Kian Hsiang Low, B. (2022). Near-Optimal Task Selection for Meta-Learning with Mutual Information and Online Variational Bayesian Unlearning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:9091-9113. Available from https://proceedings.mlr.press/v151/chen22h.html.