Accelerating Convergence in Bayesian Few-Shot Classification

Tianjun Ke, Haoqun Cao, Feng Zhou
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:23391-23406, 2024.

Abstract

Bayesian few-shot classification has been a focal point in the field of few-shot learning. This paper seamlessly integrates mirror descent-based variational inference into Gaussian process-based few-shot classification, addressing the challenge of non-conjugate inference. By leveraging non-Euclidean geometry, mirror descent achieves accelerated convergence by providing the steepest descent direction along the corresponding manifold. It also exhibits the parameterization invariance property concerning the variational distribution. Experimental results demonstrate competitive classification accuracy, improved uncertainty quantification, and faster convergence compared to baseline models. Additionally, we investigate the impact of hyperparameters and components. Code is publicly available at https://github.com/keanson/MD-BSFC.
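The core idea behind the accelerated convergence claimed above is that mirror descent performs the gradient step in a geometry matched to the parameter space, rather than in Euclidean space. The paper applies this to variational inference for Gaussian processes; as a much simpler, self-contained illustration of the same principle, the sketch below runs entropic mirror descent (the exponentiated-gradient update induced by the negative-entropy mirror map) on the probability simplex. This is a generic textbook example, not the authors' implementation; the function name and the linear toy objective are assumptions for illustration only.

```python
import numpy as np

def mirror_descent_simplex(grad, w0, lr=0.5, steps=200):
    """Entropic mirror descent on the probability simplex.

    With the negative-entropy mirror map, the update becomes a
    multiplicative (exponentiated-gradient) step: it follows the
    steepest descent direction in the simplex geometry rather than
    in Euclidean space, and iterates stay on the simplex by design.
    """
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        g = grad(w)
        w = w * np.exp(-lr * g)   # gradient step in the dual (mirror) space
        w = w / w.sum()           # map back onto the simplex
    return w

# Toy problem: minimize the linear loss <c, w> over the simplex.
# The minimizer puts all mass on the coordinate with the smallest cost.
c = np.array([0.9, 0.1, 0.5])
w = mirror_descent_simplex(lambda w: c, np.ones(3) / 3)
```

Because the update is expressed through the mirror map rather than through any particular coordinate system, the trajectory is unchanged under reparameterizations of the distribution, which is the parameterization-invariance property the abstract refers to.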

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-ke24a,
  title     = {Accelerating Convergence in {B}ayesian Few-Shot Classification},
  author    = {Ke, Tianjun and Cao, Haoqun and Zhou, Feng},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {23391--23406},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/ke24a/ke24a.pdf},
  url       = {https://proceedings.mlr.press/v235/ke24a.html},
  abstract  = {Bayesian few-shot classification has been a focal point in the field of few-shot learning. This paper seamlessly integrates mirror descent-based variational inference into Gaussian process-based few-shot classification, addressing the challenge of non-conjugate inference. By leveraging non-Euclidean geometry, mirror descent achieves accelerated convergence by providing the steepest descent direction along the corresponding manifold. It also exhibits the parameterization invariance property concerning the variational distribution. Experimental results demonstrate competitive classification accuracy, improved uncertainty quantification, and faster convergence compared to baseline models. Additionally, we investigate the impact of hyperparameters and components. Code is publicly available at https://github.com/keanson/MD-BSFC.}
}
Endnote
%0 Conference Paper
%T Accelerating Convergence in Bayesian Few-Shot Classification
%A Tianjun Ke
%A Haoqun Cao
%A Feng Zhou
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-ke24a
%I PMLR
%P 23391--23406
%U https://proceedings.mlr.press/v235/ke24a.html
%V 235
%X Bayesian few-shot classification has been a focal point in the field of few-shot learning. This paper seamlessly integrates mirror descent-based variational inference into Gaussian process-based few-shot classification, addressing the challenge of non-conjugate inference. By leveraging non-Euclidean geometry, mirror descent achieves accelerated convergence by providing the steepest descent direction along the corresponding manifold. It also exhibits the parameterization invariance property concerning the variational distribution. Experimental results demonstrate competitive classification accuracy, improved uncertainty quantification, and faster convergence compared to baseline models. Additionally, we investigate the impact of hyperparameters and components. Code is publicly available at https://github.com/keanson/MD-BSFC.
APA
Ke, T., Cao, H., & Zhou, F. (2024). Accelerating Convergence in Bayesian Few-Shot Classification. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:23391-23406. Available from https://proceedings.mlr.press/v235/ke24a.html.