Heteroscedastic Gaussian Processes and Random Features: Scalable Motion Primitives with Guarantees

Edoardo Caldarelli, Antoine Chatalic, Adrià Colomé, Lorenzo Rosasco, Carme Torras
Proceedings of The 7th Conference on Robot Learning, PMLR 229:3010-3029, 2023.

Abstract

Heteroscedastic Gaussian processes (HGPs) are kernel-based, non-parametric models that can be used to infer nonlinear functions with time-varying noise. In robotics, they can be employed for learning from demonstration as motion primitives, i.e., as a model of the trajectories to be executed by the robot. HGPs provide variance estimates around the reference signal modeling the trajectory, capturing both the predictive uncertainty and the motion variability. However, like standard Gaussian processes, they suffer from cubic complexity in the number of training points, due to the inversion of the kernel matrix. The uncertainty can be leveraged for more complex learning tasks, such as inferring the variable impedance profile required from a robotic manipulator. However, suitable approximations are needed to make HGPs scalable, at the price of potentially worsening the posterior mean and variance profiles. Motivated by these observations, we study the combination of HGPs and random features, a popular, data-independent strategy for approximating kernel functions. In a theoretical analysis, we provide novel guarantees on the approximation error of the HGP posterior due to random features. Moreover, we validate this scalable motion primitive on real robot data, related to the problem of variable impedance learning. In this way, we show that random features offer a viable and theoretically sound alternative for speeding up trajectory processing, without sacrificing accuracy.
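The random-feature approximation referred to in the abstract can be illustrated with random Fourier features for an RBF kernel. The sketch below is not the paper's implementation; function names, the lengthscale, and the feature count are illustrative choices. It builds an explicit feature map z(x) such that z(x)·z(y) approximates k(x, y), which is what lets kernel-matrix operations be replaced by cheaper linear algebra in the feature dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, n_features, lengthscale=1.0, rng=rng):
    """Random Fourier feature map approximating the RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)) by z(x) @ z(y)."""
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density (a Gaussian),
    # plus uniform phase shifts, per Rahimi & Recht's construction.
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the approximate Gram matrix with the exact one on toy inputs.
X = rng.normal(size=(50, 2))
Z = rff_map(X, n_features=2000)
K_exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
K_approx = Z @ Z.T
print(np.abs(K_exact - K_approx).max())  # error shrinks as n_features grows
```

With n training points and D features, posterior computations then cost O(nD² + D³) instead of O(n³), which is the scalability gain the paper analyzes in the heteroscedastic setting.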

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-caldarelli23a,
  title = {Heteroscedastic Gaussian Processes and Random Features: Scalable Motion Primitives with Guarantees},
  author = {Caldarelli, Edoardo and Chatalic, Antoine and Colom\'{e}, Adri\`{a} and Rosasco, Lorenzo and Torras, Carme},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages = {3010--3029},
  year = {2023},
  editor = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume = {229},
  series = {Proceedings of Machine Learning Research},
  month = {06--09 Nov},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v229/caldarelli23a/caldarelli23a.pdf},
  url = {https://proceedings.mlr.press/v229/caldarelli23a.html},
  abstract = {Heteroscedastic Gaussian processes (HGPs) are kernel-based, non-parametric models that can be used to infer nonlinear functions with time-varying noise. In robotics, they can be employed for learning from demonstration as motion primitives, i.e. as a model of the trajectories to be executed by the robot. HGPs provide variance estimates around the reference signal modeling the trajectory, capturing both the predictive uncertainty and the motion variability. However, similarly to standard Gaussian processes they suffer from a cubic complexity in the number of training points, due to the inversion of the kernel matrix. The uncertainty can be leveraged for more complex learning tasks, such as inferring the variable impedance profile required from a robotic manipulator. However, suitable approximations are needed to make HGPs scalable, at the price of potentially worsening the posterior mean and variance profiles. Motivated by these observations, we study the combination of HGPs and random features, which are a popular, data-independent approximation strategy of kernel functions. In a theoretical analysis, we provide novel guarantees on the approximation error of the HGP posterior due to random features. Moreover, we validate this scalable motion primitive on real robot data, related to the problem of variable impedance learning. In this way, we show that random features offer a viable and theoretically sound alternative for speeding up the trajectory processing, without sacrificing accuracy.}
}
Endnote
%0 Conference Paper
%T Heteroscedastic Gaussian Processes and Random Features: Scalable Motion Primitives with Guarantees
%A Edoardo Caldarelli
%A Antoine Chatalic
%A Adrià Colomé
%A Lorenzo Rosasco
%A Carme Torras
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-caldarelli23a
%I PMLR
%P 3010--3029
%U https://proceedings.mlr.press/v229/caldarelli23a.html
%V 229
%X Heteroscedastic Gaussian processes (HGPs) are kernel-based, non-parametric models that can be used to infer nonlinear functions with time-varying noise. In robotics, they can be employed for learning from demonstration as motion primitives, i.e. as a model of the trajectories to be executed by the robot. HGPs provide variance estimates around the reference signal modeling the trajectory, capturing both the predictive uncertainty and the motion variability. However, similarly to standard Gaussian processes they suffer from a cubic complexity in the number of training points, due to the inversion of the kernel matrix. The uncertainty can be leveraged for more complex learning tasks, such as inferring the variable impedance profile required from a robotic manipulator. However, suitable approximations are needed to make HGPs scalable, at the price of potentially worsening the posterior mean and variance profiles. Motivated by these observations, we study the combination of HGPs and random features, which are a popular, data-independent approximation strategy of kernel functions. In a theoretical analysis, we provide novel guarantees on the approximation error of the HGP posterior due to random features. Moreover, we validate this scalable motion primitive on real robot data, related to the problem of variable impedance learning. In this way, we show that random features offer a viable and theoretically sound alternative for speeding up the trajectory processing, without sacrificing accuracy.
APA
Caldarelli, E., Chatalic, A., Colomé, A., Rosasco, L. & Torras, C. (2023). Heteroscedastic Gaussian Processes and Random Features: Scalable Motion Primitives with Guarantees. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:3010-3029. Available from https://proceedings.mlr.press/v229/caldarelli23a.html.
