DeepMPCVS: Deep Model Predictive Control for Visual Servoing

Pushkal Katara, Harish YVS, Harit Pandya, Abhinav Gupta, AadilMehdi Sanchawala, Gourav Kumar, Brojeshwar Bhowmick, Madhava Krishna
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:2006-2015, 2021.

Abstract

The simplicity of the visual servoing approach makes it an attractive option for tasks dealing with vision-based control of robots in many real-world applications. However, attaining precise alignment in unseen environments poses a challenge to existing visual servoing approaches. While classical approaches assume a perfect world, recent data-driven approaches face issues when generalizing to novel environments. In this paper, we aim to combine the best of both worlds. We present a deep model predictive visual servoing framework that can achieve precise alignment with optimal trajectories and can generalize to novel environments. Our framework consists of a deep network for optical flow predictions, which are used along with a predictive model to forecast future optical flow. To generate an optimal set of velocities, we present a control network that can be trained on the fly without any supervision. Through extensive simulations on photo-realistic indoor settings of the popular Habitat framework, we show a significant performance gain due to the proposed formulation vis-à-vis recent state-of-the-art methods. Specifically, we show vastly improved performance in trajectory length and faster convergence over recent approaches.
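
The loop below illustrates the receding-horizon idea in the abstract: measure the flow toward the goal image, forecast the flow a candidate velocity sequence would induce, and optimize a small control network online against that forecast, executing only the first velocity. It is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the paper's learned flow network and flow predictor are replaced here by a random target flow and a hand-coded first-order interaction-matrix model, and names such as `ControlNet`, `interaction_matrix`, and `HORIZON` are hypothetical.

```python
# Minimal sketch of a deep-MPC visual-servoing cycle. Assumptions (not from
# the paper): constant scene depth, an analytic interaction-matrix flow
# model in place of the learned flow predictor, and a random target flow in
# place of the flow network's output between current and goal images.
import torch
import torch.nn as nn

H, W, HORIZON = 32, 32, 5  # coarse flow-grid size and prediction horizon

def interaction_matrix(depth: float = 1.0) -> torch.Tensor:
    """Stack the classic IBVS interaction matrix L(x, y, Z) over a pixel
    grid, so that flow ~= L @ v for a 6-DoF camera velocity v."""
    ys, xs = torch.meshgrid(torch.linspace(-1.0, 1.0, H),
                            torch.linspace(-1.0, 1.0, W), indexing="ij")
    x, y, Z = xs.reshape(-1), ys.reshape(-1), depth
    zeros, ones = torch.zeros_like(x), torch.ones_like(x)
    Lx = torch.stack([-ones / Z, zeros, x / Z, x * y, -(1 + x**2), y], dim=-1)
    Ly = torch.stack([zeros, -ones / Z, y / Z, 1 + y**2, -x * y, -x], dim=-1)
    return torch.cat([Lx, Ly], dim=0)  # shape (2*H*W, 6)

class ControlNet(nn.Module):
    """Tiny network that emits a horizon of 6-DoF velocities. Its weights,
    not a dataset, are what gets optimized on the fly each control cycle."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                                 nn.Linear(64, HORIZON * 6))

    def forward(self) -> torch.Tensor:
        return self.net(torch.ones(1)).view(HORIZON, 6)

L = interaction_matrix()
# Stand-in for the flow network's output between current and goal images.
target_flow = torch.randn(2 * H * W)

ctrl = ControlNet()
opt = torch.optim.Adam(ctrl.parameters(), lr=1e-2)

for step in range(100):  # unsupervised inner optimization of one MPC cycle
    v = ctrl()                          # (HORIZON, 6) candidate velocities
    pred_flow = (L @ v.T).sum(dim=1)    # first-order forecast of total flow
    loss = nn.functional.mse_loss(pred_flow, target_flow)
    opt.zero_grad()
    loss.backward()
    opt.step()

v_exec = ctrl()[0].detach()  # receding horizon: execute only the first step
```

The point the sketch preserves is that no ground-truth velocities are needed: the loss compares the forecast flow to the measured goal flow, which is what lets the control network adapt on the fly in a novel scene.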

Cite this Paper

BibTeX
@InProceedings{pmlr-v155-katara21a,
  title     = {DeepMPCVS: Deep Model Predictive Control for Visual Servoing},
  author    = {Katara, Pushkal and YVS, Harish and Pandya, Harit and Gupta, Abhinav and Sanchawala, AadilMehdi and Kumar, Gourav and Bhowmick, Brojeshwar and Krishna, Madhava},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {2006--2015},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/katara21a/katara21a.pdf},
  url       = {https://proceedings.mlr.press/v155/katara21a.html},
  abstract  = {The simplicity of the visual servoing approach makes it an attractive option for tasks dealing with vision-based control of robots in many real-world applications. However, attaining precise alignment in unseen environments poses a challenge to existing visual servoing approaches. While classical approaches assume a perfect world, recent data-driven approaches face issues when generalizing to novel environments. In this paper, we aim to combine the best of both worlds. We present a deep model predictive visual servoing framework that can achieve precise alignment with optimal trajectories and can generalize to novel environments. Our framework consists of a deep network for optical flow predictions, which are used along with a predictive model to forecast future optical flow. To generate an optimal set of velocities, we present a control network that can be trained on the fly without any supervision. Through extensive simulations on photo-realistic indoor settings of the popular Habitat framework, we show a significant performance gain due to the proposed formulation vis-à-vis recent state-of-the-art methods. Specifically, we show vastly improved performance in trajectory length and faster convergence over recent approaches.}
}
Endnote
%0 Conference Paper
%T DeepMPCVS: Deep Model Predictive Control for Visual Servoing
%A Pushkal Katara
%A Harish YVS
%A Harit Pandya
%A Abhinav Gupta
%A AadilMehdi Sanchawala
%A Gourav Kumar
%A Brojeshwar Bhowmick
%A Madhava Krishna
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-katara21a
%I PMLR
%P 2006--2015
%U https://proceedings.mlr.press/v155/katara21a.html
%V 155
%X The simplicity of the visual servoing approach makes it an attractive option for tasks dealing with vision-based control of robots in many real-world applications. However, attaining precise alignment in unseen environments poses a challenge to existing visual servoing approaches. While classical approaches assume a perfect world, recent data-driven approaches face issues when generalizing to novel environments. In this paper, we aim to combine the best of both worlds. We present a deep model predictive visual servoing framework that can achieve precise alignment with optimal trajectories and can generalize to novel environments. Our framework consists of a deep network for optical flow predictions, which are used along with a predictive model to forecast future optical flow. To generate an optimal set of velocities, we present a control network that can be trained on the fly without any supervision. Through extensive simulations on photo-realistic indoor settings of the popular Habitat framework, we show a significant performance gain due to the proposed formulation vis-à-vis recent state-of-the-art methods. Specifically, we show vastly improved performance in trajectory length and faster convergence over recent approaches.
APA
Katara, P., YVS, H., Pandya, H., Gupta, A., Sanchawala, A., Kumar, G., Bhowmick, B. & Krishna, M. (2021). DeepMPCVS: Deep Model Predictive Control for Visual Servoing. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:2006-2015. Available from https://proceedings.mlr.press/v155/katara21a.html.
