DiffTune$^+$: Hyperparameter-Free Auto-Tuning using Auto-Differentiation

Sheng Cheng, Lin Song, Minkyung Kim, Shenlong Wang, Naira Hovakimyan
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:170-183, 2023.

Abstract

Controller tuning is a vital step to ensure a controller delivers its designed performance. DiffTune has been proposed as an automatic tuning method that unrolls the dynamical system and controller into a computational graph and uses auto-differentiation to obtain the gradient for the controller’s parameter update. However, DiffTune uses vanilla gradient descent to iteratively update the parameters, whose performance largely depends on the choice of learning rate (a hyperparameter). In this paper, we propose hyperparameter-free methods to update the controller parameters. We find the optimal parameter update by maximizing the loss reduction, where a predicted loss based on the approximated states and controls is used for the maximization. Two methods are proposed to optimally update the parameters and are compared with related variants in simulations on a Dubins car and a quadrotor. Simulation experiments show that the proposed first-order method outperforms the hyperparameter-based methods and is more robust than the second-order hyperparameter-free methods.
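To make the abstract's pipeline concrete, below is a minimal sketch (not the authors' implementation) of the two ingredients it describes: unrolling the system and controller into one differentiable computation graph, and a hyperparameter-free parameter update obtained by minimizing a first-order predicted loss. It uses JAX on a toy double integrator with a PD controller; all names (dynamics, controller, rollout, difftune_plus_step) and the Gauss-Newton-style least-squares solve are illustrative assumptions consistent with the abstract, not the paper's exact algorithm.

# Minimal sketch of the DiffTune+ idea, assuming a quadratic tracking loss.
import jax
import jax.numpy as jnp

dt, N = 0.1, 50                        # step size and horizon (assumed)

def dynamics(x, u):
    # x = [position, velocity]; discrete-time double integrator
    return jnp.array([x[0] + dt * x[1], x[1] + dt * u])

def controller(x, x_des, theta):
    # PD law; theta = [kp, kd] are the tunable controller parameters
    return theta[0] * (x_des[0] - x[0]) + theta[1] * (x_des[1] - x[1])

def rollout(theta, x0, x_des):
    # Unroll system + controller into one differentiable computation graph
    def step(x, x_d):
        x_next = dynamics(x, controller(x, x_d, theta))
        return x_next, x_next
    _, xs = jax.lax.scan(step, x0, x_des)
    return xs                          # (N, 2) state trajectory

def loss(theta, x0, x_des):
    return jnp.sum((rollout(theta, x0, x_des) - x_des) ** 2)

def difftune_plus_step(theta, x0, x_des):
    # First-order predicted states: xs + J @ dtheta, with J the state
    # sensitivity from auto-differentiation. Minimizing the predicted
    # quadratic loss over dtheta is then a least-squares problem, so the
    # update needs no learning rate.
    xs = rollout(theta, x0, x_des)
    J = jax.jacfwd(rollout)(theta, x0, x_des).reshape(-1, theta.size)
    r = (xs - x_des).reshape(-1)       # residuals of the current rollout
    dtheta, *_ = jnp.linalg.lstsq(J, -r)
    return theta + dtheta

theta = jnp.array([1.0, 1.0])
x0 = jnp.zeros(2)
x_des = jnp.stack([jnp.ones(N), jnp.zeros(N)], axis=1)  # step reference
for _ in range(10):
    theta = difftune_plus_step(theta, x0, x_des)
print(theta, loss(theta, x0, x_des))

Because the tracking loss is quadratic, the first-order predicted loss is minimized in closed form, which is where the "hyperparameter-free" property comes from in this sketch; the paper additionally incorporates approximated controls into the prediction and proposes two update methods, which this toy example does not reproduce.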

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-cheng23b,
  title     = {DiffTune$^+$: Hyperparameter-Free Auto-Tuning using Auto-Differentiation},
  author    = {Cheng, Sheng and Song, Lin and Kim, Minkyung and Wang, Shenlong and Hovakimyan, Naira},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {170--183},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/cheng23b/cheng23b.pdf},
  url       = {https://proceedings.mlr.press/v211/cheng23b.html},
  abstract  = {Controller tuning is a vital step to ensure a controller delivers its designed performance. DiffTune has been proposed as an automatic tuning method that unrolls the dynamical system and controller into a computational graph and uses auto-differentiation to obtain the gradient for the controller’s parameter update. However, DiffTune uses vanilla gradient descent to iteratively update the parameters, whose performance largely depends on the choice of learning rate (a hyperparameter). In this paper, we propose hyperparameter-free methods to update the controller parameters. We find the optimal parameter update by maximizing the loss reduction, where a predicted loss based on the approximated states and controls is used for the maximization. Two methods are proposed to optimally update the parameters and are compared with related variants in simulations on a Dubins car and a quadrotor. Simulation experiments show that the proposed first-order method outperforms the hyperparameter-based methods and is more robust than the second-order hyperparameter-free methods.}
}
Endnote
%0 Conference Paper
%T DiffTune$^+$: Hyperparameter-Free Auto-Tuning using Auto-Differentiation
%A Sheng Cheng
%A Lin Song
%A Minkyung Kim
%A Shenlong Wang
%A Naira Hovakimyan
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-cheng23b
%I PMLR
%P 170--183
%U https://proceedings.mlr.press/v211/cheng23b.html
%V 211
%X Controller tuning is a vital step to ensure a controller delivers its designed performance. DiffTune has been proposed as an automatic tuning method that unrolls the dynamical system and controller into a computational graph and uses auto-differentiation to obtain the gradient for the controller’s parameter update. However, DiffTune uses vanilla gradient descent to iteratively update the parameters, whose performance largely depends on the choice of learning rate (a hyperparameter). In this paper, we propose hyperparameter-free methods to update the controller parameters. We find the optimal parameter update by maximizing the loss reduction, where a predicted loss based on the approximated states and controls is used for the maximization. Two methods are proposed to optimally update the parameters and are compared with related variants in simulations on a Dubins car and a quadrotor. Simulation experiments show that the proposed first-order method outperforms the hyperparameter-based methods and is more robust than the second-order hyperparameter-free methods.
APA
Cheng, S., Song, L., Kim, M., Wang, S. & Hovakimyan, N. (2023). DiffTune$^+$: Hyperparameter-Free Auto-Tuning using Auto-Differentiation. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:170-183. Available from https://proceedings.mlr.press/v211/cheng23b.html.