- title: 'Preface'
volume: 144
URL: https://proceedings.mlr.press/v144/jadbabaie21a.html
PDF: http://proceedings.mlr.press/v144/jadbabaie21a/jadbabaie21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-jadbabaie21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1-5
id: jadbabaie21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1
lastpage: 5
published: 2021-05-29 00:00:00 +0000
- title: 'On the Model-Based Stochastic Value Gradient for Continuous Reinforcement Learning'
abstract: 'Model-based reinforcement learning approaches add explicit domain knowledge to agents in the hope of improving sample efficiency relative to model-free agents. However, in practice, model-based methods are unable to achieve the same asymptotic performance on challenging continuous control tasks, due to the complexity of learning and controlling an explicit world model. In this paper we investigate the stochastic value gradient (SVG), a well-known family of methods for controlling continuous systems that includes model-based approaches distilling a model-based value expansion into a model-free policy. We consider a variant of the model-based SVG that scales to larger systems and uses 1) entropy regularization to help with exploration, 2) a learned deterministic world model to improve the short-horizon value estimate, and 3) a learned model-free value estimate after the model’s rollout. This SVG variant captures the model-free soft actor-critic method as the instance with a zero model rollout horizon, and otherwise uses short-horizon model rollouts to improve the value estimate for the policy update. We surpass the asymptotic performance of other model-based methods on the proprioceptive MuJoCo locomotion tasks from the OpenAI gym, including a humanoid. Notably, we achieve these results with a simple deterministic world model, without requiring an ensemble.'
volume: 144
URL: https://proceedings.mlr.press/v144/amos21a.html
PDF: http://proceedings.mlr.press/v144/amos21a/amos21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-amos21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Brandon
family: Amos
- given: Samuel
family: Stanton
- given: Denis
family: Yarats
- given: Andrew Gordon
family: Wilson
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 6-20
id: amos21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 6
lastpage: 20
published: 2021-05-29 00:00:00 +0000
- title: 'Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning'
abstract: 'A fundamental challenge in reinforcement learning is to learn policies that generalize beyond the operating domains experienced during training. In this paper, we approach this challenge through the following invariance principle: an agent must find a representation such that there exists an action-predictor built on top of this representation that is simultaneously optimal across all training domains. Intuitively, the resulting invariant policy enhances generalization by identifying the causes of successful actions. We propose a novel learning algorithm, Invariant Policy Optimization (IPO), that implements this principle and learns an invariant policy during training. We compare our approach with standard policy gradient methods and demonstrate significant improvements in generalization performance on unseen domains for linear quadratic regulator and grid-world problems, and an example where a robot must learn to open doors with varying physical properties.'
volume: 144
URL: https://proceedings.mlr.press/v144/sonar21a.html
PDF: http://proceedings.mlr.press/v144/sonar21a/sonar21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-sonar21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Anoopkumar
family: Sonar
- given: Vincent
family: Pacelli
- given: Anirudha
family: Majumdar
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 21-33
id: sonar21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 21
lastpage: 33
published: 2021-05-29 00:00:00 +0000
- title: 'Learning-based State Reconstruction for a Scalar Hyperbolic PDE under noisy Lagrangian Sensing'
abstract: 'The state reconstruction problem of a heterogeneous dynamic system under sporadic measurements is considered. This system consists of a conservation flow together with a multi-agent network modeling particles within the flow. We propose a partial-state reconstruction algorithm using physics-informed learning based on local measurements obtained from these agents. Traffic density reconstruction is used as an example to illustrate the results, and it is shown that the approach provides efficient noise rejection.'
volume: 144
URL: https://proceedings.mlr.press/v144/barreau21a.html
PDF: http://proceedings.mlr.press/v144/barreau21a/barreau21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-barreau21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Matthieu
family: Barreau
- given: John
family: Liu
- given: Karl Henrik
family: Johansson
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 34-46
id: barreau21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 34
lastpage: 46
published: 2021-05-29 00:00:00 +0000
- title: 'Nonlinear Two-Time-Scale Stochastic Approximation: Convergence and Finite-Time Performance'
abstract: 'Two-time-scale stochastic approximation, a generalized version of the popular stochastic approximation, has found broad applications in many areas including stochastic control, optimization, and machine learning. Despite its popularity, theoretical guarantees of this method, especially its finite-time performance, are mostly achieved for the linear case, while results for the nonlinear counterpart are very sparse. Motivated by the classic control theory for singularly perturbed systems, in this paper we study the asymptotic convergence and finite-time performance of the nonlinear two-time-scale stochastic approximation. Under some fairly standard assumptions, we provide a formula that characterizes the rate of convergence of the main iterates to the desired solutions. In particular, we show that the method converges in expectation at a rate O(1/k^{2/3}), where k is the number of iterations. The key idea in our analysis is to properly choose the two step sizes to characterize the coupling between the fast- and slow-time-scale iterates.'
volume: 144
URL: https://proceedings.mlr.press/v144/doan21a.html
PDF: http://proceedings.mlr.press/v144/doan21a/doan21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-doan21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Thinh T.
family: Doan
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 47-47
id: doan21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 47
lastpage: 47
published: 2021-05-29 00:00:00 +0000
- title: 'Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions'
abstract: 'In this paper, we present an improved analysis for the dynamic regret of strongly convex and smooth functions. Specifically, we investigate the Online Multiple Gradient Descent (OMGD) algorithm proposed by Zhang et al. (2017). The original analysis shows that the dynamic regret of OMGD is at most O(min{P_T,S_T}), where P_T and S_T are the path-length and squared path-length that measure the cumulative movement of minimizers of the online functions. We demonstrate that, by an improved analysis, the dynamic regret of OMGD can be improved to O(min{P_T,S_T,V_T}), where V_T is the function variation of the online functions. Note that the quantities P_T, S_T, V_T essentially reflect different aspects of environmental non-stationarity: they are not comparable in general and are favored in different scenarios. Therefore, the dynamic regret bound presented in this paper actually achieves a \emph{best-of-three-worlds} guarantee and is strictly tighter than previous results.'
volume: 144
URL: https://proceedings.mlr.press/v144/zhao21a.html
PDF: http://proceedings.mlr.press/v144/zhao21a/zhao21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-zhao21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Peng
family: Zhao
- given: Lijun
family: Zhang
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 48-59
id: zhao21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 48
lastpage: 59
published: 2021-05-29 00:00:00 +0000
- title: 'Learning Partially Observed Linear Dynamical Systems from Logarithmic Number of Samples'
abstract: 'In this work, we study the problem of learning partially observed linear dynamical systems from a single sample trajectory. A major practical challenge in existing system identification methods is the undesirable dependency of their required sample size on the system dimension: roughly speaking, they presume and rely on sample sizes that scale linearly with the system dimension. Evidently, in the high-dimensional regime where the system dimension is large, it may be costly, if not impossible, to collect as many samples from the unknown system. In this paper, we introduce a regularized estimator that can accurately estimate the Markov parameters of the system, provided that the number of samples scales poly-logarithmically with the system dimension. Our result significantly improves the sample complexity of learning partially observed linear dynamical systems: it shows that the Markov parameters of the system can be learned in the high-dimensional setting, where the number of samples is significantly smaller than the system dimension.'
volume: 144
URL: https://proceedings.mlr.press/v144/fattahi21a.html
PDF: http://proceedings.mlr.press/v144/fattahi21a/fattahi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-fattahi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Salar
family: Fattahi
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 60-72
id: fattahi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 60
lastpage: 72
published: 2021-05-29 00:00:00 +0000
- title: 'Estimating Disentangled Belief about Hidden State and Hidden Task for Meta-Reinforcement Learning'
abstract: 'There is considerable interest in designing meta-reinforcement learning (meta-RL) algorithms, which enable autonomous agents to adapt to new tasks from a small amount of experience. In meta-RL, the specification (such as the reward function) of the current task is hidden from the agent. In addition, states are hidden within each task owing to sensor noise or limitations of realistic environments. Therefore, the meta-RL agent faces the challenge of inferring both the hidden task and the hidden states from a small amount of experience. To address this, we propose estimating a disentangled belief about the task and states, leveraging the inductive bias that the task and states can be regarded as global and local features of each task. Specifically, we train a hierarchical state-space model (HSSM) parameterized by deep neural networks as an environment model, whose global and local latent variables correspond to the task and states, respectively. Because the HSSM does not allow analytical computation of the posterior distribution, i.e., the belief, we employ amortized inference to approximate it. Once the belief is obtained, we can augment the observations of a model-free policy with the belief to train the policy efficiently. Moreover, because the task and state information are factorized and interpretable, downstream policy training is facilitated compared with prior methods that did not consider the hierarchical nature. Empirical validations on a GridWorld environment confirm that the HSSM can separate the hidden task and state information. We then compare the meta-RL agent with the HSSM to prior meta-RL methods in MuJoCo environments, and confirm that our agent requires less training data and reaches higher final performance.'
volume: 144
URL: https://proceedings.mlr.press/v144/akuzawa21a.html
PDF: http://proceedings.mlr.press/v144/akuzawa21a/akuzawa21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-akuzawa21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Kei
family: Akuzawa
- given: Yusuke
family: Iwasawa
- given: Yutaka
family: Matsuo
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 73-86
id: akuzawa21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 73
lastpage: 86
published: 2021-05-29 00:00:00 +0000
- title: 'The benefits of sharing: a cloud-aided performance-driven framework to learn optimal feedback policies'
abstract: 'Mass-produced self-regulating systems are constructed and calibrated to be nominally the same and have similar goals. When several of them can share information with the cloud, their similarities can be exploited to improve the design of individual control policies. In this multi-agent framework, we aim to exploit these similarities and the connection to the cloud to solve a sharing-based control policy optimization, so as to leverage information provided by “trustworthy” agents. In this paper, we propose to combine the optimal policy search method introduced in (Ferrarotti and Bemporad, 2019) with the Alternating Direction Method of Multipliers, relying on weighted surrogates of the experiences of each device, shared with the cloud. A preliminary example shows the effectiveness of the proposed sharing-based method, which results in improved performance with respect to that attained when neglecting the similarities among devices and when enforcing consensus among their policies.'
volume: 144
URL: https://proceedings.mlr.press/v144/ferrarotti21a.html
PDF: http://proceedings.mlr.press/v144/ferrarotti21a/ferrarotti21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ferrarotti21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Laura
family: Ferrarotti
- given: Valentina
family: Breschi
- given: Alberto
family: Bemporad
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 87-98
id: ferrarotti21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 87
lastpage: 98
published: 2021-05-29 00:00:00 +0000
- title: 'Data-driven design of switching reference governors for brake-by-wire applications'
abstract: 'Nowadays, data are ubiquitous in control design, and data-driven approaches are in constant evolution. Following this trend, in this paper we propose an approach for the direct data-driven design of switching reference governors for nonlinear plants, and we apply it to a brake-by-wire application. The braking system is assumed to be pre-stabilized via a simple unknown controller attaining unsatisfactory performance in terms of output tracking and actuator effort. Hence, the reference governor is used to improve the overall closed-loop behavior, resulting in safer maneuvering. Preliminary results on a simulation setup show the effectiveness of the proposed strategy, thus motivating further investigation on the topic.'
volume: 144
URL: https://proceedings.mlr.press/v144/sassella21a.html
PDF: http://proceedings.mlr.press/v144/sassella21a/sassella21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-sassella21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Andrea
family: Sassella
- given: Valentina
family: Breschi
- given: Simone
family: Formentin
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 99-110
id: sassella21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 99
lastpage: 110
published: 2021-05-29 00:00:00 +0000
- title: 'Graph Neural Networks for Distributed Linear-Quadratic Control'
abstract: 'The linear-quadratic control problem is one of the fundamental problems in control theory. The optimal solution is a linear controller that requires access to the state of the entire system at any given time. When considering a network system, this renders the optimal controller a centralized one. The interconnected nature of a network system often demands a distributed controller, where different components of the system are controlled based only on local information. Unlike the classical centralized case, obtaining the optimal distributed controller is usually an intractable problem. Thus, we adopt a graph neural network (GNN) as a parametrization of distributed controllers. GNNs are naturally local and have distributed architectures, making them well suited for learning nonlinear distributed controllers. By casting the linear-quadratic problem as a self-supervised learning problem, we are able to find the best GNN-based distributed controller. We also derive sufficient conditions for the resulting closed-loop system to be stable. We run extensive simulations to study the performance of GNN-based distributed controllers and show that they are a computationally efficient parametrization with scalability and transferability capabilities.'
volume: 144
URL: https://proceedings.mlr.press/v144/gama21a.html
PDF: http://proceedings.mlr.press/v144/gama21a/gama21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-gama21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Fernando
family: Gama
- given: Somayeh
family: Sojoudi
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 111-124
id: gama21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 111
lastpage: 124
published: 2021-05-29 00:00:00 +0000
- title: 'Learning to Actively Reduce Memory Requirements for Robot Control Tasks'
abstract: 'Robots equipped with rich sensing modalities (e.g., RGB-D cameras) performing long-horizon tasks motivate the need for policies that are highly memory-efficient. State-of-the-art approaches for controlling robots often use memory representations that are excessively rich for the task or rely on handcrafted tricks for memory efficiency. Instead, this work provides a general approach for jointly synthesizing memory representations and policies; the resulting policies actively seek to reduce memory requirements. Specifically, we present a reinforcement learning framework that leverages an implementation of the group LASSO regularization to synthesize policies that employ low-dimensional and task-centric memory representations. We demonstrate the efficacy of our approach with simulated examples including navigation in discrete and continuous spaces as well as vision-based indoor navigation set in a photo-realistic simulator. The results on these examples indicate that our method is capable of finding policies that rely only on low-dimensional memory representations, improving generalization, and actively reducing memory requirements.'
volume: 144
URL: https://proceedings.mlr.press/v144/booker21a.html
PDF: http://proceedings.mlr.press/v144/booker21a/booker21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-booker21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Meghan
family: Booker
- given: Anirudha
family: Majumdar
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 125-137
id: booker21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 125
lastpage: 137
published: 2021-05-29 00:00:00 +0000
- title: 'Non-conservative Design of Robust Tracking Controllers Based on Input-output Data'
abstract: 'This paper studies worst-case robust optimal tracking using noisy input-output data. We utilize behavioral system theory to represent system trajectories, while avoiding explicit system identification. We assume that the recent output data used in the data-dependent representation are noisy and we provide a non-conservative design procedure for robust control based on optimization with a linear cost and LMI constraints. Our methods rely on the parameterization of noise sequences compatible with the data-dependent system representation and on a suitable reformulation of the performance specification, which further enable the application of the S-lemma to derive an LMI optimization problem. The performance of the new controller is discussed through simulations.'
volume: 144
URL: https://proceedings.mlr.press/v144/xu21a.html
PDF: http://proceedings.mlr.press/v144/xu21a/xu21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-xu21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Liang
family: Xu
- given: Mustafa Sahin
family: Turan
- given: Baiwei
family: Guo
- given: Giancarlo
family: Ferrari-Trecate
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 138-149
id: xu21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 138
lastpage: 149
published: 2021-05-29 00:00:00 +0000
- title: 'Optimal Algorithms for Submodular Maximization with Distributed Constraints'
abstract: 'We consider a class of discrete optimization problems that aim to maximize a submodular objective function subject to a distributed partition matroid constraint. More precisely, we consider a networked scenario in which multiple agents choose actions from local strategy sets with the goal of maximizing a submodular objective function defined over the set of all possible actions. Given this distributed setting, we develop Constraint-Distributed Continuous Greedy (CDCG), a message passing algorithm that converges to the tight (1-1/e) approximation factor of the optimum global solution using only local computation and communication. It is known that a sequential greedy algorithm can only achieve a 1/2 multiplicative approximation of the optimal solution for this class of problems in the distributed setting. Our framework relies on lifting the discrete problem to a continuous domain and developing a consensus algorithm that achieves the tight (1-1/e) approximation guarantee of the global discrete solution once a proper rounding scheme is applied. We also offer empirical results from a multi-agent area coverage problem to show that the proposed method significantly outperforms the state-of-the-art sequential greedy method.'
volume: 144
URL: https://proceedings.mlr.press/v144/robey21a.html
PDF: http://proceedings.mlr.press/v144/robey21a/robey21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-robey21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Alexander
family: Robey
- given: Arman
family: Adibi
- given: Brent
family: Schlotfeldt
- given: Hamed
family: Hassani
- given: George J.
family: Pappas
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 150-162
id: robey21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 150
lastpage: 162
published: 2021-05-29 00:00:00 +0000
- title: 'Data-Driven Reachability Analysis Using Matrix Zonotopes'
abstract: 'In this paper, we propose a data-driven reachability analysis approach for an unknown control system. Reachability analysis is an essential tool for guaranteeing safety properties. However, most current reachability analysis methods rely heavily on the existence of a suitable system model, which is often not directly available in practice. We instead propose a reachability analysis approach based on noisy data. More specifically, we first provide an algorithm for over-approximating the reachable set of a linear time-invariant system using matrix zonotopes. Then we introduce an extension for nonlinear systems. We provide theoretical guarantees in both cases. Numerical examples show the potential and applicability of the introduced methods.'
volume: 144
URL: https://proceedings.mlr.press/v144/alanwar21a.html
PDF: http://proceedings.mlr.press/v144/alanwar21a/alanwar21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-alanwar21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Amr
family: Alanwar
- given: Anne
family: Koch
- given: Frank
family: Allgöwer
- given: Karl Henrik
family: Johansson
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 163-175
id: alanwar21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 163
lastpage: 175
published: 2021-05-29 00:00:00 +0000
- title: 'Learning local modules in dynamic networks'
abstract: 'Over the last decade, the problem of data-driven modeling in linear dynamic networks has been introduced in the literature, and has shown to contain many different challenging research questions. The structural and topological properties of networks become a central ingredient in the data-driven modeling problem, as well as the selection of locations for signals to be sensed and for excitation signals to be added. In this survey-type paper we will present an overview of recent results that are obtained for the problem of learning the dynamics of a single link/module in a dynamic network of which the topology is given. The surveyed methods include extensions of classical identification methods, combined with Bayesian kernel-based methods. Particular attention will be given to the selection of signals that need to be available for measurement/excitation, and accuracy properties of the estimated models in terms of consistency and minimum variance properties.'
volume: 144
URL: https://proceedings.mlr.press/v144/van-den-hof21a.html
PDF: http://proceedings.mlr.press/v144/van-den-hof21a/van-den-hof21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-van-den-hof21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Paul M.J.
family: Van den Hof
- given: Karthik R.
family: Ramaswamy
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo A.
  family: Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 176-188
id: van-den-hof21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 176
lastpage: 188
published: 2021-05-29 00:00:00 +0000
- title: 'Data-Driven System Level Synthesis'
abstract: 'We establish data-driven versions of the System Level Synthesis (SLS) parameterization of stabilizing controllers for linear time-invariant systems. Inspired by recent work in data-driven control that leverages tools from behavioral theory, we show that optimization problems over system responses can be posed using only libraries of past system trajectories, without explicitly identifying a system model. We first consider the idealized setting of noise-free trajectories and show an exact equivalence between traditional and data-driven SLS. We then show that, in the case of a system driven by process noise, tools from robust SLS can be used to characterize the effects of noise on closed-loop performance. We then draw on tools from matrix concentration to show that a simple trajectory averaging technique can be used to mitigate these effects. We end with numerical experiments showing the soundness of our methods.'
volume: 144
URL: https://proceedings.mlr.press/v144/xue21a.html
PDF: http://proceedings.mlr.press/v144/xue21a/xue21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-xue21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Anton
family: Xue
- given: Nikolai
family: Matni
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 189-200
id: xue21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 189
lastpage: 200
published: 2021-05-29 00:00:00 +0000
- title: 'Learning Approximate Forward Reachable Sets Using Separating Kernels'
abstract: 'We present a data-driven method for computing approximate forward reachable sets using separating kernels in a reproducing kernel Hilbert space. We frame the problem as a support estimation problem, and learn a classifier of the support as an element in a reproducing kernel Hilbert space using a data-driven approach. Kernel methods provide a computationally efficient representation for the classifier that is the solution to a regularized least squares problem. The solution converges almost surely as the sample size increases, and admits known finite sample bounds. This approach is applicable to stochastic systems with arbitrary disturbances and neural network verification problems by treating the network as a dynamical system, or by considering neural network controllers as part of a closed-loop system. We present our technique on several examples, including a spacecraft rendezvous and docking problem, and two nonlinear system benchmarks with neural network controllers.'
volume: 144
URL: https://proceedings.mlr.press/v144/thorpe21a.html
PDF: http://proceedings.mlr.press/v144/thorpe21a/thorpe21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-thorpe21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Adam J.
family: Thorpe
- given: Kendric R.
family: Ortiz
- given: Meeko M. K.
family: Oishi
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 201-212
id: thorpe21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 201
lastpage: 212
published: 2021-05-29 00:00:00 +0000
- title: 'On Uninformative Optimal Policies in Adaptive LQR with Unknown B-Matrix'
abstract: 'This paper presents local asymptotic minimax regret lower bounds for adaptive Linear Quadratic Regulators (LQR). We consider affinely parametrized B-matrices and known A-matrices and aim to understand when logarithmic regret is impossible even in the presence of structural side information. After defining the intrinsic notion of an uninformative optimal policy in terms of a singularity condition for Fisher information we obtain local minimax regret lower bounds for such uninformative instances of LQR by appealing to van Trees’ inequality (Bayesian Cramér-Rao) and a representation of regret in terms of a quadratic form (Bellman error). It is shown that if the parametrization induces an uninformative optimal policy, logarithmic regret is impossible and the rate is at least order square root in the time horizon. We explicitly characterize the notion of an uninformative optimal policy in terms of the nullspaces of system-theoretic quantities and the particular instance parametrization.'
volume: 144
URL: https://proceedings.mlr.press/v144/ziemann21a.html
PDF: http://proceedings.mlr.press/v144/ziemann21a/ziemann21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ziemann21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Ingvar
family: Ziemann
- given: Henrik
family: Sandberg
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 213-226
id: ziemann21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 213
lastpage: 226
published: 2021-05-29 00:00:00 +0000
- title: 'Cautious Bayesian Optimization for Efficient and Scalable Policy Search'
abstract: 'Sample efficiency is one of the key factors when applying policy search to real-world problems. In recent years, Bayesian Optimization (BO) has become prominent in the field of robotics due to its sample efficiency and the little prior knowledge it requires. However, one drawback of BO is its poor performance on high-dimensional search spaces, as it focuses on global search. In the policy search setting, local optimization is typically sufficient as initial policies are often available, e.g., via meta-learning, kinesthetic demonstrations or sim-to-real approaches. In this paper, we propose to constrain the policy search space to a sublevel-set of the Bayesian surrogate model’s predictive uncertainty. This simple yet effective way of constraining the policy update enables BO to scale to high-dimensional spaces (>100) and reduces the risk of damaging the system. We demonstrate the effectiveness of our approach on a wide range of problems, including a motor skills task, adapting deep RL agents to new reward signals and a sim-to-real task for an inverted pendulum system.'
volume: 144
URL: https://proceedings.mlr.press/v144/frohlich21a.html
PDF: http://proceedings.mlr.press/v144/frohlich21a/frohlich21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-frohlich21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Lukas P.
family: Fröhlich
- given: Melanie N.
family: Zeilinger
- given: Edgar D.
family: Klenske
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 227-240
id: frohlich21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 227
lastpage: 240
published: 2021-05-29 00:00:00 +0000
- title: 'Nonlinear state-space identification using deep encoder networks'
abstract: 'Nonlinear state-space identification for dynamical systems is most often performed by minimizing the simulation error to reduce the effect of model errors. This optimization problem becomes computationally expensive for large datasets. Moreover, the problem is also strongly non-convex, often leading to sub-optimal parameter estimates. This paper introduces a method that approximates the simulation loss by splitting the data set into multiple independent sections, similar to the multiple shooting method. This splitting operation allows for the use of stochastic gradient optimization methods, which scale well with data set size and have a smoothing effect on the non-convex cost function. The main contribution of this paper is the introduction of an encoder function to estimate the initial state at the start of each section. The encoder function estimates the initial states using a feed-forward neural network starting from historical input and output samples. The efficiency and performance of the proposed state-space encoder method are illustrated on two well-known benchmarks where, for instance, the method achieves the lowest known simulation error on the Wiener–Hammerstein benchmark.'
volume: 144
URL: https://proceedings.mlr.press/v144/beintema21a.html
PDF: http://proceedings.mlr.press/v144/beintema21a/beintema21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-beintema21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Gerben
family: Beintema
- given: Roland
family: Toth
- given: Maarten
family: Schoukens
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 241-250
id: beintema21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 241
lastpage: 250
published: 2021-05-29 00:00:00 +0000
- title: 'Input Convex Neural Networks for Building MPC'
abstract: 'Model Predictive Control in buildings can significantly reduce their energy consumption. The cost and effort necessary for creating and maintaining first principle models for buildings make data-driven modelling an attractive alternative in this domain. In MPC the models form the basis for an optimization problem whose solution provides the control signals to be applied to the system. The fact that this optimization problem has to be solved repeatedly in real-time implies restrictions on the learning architectures that can be used. Here, we adapt Input Convex Neural Networks that are generally only convex for one-step predictions, for use in building MPC. We introduce additional constraints to their structure and weights to achieve a convex input-output relationship for multi-step ahead predictions. We assess the consequences of the additional constraints for the model accuracy and test the models in a real-life MPC experiment in an apartment in Switzerland. In two five-day cooling experiments, MPC with Input Convex Neural Networks is able to keep room temperatures within comfort constraints while minimizing cooling energy consumption.'
volume: 144
URL: https://proceedings.mlr.press/v144/bunning21a.html
PDF: http://proceedings.mlr.press/v144/bunning21a/bunning21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-bunning21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Felix
family: Bünning
- given: Adrian
family: Schalbetter
- given: Ahmed
family: Aboudonia
- given: Mathias Hudoba
prefix: de
family: Badyn
- given: Philipp
family: Heer
- given: John
family: Lygeros
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 251-262
id: bunning21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 251
lastpage: 262
published: 2021-05-29 00:00:00 +0000
- title: 'Abstraction-based branch and bound approach to Q-learning for hybrid optimal control'
abstract: 'In this paper, we design a theoretical framework that allows applying model predictive control to hybrid systems. To this end, we develop a theory of approximate dynamic programming by leveraging the concept of alternating simulation. We show how to combine these notions in a branch and bound algorithm that can further refine the Q-functions using Lagrangian duality. We illustrate the approach on a numerical example.'
volume: 144
URL: https://proceedings.mlr.press/v144/legat21a.html
PDF: http://proceedings.mlr.press/v144/legat21a/legat21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-legat21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Benoît
family: Legat
- given: Raphaël M.
family: Jungers
- given: Jean
family: Bouchat
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 263-274
id: legat21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 263
lastpage: 274
published: 2021-05-29 00:00:00 +0000
- title: 'A unified framework for Hamiltonian deep neural networks'
abstract: 'Training deep neural networks (DNNs) can be difficult due to the occurrence of vanishing/exploding gradients during weight optimization. To avoid this problem, we propose a class of DNNs stemming from the time discretization of Hamiltonian systems. The time-invariant version of Hamiltonian models enjoys marginal stability, a property that, as shown in previous studies, can eliminate convergence to zero or divergence of gradients. In the present paper, we formally show this feature by deriving and analysing the backward gradient dynamics in continuous time. The proposed Hamiltonian framework, besides encompassing existing networks inspired by marginally stable ODEs, allows one to derive new and more expressive architectures. The good performance of the novel DNNs is demonstrated on benchmark classification problems, including digit recognition using the MNIST dataset.'
volume: 144
URL: https://proceedings.mlr.press/v144/galimberti21a.html
PDF: http://proceedings.mlr.press/v144/galimberti21a/galimberti21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-galimberti21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Clara Lucía
family: Galimberti
- given: Liang
family: Xu
- given: Giancarlo Ferrari
family: Trecate
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 275-286
id: galimberti21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 275
lastpage: 286
published: 2021-05-29 00:00:00 +0000
- title: 'Data-Driven Controller Design via Finite-Horizon Dissipativity'
abstract: 'Given a single measured trajectory of a discrete-time linear time-invariant system, we present a framework for data-driven controller design for closed-loop finite-horizon dissipativity. First we parametrize all closed-loop trajectories using the given data of the plant and a model of the controller. We then provide an approach to validate the controller by verifying closed-loop dissipativity in the standard feedback loop based on this parametrization. The developed conditions allow us to state the corresponding controller synthesis problem as a quadratic matrix inequality feasibility problem. Hence, we obtain purely data-driven synthesis conditions leading to a desired closed-loop dissipativity property. Finally, the results are illustrated with a simulation example.'
volume: 144
URL: https://proceedings.mlr.press/v144/wieler21a.html
PDF: http://proceedings.mlr.press/v144/wieler21a/wieler21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-wieler21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Nils
family: Wieler
- given: Julian
family: Berberich
- given: Anne
family: Koch
- given: Frank
family: Allgöwer
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 287-298
id: wieler21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 287
lastpage: 298
published: 2021-05-29 00:00:00 +0000
- title: 'Safe Bayesian Optimisation for Controller Design by Utilising the Parameter Space Approach'
abstract: 'As control systems become more and more complex, the optimal tuning of control parameters using Bayesian Optimisation has gained increased research interest in recent years. Safe Bayesian Optimisation tries to prevent the sampling of unsafe parametrizations and therefore allows parameter tuning in real-world experiments. Usually this is achieved by approximating a safe set using probabilistic GPR predictions. In contrast, in this work analytical knowledge about robustly stable parameter configurations is gained via the parameter space approach and then incorporated into the optimisation as a constraint. Simulation results on a linear system with uncertain parameters show a significant performance gain compared to standard approaches.'
volume: 144
URL: https://proceedings.mlr.press/v144/dorschel21a.html
PDF: http://proceedings.mlr.press/v144/dorschel21a/dorschel21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-dorschel21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Lorenz
family: Dörschel
- given: David
family: Stenger
- given: Dirk
family: Abel
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 299-311
id: dorschel21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 299
lastpage: 311
published: 2021-05-29 00:00:00 +0000
- title: 'Tight sampling and discarding bounds for scenario programs with an arbitrary number of removed samples'
abstract: 'The so-called scenario approach offers an efficient framework for addressing uncertain optimisation problems in which uncertainty is represented by means of scenarios. The sampling-and-discarding approach within the scenario approach literature allows the decision maker to trade feasibility for performance. We focus on a removal scheme composed of a cascade of scenario programs that removes at each stage a superset of the support set associated with the optimal solution of each of these programs. This particular removal scheme yields a scenario solution with tight guarantees on the probability of constraint violation; however, existing analysis restricts the number of discarded scenarios to be a multiple of the dimension of the optimisation problem. Motivated by this fact, this paper presents pathways to extend the theoretical analysis of this removal scheme. We first provide an extension for a restricted class of scenario programs for which tight bounds can be obtained, and then we provide a conservative bound on the probability of constraint violation that is valid for any scenario program and an arbitrary number of removed scenarios, which is, however, not tight.'
volume: 144
URL: https://proceedings.mlr.press/v144/romao21a.html
PDF: http://proceedings.mlr.press/v144/romao21a/romao21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-romao21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Licio
family: Romao
- given: Kostas
family: Margellos
- given: Antonis
family: Papachristodoulou
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 312-323
id: romao21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 312
lastpage: 323
published: 2021-05-29 00:00:00 +0000
- title: 'Probabilistic robust linear quadratic regulators with Gaussian processes'
abstract: 'Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design. While learning-based control has the potential to yield superior performance in demanding applications, robustness to uncertainty remains an important challenge. Since Bayesian methods quantify uncertainty of the learning results, it is natural to incorporate these uncertainties in a robust design. In contrast to most state-of-the-art approaches that consider worst-case estimates, we leverage the learning methods’ posterior distribution in the controller synthesis. The result is a more informed and thus efficient trade-off between performance and robustness. We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin. The formulation is based on a recently proposed algorithm for linear quadratic control synthesis, which we extend by giving probabilistic robustness guarantees in the form of credibility bounds for the system’s stability. Comparisons to existing methods based on worst-case and certainty-equivalence designs reveal superior performance and robustness properties of the proposed method.'
volume: 144
URL: https://proceedings.mlr.press/v144/rohr21a.html
PDF: http://proceedings.mlr.press/v144/rohr21a/rohr21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-rohr21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Alexander
prefix: von
family: Rohr
- given: Matthias
family: Neumann-Brosig
- given: Sebastian
family: Trimpe
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 324-335
id: rohr21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 324
lastpage: 335
published: 2021-05-29 00:00:00 +0000
- title: 'Safe Reinforcement Learning of Control-Affine Systems with Vertex Networks'
abstract: 'This paper focuses on finding reinforcement learning policies for control systems with hard state and action constraints. Despite its success in many domains, reinforcement learning is challenging to apply to problems with hard constraints, especially if both the state variables and actions are constrained. Previous works seeking to ensure constraint satisfaction, or safety, have focused on adding a projection step to the policy during learning. Yet, this approach requires solving an optimization problem at every policy execution step, which can lead to significant computational costs and has no safety guarantee once the projection step is removed after training. To tackle this problem, this paper proposes a new approach, termed Vertex Networks (VNs), with guarantees on safety during both the exploration and execution stages, by incorporating the safety constraints into the policy network architecture. Leveraging the geometric property that all points within a convex set can be represented as the convex combination of its vertices, the proposed algorithm first learns the convex combination weights and then uses these weights along with the pre-calculated vertices to output an action. The output action is guaranteed to be safe by construction. Numerical examples illustrate that the proposed VN algorithm outperforms projection-based reinforcement learning methods.'
volume: 144
URL: https://proceedings.mlr.press/v144/zheng21a.html
PDF: http://proceedings.mlr.press/v144/zheng21a/zheng21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-zheng21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Liyuan
family: Zheng
- given: Yuanyuan
family: Shi
- given: Lillian J.
family: Ratliff
- given: Baosen
family: Zhang
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 336-347
id: zheng21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 336
lastpage: 347
published: 2021-05-29 00:00:00 +0000
- title: 'Sequential Topological Representations for Predictive Models of Deformable Objects'
abstract: 'Deformable objects present a formidable challenge for robotic manipulation due to the lack of canonical low-dimensional representations and the difficulty of capturing, predicting, and controlling such objects. We construct compact topological representations to capture the state of highly deformable objects that are topologically nontrivial. We develop an approach that tracks the evolution of this topological state through time. Under several mild assumptions, we prove that the topology of the scene and its evolution can be recovered from point clouds representing the scene. Our further contribution is a method to learn predictive models that take a sequence of past point cloud observations as input and predict a sequence of topological states, conditioned on target/future control actions. Our experiments with highly deformable objects in simulation show that the proposed multistep predictive models yield more precise results than those obtained from computational topology libraries. These models can leverage patterns inferred across various objects and offer fast multistep predictions suitable for real-time applications.'
volume: 144
URL: https://proceedings.mlr.press/v144/antonova21a.html
PDF: http://proceedings.mlr.press/v144/antonova21a/antonova21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-antonova21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Rika
family: Antonova
- given: Anastasia
family: Varava
- given: Peiyang
family: Shi
- given: J. Frederico
family: Carvalho
- given: Danica
family: Kragic
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 348-360
id: antonova21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 348
lastpage: 360
published: 2021-05-29 00:00:00 +0000
- title: 'Robust error bounds for quantised and pruned neural networks'
abstract: 'A new focus in machine learning is concerned with understanding the issues faced with implementing neural networks on low-cost and memory-limited hardware, for example smart phones. This approach falls under the umbrella of “decentralised” learning and, compared to the “centralised” case where data is collected and acted upon by a large server held offline, offers greater privacy protection and a faster reaction speed to incoming data. However, when neural networks are implemented on limited hardware there are no guarantees that their outputs will not be significantly corrupted. This problem is addressed in this talk where a semi-definite program is introduced to robustly bound the error induced by implementing neural networks on limited hardware. The method can be applied to generic neural networks and is able to account for the many nonlinearities of the problem. It is hoped that the computed bounds will give certainty to software/control/ML engineers implementing these algorithms efficiently on limited hardware.'
volume: 144
URL: https://proceedings.mlr.press/v144/li21a.html
PDF: http://proceedings.mlr.press/v144/li21a/li21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-li21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Jiaqi
family: Li
- given: Ross
family: Drummond
- given: Stephen R.
family: Duncan
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 361-372
id: li21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 361
lastpage: 372
published: 2021-05-29 00:00:00 +0000
- title: 'The Dynamics of Gradient Descent for Overparametrized Neural Networks'
abstract: 'We consider the dynamics of gradient descent (GD) in overparameterized single hidden layer neural networks with a squared loss function. Recently, it has been shown that, under some conditions, the parameter values obtained using GD achieve zero training error and generalize well if the initial conditions are chosen appropriately. Here, through a Lyapunov analysis, we show that the dynamics of neural network weights under GD converge to a point which is close to the minimum norm solution subject to the condition that there is no training error when using the linear approximation to the neural network.'
volume: 144
URL: https://proceedings.mlr.press/v144/satpathi21a.html
PDF: http://proceedings.mlr.press/v144/satpathi21a/satpathi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-satpathi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Siddhartha
family: Satpathi
- given: R
family: Srikant
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 373-384
id: satpathi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 373
lastpage: 384
published: 2021-05-29 00:00:00 +0000
- title: 'Bridging Physics-based and Data-driven modeling for Learning Dynamical Systems'
abstract: 'How can we learn a dynamical system to make forecasts, when some variables are unobserved? For instance, in COVID-19, we want to forecast the number of infected patients and death cases but we do not know the count of susceptible and exposed people. How to proceed? While mechanistic compartmental models are widely used in epidemic modeling, data-driven models are emerging for disease forecasting. As a case study, we compare these two types of models for COVID-19 forecasting and notice that physics-based models significantly outperform deep learning models. We present a hybrid approach, AutoODE-COVID, which combines a novel compartmental model with automatic differentiation. Our method obtains a 57.4% reduction in mean absolute errors of the 7-day ahead COVID-19 trajectory predictions compared with the best deep learning competitor. To understand the inferior performance of deep learning, we investigate the generalization problem in forecasting. Through systematic experiments, we find that deep learning models fail to forecast under distribution shifts in either the data or the parameter domain of dynamical systems. This calls for rethinking generalization, especially for learning dynamical systems.'
volume: 144
URL: https://proceedings.mlr.press/v144/wang21a.html
PDF: http://proceedings.mlr.press/v144/wang21a/wang21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-wang21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Rui
family: Wang
- given: Danielle
family: Maddix
- given: Christos
family: Faloutsos
- given: Yuyang
family: Wang
- given: Rose
family: Yu
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 385-398
id: wang21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 385
lastpage: 398
published: 2021-05-29 00:00:00 +0000
- title: 'Certainty Equivalent Perception-Based Control'
abstract: 'In order to certify performance and safety, feedback control requires precise characterization of sensor errors. In this paper, we provide guarantees on such feedback systems when sensors are characterized by solving a supervised learning problem. We show a uniform error bound on nonparametric kernel regression under a dynamically-achievable dense sampling scheme. This allows for a finite-time convergence rate on the sub-optimality of using the regressor in closed-loop for waypoint tracking. We demonstrate our results in simulation with simplified unmanned aerial vehicle and autonomous driving examples.'
volume: 144
URL: https://proceedings.mlr.press/v144/dean21a.html
PDF: http://proceedings.mlr.press/v144/dean21a/dean21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-dean21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Sarah
family: Dean
- given: Benjamin
family: Recht
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 399-411
id: dean21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 399
lastpage: 411
published: 2021-05-29 00:00:00 +0000
- title: 'When to stop value iteration: stability and near-optimality versus computation'
abstract: 'Value iteration (VI) is a ubiquitous algorithm for optimal control, planning, and reinforcement learning schemes. Under the right assumptions, VI is a vital tool to generate inputs with desirable properties for the controlled system, like optimality and Lyapunov stability. As VI usually requires an infinite number of iterations to solve general nonlinear optimal control problems, a key question is when to terminate the algorithm to produce a “good” solution, with a measurable impact on optimality and stability guarantees. By carefully analysing VI under general stabilizability and detectability properties, we provide explicit and novel relationships describing the stopping criterion’s impact on near-optimality, stability and performance, thus allowing these desirable properties to be tuned against the induced computational cost. The considered class of stopping criteria encompasses those encountered in the control, dynamic programming and reinforcement learning literature, and it allows considering new ones, which may be useful to further reduce the computational cost while preserving stability and near-optimality properties. We therefore lay a foundation to endow machine learning schemes based on VI with stability and performance guarantees, while reducing computational complexity.'
volume: 144
URL: https://proceedings.mlr.press/v144/granzotto21a.html
PDF: http://proceedings.mlr.press/v144/granzotto21a/granzotto21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-granzotto21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Mathieu
family: Granzotto
- given: Romain
family: Postoyan
- given: Dragan
family: Nešić
- given: Lucian
family: Buşoniu
- given: Jamal
family: Daafouz
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 412-424
id: granzotto21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 412
lastpage: 424
published: 2021-05-29 00:00:00 +0000
- title: 'Learning Recurrent Neural Net Models of Nonlinear Systems'
abstract: 'We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching output derivatives up to a given finite order, we reformulate the learning problem in familiar system-theoretic language and derive quantitative guarantees on the sup-norm risk of the learned model in terms of the number of neurons, the sample size, the number of derivatives being matched, and the regularity properties of the inputs, the outputs, and the unknown i/o map.'
volume: 144
URL: https://proceedings.mlr.press/v144/hanson21a.html
PDF: http://proceedings.mlr.press/v144/hanson21a/hanson21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-hanson21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Joshua
family: Hanson
- given: Maxim
family: Raginsky
- given: Eduardo
family: Sontag
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 425-435
id: hanson21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 425
lastpage: 435
published: 2021-05-29 00:00:00 +0000
- title: 'A Data Driven, Convex Optimization Approach to Learning Koopman Operators'
abstract: 'Koopman operators provide tractable means of learning linear approximations of non-linear dynamics. Many approaches have been proposed to find these operators, typically based upon approximations using an a priori fixed class of models. However, choosing appropriate models and bounding the approximation error is far from trivial. Motivated by these difficulties, in this paper we propose an optimization-based approach to learning Koopman operators from data. Our results show that the Koopman operator, the associated Hilbert space of observables and a suitable dictionary can be obtained by solving two rank-constrained semi-definite programs (SDPs). While in principle these problems are NP-hard, the use of standard relaxations of rank leads to convex SDPs.'
volume: 144
URL: https://proceedings.mlr.press/v144/sznaier21a.html
PDF: http://proceedings.mlr.press/v144/sznaier21a/sznaier21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-sznaier21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Mario
family: Sznaier
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 436-446
id: sznaier21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 436
lastpage: 446
published: 2021-05-29 00:00:00 +0000
- title: 'Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning'
abstract: 'This paper considers the multi-agent distributed linear least-squares problem. The system comprises multiple agents, each agent with a locally observed set of data points, and a common server with whom the agents can interact. The agents’ goal is to compute a linear model that best fits the collective data points observed by all the agents. In the server-based distributed setting, the server cannot access the data points held by the agents. The recently proposed Iteratively Pre-conditioned Gradient-descent (IPG) method has been shown to converge faster than other existing distributed algorithms that solve this problem. In the IPG algorithm, the server and the agents perform numerous iterative computations. Each of these iterations relies on the entire batch of data points observed by the agents for updating the current estimate of the solution. Here, we extend the idea of iterative pre-conditioning to the stochastic setting, where the server updates the estimate and the iterative pre-conditioning matrix based on a single randomly selected data point at every iteration. We show that our proposed Iteratively Pre-conditioned Stochastic Gradient-descent (IPSG) method converges linearly in expectation to a neighborhood of the solution. Importantly, we empirically show that the proposed IPSG method’s convergence rate compares favorably to prominent stochastic algorithms for solving the linear least-squares problem in server-based networks.'
volume: 144
URL: https://proceedings.mlr.press/v144/chakrabarti21a.html
PDF: http://proceedings.mlr.press/v144/chakrabarti21a/chakrabarti21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-chakrabarti21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Kushal
family: Chakrabarti
- given: Nirupam
family: Gupta
- given: Nikhil
family: Chopra
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 447-458
id: chakrabarti21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 447
lastpage: 458
published: 2021-05-29 00:00:00 +0000
- title: 'Neural Lyapunov Redesign'
abstract: 'Learning controllers merely based on a performance metric has been proven effective in many physical and non-physical tasks in both control theory and reinforcement learning. However, in practice, the controller must guarantee some notion of safety to ensure that it does not harm either the agent or the environment. Stability is a crucial notion of safety, whose violation can certainly cause unsafe behaviors. Lyapunov functions are effective tools to assess stability in nonlinear dynamical systems. In this paper, we iteratively combine the improvement of a Lyapunov function with automatic controller synthesis to obtain control policies with large safe regions. We propose a two-player collaborative algorithm that alternates between estimating a Lyapunov function and deriving a controller that gradually enlarges the stability region of the closed-loop system. We provide theoretical results on the class of systems that can be treated with the proposed algorithm and empirically evaluate the effectiveness of our method using an exemplary dynamical system.'
volume: 144
URL: https://proceedings.mlr.press/v144/mehrjou21a.html
PDF: http://proceedings.mlr.press/v144/mehrjou21a/mehrjou21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-mehrjou21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Arash
family: Mehrjou
- given: Mohammad
family: Ghavamzadeh
- given: Bernhard
family: Schölkopf
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 459-470
id: mehrjou21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 459
lastpage: 470
published: 2021-05-29 00:00:00 +0000
- title: 'Regret Bounds for Adaptive Nonlinear Control'
abstract: 'We study the problem of adaptively controlling a known discrete-time nonlinear system subject to unmodeled disturbances. We prove the first finite-time regret bounds for adaptive nonlinear control with matched uncertainty in the stochastic setting, showing that the regret suffered by certainty equivalence adaptive control, compared to an oracle controller with perfect knowledge of the unmodeled disturbances, is upper bounded by $\widetilde{O}(\sqrt{T})$ in expectation. Furthermore, we show that when the input is subject to a $k$-timestep delay, the regret degrades to $\widetilde{O}(k\sqrt{T})$. Our analysis draws connections between classical stability notions in nonlinear control theory (Lyapunov stability and contraction theory) and modern regret analysis from online convex optimization. The use of stability theory allows us to analyze the challenging infinite-horizon single trajectory setting.'
volume: 144
URL: https://proceedings.mlr.press/v144/boffi21a.html
PDF: http://proceedings.mlr.press/v144/boffi21a/boffi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-boffi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Nicholas M.
family: Boffi
- given: Stephen
family: Tu
- given: Jean-Jacques E.
family: Slotine
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 471-483
id: boffi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 471
lastpage: 483
published: 2021-05-29 00:00:00 +0000
- title: 'Self-Supervised Learning of Long-Horizon Manipulation Tasks with Finite-State Task Machines'
abstract: 'We consider the problem of a robot learning to manipulate unknown objects while using them to perform a complex task that is composed of several sub-tasks. The robot receives 6D poses of the objects along with their semantic labels, and executes nonprehensile actions on them. The robot does not receive any feedback regarding the task until the end of an episode, where a binary reward indicates success or failure in performing the task. Moreover, certain attributes of objects cannot always be observed, so the robot needs to learn to remember pertinent past actions that it executed. We propose to solve this problem by simultaneously learning a low-level control policy and a high-level finite-state task machine that keeps track of the progress made by the robot in solving the various sub-tasks and guides the low-level policy. Several experiments in simulation clearly show that the proposed approach is efficient at solving complex robotic tasks without any supervision.'
volume: 144
URL: https://proceedings.mlr.press/v144/liang21a.html
PDF: http://proceedings.mlr.press/v144/liang21a/liang21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-liang21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Junchi
family: Liang
- given: Abdeslam
family: Boularias
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 484-497
id: liang21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 484
lastpage: 497
published: 2021-05-29 00:00:00 +0000
- title: 'Safely Learning Dynamical Systems from Short Trajectories'
abstract: 'A fundamental challenge in learning to control an unknown dynamical system is to reduce model uncertainty by making measurements while maintaining safety. In this work, we formulate a mathematical definition of what it means to safely learn a dynamical system by sequentially deciding where to initialize the next trajectory. In our framework, the state of the system is required to stay within a given safety region under the (possibly repeated) action of all dynamical systems that are consistent with the information gathered so far. For our first two results, we consider the setting of safely learning linear dynamics. We present a linear programming-based algorithm that either safely recovers the true dynamics from trajectories of length one, or certifies that safe learning is impossible. We also give an efficient semidefinite representation of the set of initial conditions whose resulting trajectories of length two are guaranteed to stay in the safety region. For our final result, we study the problem of safely learning a nonlinear dynamical system. We give a second-order cone programming based representation of the set of initial conditions that are guaranteed to remain in the safety region after one application of the system dynamics. '
volume: 144
URL: https://proceedings.mlr.press/v144/ahmadi21a.html
PDF: http://proceedings.mlr.press/v144/ahmadi21a/ahmadi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ahmadi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Amir Ali
family: Ahmadi
- given: Abraar
family: Chaudhry
- given: Vikas
family: Sindhwani
- given: Stephen
family: Tu
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 498-509
id: ahmadi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 498
lastpage: 509
published: 2021-05-29 00:00:00 +0000
- title: 'Adaptive Risk Sensitive Model Predictive Control with Stochastic Search'
abstract: 'We present a general framework for optimizing the Conditional Value-at-Risk for dynamical systems using stochastic search. The framework is capable of handling the uncertainty from the initial condition, stochastic dynamics, and uncertain parameters in the model. The algorithm is compared against a risk-sensitive distributional reinforcement learning framework and demonstrates improved performance on a pendulum and cartpole with stochastic dynamics. We also showcase the applicability of the framework to robotics as an adaptive risk-sensitive controller by optimizing with respect to the fully nonlinear belief provided by a particle filter on a pendulum, cartpole, and quadcopter in simulation.'
volume: 144
URL: https://proceedings.mlr.press/v144/wang21b.html
PDF: http://proceedings.mlr.press/v144/wang21b/wang21b.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-wang21b.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Ziyi
family: Wang
- given: Oswin
family: So
- given: Keuntaek
family: Lee
- given: Evangelos A.
family: Theodorou
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 510-522
id: wang21b
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 510
lastpage: 522
published: 2021-05-29 00:00:00 +0000
- title: 'Nonlinear Data-Enabled Prediction and Control'
abstract: 'Behavioral theory, which characterizes linear dynamics with measured trajectories, has found successful applications in controller design and signal processing. However, the extension of behavioral theory to general nonlinear systems remains an open question. In this work, we propose to apply behavioral theory to a reproducing kernel Hilbert space in order to extend its application to a class of nonlinear systems, and we show its application in prediction and in predictive control.'
volume: 144
URL: https://proceedings.mlr.press/v144/lian21a.html
PDF: http://proceedings.mlr.press/v144/lian21a/lian21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-lian21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Yingzhao
family: Lian
- given: Colin N.
family: Jones
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 523-534
id: lian21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 523
lastpage: 534
published: 2021-05-29 00:00:00 +0000
- title: 'Learning-based feedforward augmentation for steady state rejection of residual dynamics on a nanometer-accurate planar actuator system'
abstract: 'Growing demands in the semiconductor industry result in the need for enhanced performance of lithographic equipment. However, position tracking accuracy of high precision mechatronics is often limited by the presence of disturbance sources, which originate from unmodelled or unforeseen deterministic environmental effects. To negate the effects of these disturbances, a learning-based feedforward controller is employed, where the underlying control policy is estimated from experimental data based on Gaussian Process regression. The proposed approach exploits the property of including prior knowledge on the expected steady state behaviour of residual dynamics in terms of kernel selection. Corresponding hyper-parameters are optimized by maximizing the marginal likelihood. Consequently, the learned function augments the currently employed rigid-body feedforward controller. The effectiveness of the augmentation is experimentally validated on a magnetically levitated planar motor stage. The results of this paper highlight the benefits and possibilities of machine-learning-based approaches for compensation of static effects, which originate from residual dynamics, such that position tracking performance for moving-magnet planar motor actuators is improved.'
volume: 144
URL: https://proceedings.mlr.press/v144/proimadis21a.html
PDF: http://proceedings.mlr.press/v144/proimadis21a/proimadis21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-proimadis21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Ioannis
family: Proimadis
- given: Yorick
family: Broens
- given: Roland
family: Tóth
- given: Hans
family: Butler
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 535-546
id: proimadis21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 535
lastpage: 546
published: 2021-05-29 00:00:00 +0000
- title: 'Suboptimal coverings for continuous spaces of control tasks'
abstract: 'We propose the α-suboptimal covering number to characterize multi-task control problems where the set of dynamical systems and/or cost functions is infinite, analogous to the cardinality of finite task sets. This notion may help quantify the function class expressiveness needed to represent a good multi-task policy, which is important for learning-based control methods that use parameterized function approximation. We study suboptimal covering numbers for linear dynamical systems with quadratic cost (LQR problems) and construct a class of multi-task LQR problems amenable to analysis. For the scalar case, we show logarithmic dependence on the "breadth" of the space. For the matrix case, we present experiments 1) measuring the efficiency of a particular constructive cover, and 2) visualizing the behavior of two candidate systems for the lower bound.'
volume: 144
URL: https://proceedings.mlr.press/v144/preiss21a.html
PDF: http://proceedings.mlr.press/v144/preiss21a/preiss21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-preiss21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: James A.
family: Preiss
- given: Gaurav S.
family: Sukhatme
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 547-558
id: preiss21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 547
lastpage: 558
published: 2021-05-29 00:00:00 +0000
- title: 'Sample Complexity of Linear Quadratic Gaussian (LQG) Control for Output Feedback Systems'
abstract: 'This paper studies a class of partially observed Linear Quadratic Gaussian (LQG) problems with unknown dynamics. We establish an end-to-end sample complexity bound on learning a robust LQG controller for open-loop stable plants. This is achieved using a robust synthesis procedure, where we first estimate a model from a single input-output trajectory of finite length, identify an H-infinity bound on the estimation error, and then design a robust controller using the estimated model and its quantified uncertainty. Our synthesis procedure leverages a recent control tool called Input-Output Parameterization (IOP) that enables robust controller design using convex optimization. For open-loop stable systems, we prove that the LQG performance degrades linearly with respect to the model estimation error using the proposed synthesis procedure. Despite the hidden states in the LQG problem, the achieved scaling matches previous results on learning Linear Quadratic Regulator (LQR) controllers with full state observations.'
volume: 144
URL: https://proceedings.mlr.press/v144/zheng21b.html
PDF: http://proceedings.mlr.press/v144/zheng21b/zheng21b.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-zheng21b.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Yang
family: Zheng
- given: Luca
family: Furieri
- given: Maryam
family: Kamgarpour
- given: Na
family: Li
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 559-570
id: zheng21b
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 559
lastpage: 570
published: 2021-05-29 00:00:00 +0000
- title: 'Chance-constrained quasi-convex optimization with application to data-driven switched systems control'
abstract: 'We study quasi-convex optimization problems, where only a subset of the constraints can be sampled, and yet one would like a probabilistic guarantee on the obtained solution with respect to the initial (unknown) optimization problem. Even though our results are partly applicable to general quasi-convex problems, in this work we introduce and study a particular subclass, which we call "quasi-linear problems". We provide optimality conditions for these problems. Building on this, we extend the approach of chance-constrained convex optimization to quasi-linear optimization problems. Finally, we show that this approach is useful for the stability analysis of black-box switched linear systems, from a finite set of sampled trajectories. It allows us to compute probabilistic upper bounds on the joint spectral radius (JSR) of a large class of switched linear systems.'
volume: 144
URL: https://proceedings.mlr.press/v144/berger21a.html
PDF: http://proceedings.mlr.press/v144/berger21a/berger21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-berger21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Guillaume O.
family: Berger
- given: Raphaël M.
family: Jungers
- given: Zheming
family: Wang
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 571-583
id: berger21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 571
lastpage: 583
published: 2021-05-29 00:00:00 +0000
- title: 'Control of Unknown (Linear) Systems with Receding Horizon Learning'
abstract: 'A receding horizon learning scheme is proposed to transfer the state of a discrete-time dynamical control system to zero without the need for a system model. Global state convergence to zero is proved for the class of stabilizable and detectable linear time-invariant systems, assuming that only input and output data is available and an upper bound of the state dimension is known. The proposed scheme consists of a receding horizon control scheme and a proximity-based estimation scheme to estimate and control the closed-loop trajectory.'
volume: 144
URL: https://proceedings.mlr.press/v144/ebenbauer21a.html
PDF: http://proceedings.mlr.press/v144/ebenbauer21a/ebenbauer21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ebenbauer21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Christian
family: Ebenbauer
- given: Fabian
family: Pfitz
- given: Shuyou
family: Yu
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 584-596
id: ebenbauer21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 584
lastpage: 596
published: 2021-05-29 00:00:00 +0000
- title: 'Provably Sample Efficient Reinforcement Learning in Competitive Linear Quadratic Systems'
abstract: 'We study the infinite-horizon zero-sum linear quadratic (LQ) games, where the state transition is linear and the cost function is quadratic in the states and actions of two players. In particular, we develop an adaptive algorithm that can properly trade off between exploration and exploitation of the unknown environment in LQ games based on the optimism-in-face-of-uncertainty (OFU) principle. We show that (i) the average regret of player $1$ (the min player) can be bounded by $\widetilde{\mathcal{O}}(1/\sqrt{T})$ against any fixed linear policy of the adversary (player $2$); (ii) the average cost of player $1$ also converges to the value of the game at a sublinear $\widetilde{\mathcal{O}}(1/\sqrt{T})$ rate if the adversary plays adaptively against player $1$ with the same algorithm, i.e., with self-play. To the best of our knowledge, this is the first time that a provably sample efficient reinforcement learning algorithm is proposed for zero-sum LQ games.'
volume: 144
URL: https://proceedings.mlr.press/v144/zhang21a.html
PDF: http://proceedings.mlr.press/v144/zhang21a/zhang21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-zhang21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Jingwei
family: Zhang
- given: Zhuoran
family: Yang
- given: Zhengyuan
family: Zhou
- given: Zhaoran
family: Wang
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 597-598
id: zhang21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 597
lastpage: 598
published: 2021-05-29 00:00:00 +0000
- title: 'Analysis of the Optimization Landscape of Linear Quadratic Gaussian (LQG) Control'
abstract: 'This paper revisits the classical Linear Quadratic Gaussian (LQG) control from a modern optimization perspective. We analyze two aspects of the optimization landscape of the LQG problem: 1) connectivity of the set of stabilizing controllers $\mathcal{C}_n$; and 2) structure of stationary points. It is known that similarity transformations do not change the input-output behavior of a dynamical controller or LQG cost. This inherent symmetry by similarity transformations makes the landscape of LQG very rich. We show that 1) the set of stabilizing controllers $\mathcal{C}_n$ has at most two path-connected components and they are diffeomorphic under a mapping defined by a similarity transformation; 2) there might exist many \emph{strictly suboptimal stationary points} of the LQG cost function over $\mathcal{C}_n$ and these stationary points are always \emph{non-minimal}; 3) all \emph{minimal} stationary points are globally optimal and they are identical up to a similarity transformation. These results shed some light on the performance analysis of direct policy gradient methods for solving the LQG problem.'
volume: 144
URL: https://proceedings.mlr.press/v144/tang21a.html
PDF: http://proceedings.mlr.press/v144/tang21a/tang21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-tang21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Yujie
family: Tang
- given: Yang
family: Zheng
- given: Na
family: Li
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 599-610
id: tang21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 599
lastpage: 610
published: 2021-05-29 00:00:00 +0000
- title: 'Physics-penalised Regularisation for Learning Dynamics Models with Contact'
abstract: 'Robotic systems, such as legged robots and manipulators, often handle states which involve ground impact or interaction with objects present in their surroundings; both of which are physically driven by contact. Dynamics model learning tends to focus on continuous motion, yielding poor results when deployed on real systems exposed to non-smooth frictional discontinuities. Inspired by a recent promising direction in machine learning, in this work we present a novel method for learning dynamics models undergoing contact by augmenting data-driven deep models with physics-penalised regularisation. More precisely, this paper conceptually formalises a novel framework for using an impenetrability component in the physics-based loss function directly within the learning objective of neural networks. Our results demonstrate that our method shows superior performance to using standard deep models for learning non-smooth dynamics models of robotic manipulators, strengthening their potential for deployment in contact-rich environments.'
volume: 144
URL: https://proceedings.mlr.press/v144/pizzuto21a.html
PDF: http://proceedings.mlr.press/v144/pizzuto21a/pizzuto21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-pizzuto21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Gabriella
family: Pizzuto
- given: Michael
family: Mistry
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 611-622
id: pizzuto21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 611
lastpage: 622
published: 2021-05-29 00:00:00 +0000
- title: 'The Impact of Data on the Stability of Learning-Based Control'
abstract: 'Despite the existence of formal guarantees for learning-based control approaches, the relationship between data and control performance is still poorly understood. In this paper, we present a measure to quantify the value of data within the context of a predefined control task. Our approach is applicable to a wide variety of unknown nonlinear systems that are to be controlled by a generic learning-based control law. We model the unknown component of the system using Gaussian processes, which in turn allows us to directly assess the impact of model uncertainty on control. Results obtained in numerical simulations indicate the efficacy of the proposed measure.'
volume: 144
URL: https://proceedings.mlr.press/v144/lederer21a.html
PDF: http://proceedings.mlr.press/v144/lederer21a/lederer21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-lederer21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Armin
family: Lederer
- given: Alexandre
family: Capone
- given: Thomas
family: Beckers
- given: Jonas
family: Umlauft
- given: Sandra
family: Hirche
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 623-635
id: lederer21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 623
lastpage: 635
published: 2021-05-29 00:00:00 +0000
- title: 'Accelerated Learning with Robustness to Adversarial Regressors'
abstract: 'High order momentum-based parameter update algorithms have seen widespread applications in training machine learning models. Recently, connections with variational approaches have led to the derivation of new learning algorithms with accelerated learning guarantees. Such methods, however, have only considered the case of static regressors. There is a significant need for parameter update algorithms which can be proven stable in the presence of adversarial time-varying regressors, as is commonplace in control theory. In this paper, we propose a new discrete time algorithm which 1) provides stability and asymptotic convergence guarantees in the presence of adversarial regressors by leveraging insights from \emph{adaptive control theory} and 2) provides non-asymptotic accelerated learning guarantees leveraging insights from convex optimization. In particular, our algorithm reaches an $\epsilon$ sub-optimal point in at most $\tilde{\mathcal{O}}(1/\sqrt{\epsilon})$ iterations when regressors are constant, matching lower bounds due to Nesterov of $\Omega(1/\sqrt{\epsilon})$ up to a $\log(1/\epsilon)$ factor, and provides guaranteed stability bounds when regressors are time-varying. We provide numerical experiments for a variant of Nesterov’s provably hard convex optimization problem with time-varying regressors, as well as the problem of recovering an image with a time-varying blur and noise using streaming data.'
volume: 144
URL: https://proceedings.mlr.press/v144/gaudio21a.html
PDF: http://proceedings.mlr.press/v144/gaudio21a/gaudio21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-gaudio21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Joseph E.
family: Gaudio
- given: Anuradha M.
family: Annaswamy
- given: José M.
family: Moreu
- given: Michael A.
family: Bolender
- given: Travis E.
family: Gibson
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 636-650
id: gaudio21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 636
lastpage: 650
published: 2021-05-29 00:00:00 +0000
- title: 'Stability and Identification of Random Asynchronous Linear Time-Invariant Systems'
abstract: 'In many computational tasks and dynamical systems, asynchrony and randomization are naturally present and have been considered as ways to increase the speed and reduce the cost of computation while compromising the accuracy and convergence rate. In this work, we show the additional benefits of randomization and asynchrony on the stability of linear dynamical systems. We introduce a natural model for random asynchronous linear time-invariant (LTI) systems which generalizes the standard (synchronous) LTI systems. In this model, each state variable is updated randomly and asynchronously with some probability according to the underlying system dynamics. We examine how the mean-square stability of random asynchronous LTI systems varies with respect to randomization and asynchrony. Surprisingly, we show that the stability of a random asynchronous LTI system neither implies nor is implied by the stability of the synchronous variant of the system, and that an unstable synchronous system can be stabilized via randomization and/or asynchrony. We further study a special case of the introduced model, namely randomized LTI systems, where each state element is updated randomly with some fixed but unknown probability. We consider the problem of system identification of unknown randomized LTI systems using the precise characterization of mean-square stability via the extended Lyapunov equation. For unknown randomized LTI systems, we propose a systematic identification method to recover the underlying dynamics. Given a single input/output trajectory, our method estimates the model parameters that govern the system dynamics, the update probability of state variables, and the noise covariance using the correlation matrices of collected data and the extended Lyapunov equation. Finally, we empirically demonstrate that the proposed method consistently recovers the underlying system dynamics at the optimal rate.'
volume: 144
URL: https://proceedings.mlr.press/v144/lale21a.html
PDF: http://proceedings.mlr.press/v144/lale21a/lale21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-lale21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Sahin
family: Lale
- given: Oguzhan
family: Teke
- given: Babak
family: Hassibi
- given: Anima
family: Anandkumar
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 651-663
id: lale21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 651
lastpage: 663
published: 2021-05-29 00:00:00 +0000
- title: 'Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory'
abstract: 'The principal task in controlling dynamical systems is to ensure their stability. When the system is unknown, robust approaches are promising since they aim to stabilize a large set of plausible systems simultaneously. We study linear controllers under the quadratic cost model, also known as linear quadratic regulators (LQR). We present two different semi-definite programs (SDPs), each of which results in a controller that stabilizes all systems within an ellipsoid uncertainty set. We further show that the feasibility conditions of the proposed SDPs are \emph{equivalent}. Using the derived robust controller syntheses, we propose an efficient data-dependent algorithm – \textsc{eXploration} – that with high probability quickly identifies a stabilizing controller. Our approach can be used to initialize existing algorithms that require a stabilizing controller as an input, while adding only a constant to the regret. We further propose different heuristics which empirically reduce the number of steps taken by \textsc{eXploration} and reduce the cost suffered while searching for a stabilizing controller.'
volume: 144
URL: https://proceedings.mlr.press/v144/treven21a.html
PDF: http://proceedings.mlr.press/v144/treven21a/treven21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-treven21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Lenart
family: Treven
- given: Sebastian
family: Curi
- given: Mojmír
family: Mutný
- given: Andreas
family: Krause
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 664-676
id: treven21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 664
lastpage: 676
published: 2021-05-29 00:00:00 +0000
- title: 'Training deep residual networks for uniform approximation guarantees'
abstract: 'It has recently been shown that deep residual networks with sufficiently high depth, but bounded width, are capable of universal approximation in the supremum norm sense. Based on these results, we show how to modify existing training algorithms for deep residual networks so as to provide approximation bounds for the test error, in the supremum norm, based on the training error. Our methods are based on control-theoretic interpretations of these networks both in discrete and continuous time, and establish that it is enough to suitably constrain the set of parameters being learned in a way that is compatible with most currently used training algorithms.'
volume: 144
URL: https://proceedings.mlr.press/v144/marchi21a.html
PDF: http://proceedings.mlr.press/v144/marchi21a/marchi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-marchi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Matteo
family: Marchi
- given: Bahman
family: Gharesifard
- given: Paulo
family: Tabuada
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 677-688
id: marchi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 677
lastpage: 688
published: 2021-05-29 00:00:00 +0000
- title: 'LEOC: A Principled Method in Integrating Reinforcement Learning and Classical Control Theory'
abstract: 'There have been attempts in reinforcement learning to exploit a priori knowledge about the structure of the system. This paper proposes a hybrid reinforcement learning controller which dynamically interpolates between a model-based linear controller and an arbitrary differentiable policy. The linear controller is designed based on local linearised model knowledge, and stabilises the system in a neighbourhood about an operating point. The coefficients of interpolation between the two controllers are determined by a scaled distance function measuring the distance between the current state and the operating point. The overall hybrid controller is proven to maintain the stability guarantee in the neighbourhood of the operating point while still possessing the universal function approximation property of the arbitrary non-linear policy. Learning has been done on both model-based (PILCO) and model-free (DDPG) frameworks. Simulation experiments performed in the OpenAI gym demonstrate the stability and robustness of the proposed hybrid controller. This paper thus introduces a principled method allowing for the direct importing of control methodology into reinforcement learning.'
volume: 144
URL: https://proceedings.mlr.press/v144/zhang21b.html
PDF: http://proceedings.mlr.press/v144/zhang21b/zhang21b.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-zhang21b.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Naifu
family: Zhang
- given: Nicholas
family: Capel
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 689-701
id: zhang21b
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 689
lastpage: 701
published: 2021-05-29 00:00:00 +0000
- title: 'Primal-dual Learning for the Model-free Risk-constrained Linear Quadratic Regulator'
abstract: 'Risk-aware control, though promising for tackling unexpected events, requires a known exact dynamical model. In this work, we propose a model-free framework to learn a risk-aware controller of a linear system. We formulate it as a discrete-time infinite-horizon LQR problem with a state predictive variance constraint. Since its optimal policy is known to be an affine feedback, i.e., $u^* = -Kx+l$, we alternately optimize the gain pair $(K,l)$ by designing a primal-dual learning algorithm. First, we observe that the Lagrangian function enjoys an important local gradient dominance property. Based on it, we then show that there is no duality gap despite the non-convex optimization landscape. Furthermore, we propose a primal-dual algorithm with global convergence to learn the optimal policy-multiplier pair. Finally, we validate our results via simulations.'
volume: 144
URL: https://proceedings.mlr.press/v144/zhao21b.html
PDF: http://proceedings.mlr.press/v144/zhao21b/zhao21b.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-zhao21b.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Feiran
family: Zhao
- given: Keyou
family: You
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 702-714
id: zhao21b
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 702
lastpage: 714
published: 2021-05-29 00:00:00 +0000
- title: 'Exploiting Sparsity for Neural Network Verification'
abstract: 'The problem of verifying the properties of a neural network has never been more important. This task is often done by bounding the activation functions in the network. Some approaches are more conservative than others and in general there is a trade-off between complexity and conservativeness. There has been significant progress to improve the efficiency and the accuracy of these methods. We investigate the sparsity that arises in a recently proposed semi-definite programming framework to verify a fully connected feed-forward neural network. We show that due to the intrinsic cascading structure of the neural network the constraint matrices in the semi-definite program form a block-arrow pattern and satisfy conditions for chordal sparsity. We reformulate and implement the optimisation problem, showing a significant speed-up in computation, without sacrificing solution accuracy.'
volume: 144
URL: https://proceedings.mlr.press/v144/newton21a.html
PDF: http://proceedings.mlr.press/v144/newton21a/newton21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-newton21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Matthew
family: Newton
- given: Antonis
family: Papachristodoulou
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 715-727
id: newton21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 715
lastpage: 727
published: 2021-05-29 00:00:00 +0000
- title: 'Uncertain-aware Safe Exploratory Planning using Gaussian Process and Neural Control Contraction Metric'
abstract: 'Robots operating in unstructured, complex, and changing real-world environments should navigate and maintain safety while collecting data about their environment and updating their dynamics models. In this paper, we consider the problem of using a robot to explore an environment with forbidden areas and an unknown, state-dependent disturbance to the dynamics. The goal of the robot is to safely collect observations of the disturbance and construct an accurate estimate of the underlying function. We use a Gaussian process to obtain an estimate of the disturbance from data, with a high-confidence bound on the regression error. Furthermore, we use neural contraction metrics to derive a tracking controller and the corresponding high-confidence uncertainty tube around the nominal trajectory planned for the robot, based on the estimate of the disturbance. From the robustness of the contraction metric, the error bound can be pre-computed and used by the motion planner such that the actual trajectory is guaranteed to be safe.'
volume: 144
URL: https://proceedings.mlr.press/v144/sun21a.html
PDF: http://proceedings.mlr.press/v144/sun21a/sun21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-sun21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Dawei
family: Sun
- given: Mohammad Javad
family: Khojasteh
- given: Shubhanshu
family: Shekhar
- given: Chuchu
family: Fan
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 728-741
id: sun21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 728
lastpage: 741
published: 2021-05-29 00:00:00 +0000
- title: 'Stable Online Control of Linear Time-Varying Systems'
abstract: 'Linear time-varying (LTV) systems are widely used for modeling real-world dynamical systems due to their generality and simplicity. Providing stability guarantees for LTV systems is one of the central problems in control theory. However, existing approaches that guarantee stability typically lead to significantly sub-optimal cumulative control cost in online settings where only current or short-term system information is available. In this work, we propose an efficient online control algorithm, COvariance Constrained Online Linear Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a large class of LTV systems while also minimizing the control cost. The proposed method incorporates a state covariance constraint into the semi-definite programming (SDP) formulation of the LQ optimal controller. We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example. '
volume: 144
URL: https://proceedings.mlr.press/v144/qu21a.html
PDF: http://proceedings.mlr.press/v144/qu21a/qu21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-qu21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Guannan
family: Qu
- given: Yuanyuan
family: Shi
- given: Sahin
family: Lale
- given: Anima
family: Anandkumar
- given: Adam
family: Wierman
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 742-753
id: qu21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 742
lastpage: 753
published: 2021-05-29 00:00:00 +0000
- title: 'ARDL - A Library for Adaptive Robotic Dynamics Learning'
abstract: 'Dynamics learning and adaptive control algorithms have long lacked support from robot dynamics libraries. Only a few existing libraries, such as Pinocchio, implement the standard regressor for basic model learning. In this work we introduce an open-source dynamics library specifically designed to support dynamics learning and online adaptive control algorithms. Alongside established kinematics and dynamics computations, our new dynamics library provides computation of the standard, the Slotine-Li, and the filtered regressor matrices found in adaptive control algorithms. We demonstrate the library through several existing adaptive control algorithms, alongside a new online simultaneous Semi-Parametric model using a Radial Basis Function Neural Network augmented with a newly derived consistency transform.'
volume: 144
URL: https://proceedings.mlr.press/v144/smith21a.html
PDF: http://proceedings.mlr.press/v144/smith21a/smith21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-smith21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Joshua
family: Smith
- given: Michael
family: Mistry
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 754-766
id: smith21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 754
lastpage: 766
published: 2021-05-29 00:00:00 +0000
- title: 'Linear Regression over Networks with Communication Guarantees'
abstract: 'A key functionality of emerging connected autonomous systems such as smart cities, smart transportation systems, and the industrial Internet-of-Things, is the ability to process and learn from data collected at different physical locations. This is increasingly attracting attention under the terms of distributed learning and federated learning. However, in connected autonomous systems, data transfer takes place over communication networks with often limited resources. This paper examines algorithms for communication-efficient learning for linear regression tasks by exploiting the informativeness of the data. The developed algorithms enable a tradeoff between communication and learning with theoretical performance guarantees and efficient practical implementations.'
volume: 144
URL: https://proceedings.mlr.press/v144/gatsis21a.html
PDF: http://proceedings.mlr.press/v144/gatsis21a/gatsis21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-gatsis21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Konstantinos
family: Gatsis
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 767-778
id: gatsis21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 767
lastpage: 778
published: 2021-05-29 00:00:00 +0000
- title: 'Nested Mixture of Experts: Cooperative and Competitive Learning of Hybrid Dynamical System'
abstract: 'Model-based reinforcement learning (MBRL) algorithms can attain significant sample efficiency but require an appropriate network structure to represent system dynamics. Current approaches include white-box modeling using analytic parameterizations and black-box modeling using deep neural networks. However, both can suffer from a bias-variance trade-off in the learning process, and neither provides a structured method for injecting domain knowledge into the network. As an alternative, gray-box modeling leverages prior knowledge in neural network training but only for simple systems. In this paper, we devise a nested mixture of experts (NMOE) for representing and learning hybrid dynamical systems. An NMOE combines both white-box and black-box models while optimizing bias-variance trade-off. Moreover, an NMOE provides a structured method for incorporating various types of prior knowledge by training the associative experts cooperatively or competitively. The prior knowledge includes information on robots’ physical contacts with the environments as well as their kinematic and dynamic properties. In this paper, we demonstrate how to incorporate prior knowledge into our NMOE in various continuous control domains, including hybrid dynamical systems. We also show the effectiveness of our method in terms of data-efficiency, generalization to unseen data, and bias-variance trade-off. Finally, we evaluate our NMOE using an MBRL setup, where the model is integrated with a model-based controller and trained online.'
volume: 144
URL: https://proceedings.mlr.press/v144/ahn21a.html
PDF: http://proceedings.mlr.press/v144/ahn21a/ahn21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ahn21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Junhyeok
family: Ahn
- given: Luis
family: Sentis
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 779-790
id: ahn21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 779
lastpage: 790
published: 2021-05-29 00:00:00 +0000
- title: 'Learning without Knowing: Unobserved Context in Continuous Transfer Reinforcement Learning'
abstract: 'In this paper, we consider a transfer Reinforcement Learning (RL) problem in continuous state and action spaces, under unobserved contextual information. The context here can represent a specific unique mental view of the world that an expert agent has formed through past interactions with this world. We assume that this context is not accessible to a learner agent who can only observe the expert data and does not know how they were generated. Then, our goal is to use the context-aware continuous expert data to learn an optimal context-unaware policy for the learner using only a few new data samples. To date, such problems are typically solved using imitation learning, which assumes that both the expert and learner agents have access to the same information. However, if the learner does not know the expert context, using the expert data alone will result in a biased learner policy and will require many new data samples to improve. To address this challenge, in this paper, we formulate the learning problem that the learner agent solves as a causal bound-constrained Multi-Armed-Bandit (MAB) problem. The arms of this MAB correspond to a set of basis policy functions that can be initialized in an unsupervised way using the expert data and represent the different expert behaviors affected by the unobserved context. On the other hand, the MAB constraints correspond to causal bounds on the accumulated rewards of these basis policy functions that we also compute from the expert data. The solution to this MAB allows the learner agent to select the best basis policy and improve it online. Moreover, the use of causal bounds reduces the exploration variance and, therefore, improves the learning rate. We provide numerical experiments on an autonomous driving example that show that our proposed transfer RL method improves the learner’s policy faster compared to imitation learning methods and enjoys much lower variance during training.'
volume: 144
URL: https://proceedings.mlr.press/v144/liu21a.html
PDF: http://proceedings.mlr.press/v144/liu21a/liu21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-liu21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Chenyu
family: Liu
- given: Yan
family: Zhang
- given: Yi
family: Shen
- given: Michael M.
family: Zavlanos
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 791-802
id: liu21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 791
lastpage: 802
published: 2021-05-29 00:00:00 +0000
- title: 'Data-Driven Abstraction of Monotone Systems'
abstract: 'In this paper, we introduce an approach for data-driven abstraction of monotone dynamical systems. First, we present an approach to find the optimal approximation of the dynamics of an unknown system by a set-valued map based on a set of transitions generated by the system. Then we show that the dynamical system induced by the introduced map is equivalent (in the sense of alternating bisimulation) to a finite state transition system which can be used to synthesize controllers using the well-established symbolic control techniques. We show the effectiveness of the approach on a safety controller synthesis problem.'
volume: 144
URL: https://proceedings.mlr.press/v144/makdesi21a.html
PDF: http://proceedings.mlr.press/v144/makdesi21a/makdesi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-makdesi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Anas
family: Makdesi
- given: Antoine
family: Girard
- given: Laurent
family: Fribourg
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 803-814
id: makdesi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 803
lastpage: 814
published: 2021-05-29 00:00:00 +0000
- title: 'Reward Biased Maximum Likelihood Estimation for Reinforcement Learning'
abstract: 'The Reward-Biased Maximum Likelihood Estimate (RBMLE) for adaptive control of Markov chains was proposed in (Kumar and Becker, 1982) to overcome the central obstacle of what is variously called the fundamental “closed-loop identifiability problem” of adaptive control (Borkar and Varaiya, 1979), the “dual control problem” by Feldbaum (Feldbaum, 1960a,b), or, contemporaneously, the “exploration vs. exploitation problem”. It exploited the key observation that since the maximum likelihood parameter estimator can asymptotically identify the closed-loop transition probabilities under a certainty equivalent approach (Borkar and Varaiya, 1979), the limiting parameter estimates must necessarily have an optimal reward that is less than the optimal reward attainable for the true but unknown system. Hence it proposed a counteracting reverse bias in favor of parameters with larger optimal rewards, providing a carefully structured solution to the fundamental problem alluded to above. It thereby proposed an optimistic approach of favoring parameters with larger optimal rewards, now known as “optimism in the face of uncertainty.” The RBMLE approach has been proved to be long-term average reward optimal in a variety of contexts including controlled Markov chains, linear quadratic Gaussian (LQG) systems, some nonlinear systems, and diffusions. However, modern attention is focused on the much finer notion of “regret,” or finite-time performance for all time, espoused by (Lai and Robbins, 1985). Recent analysis of RBMLE for multi-armed stochastic bandits (Liu et al., 2020) and linear contextual bandits (Hung et al., 2020) has shown that it not only has state-of-the-art regret, but it also exhibits empirical performance comparable to or better than the best current contenders, and leads to several new and strikingly simple index policies for these classical problems. Motivated by this, we examine the finite-time performance of RBMLE for reinforcement learning tasks that involve the general problem of optimal control of unknown Markov Decision Processes. We show that it has a regret of $O(\log T)$ over a time horizon of $T$, similar to state-of-the-art algorithms.'
volume: 144
URL: https://proceedings.mlr.press/v144/mete21a.html
PDF: http://proceedings.mlr.press/v144/mete21a/mete21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-mete21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Akshay
family: Mete
- given: Rahul
family: Singh
- given: Xi
family: Liu
- given: P. R.
family: Kumar
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 815-827
id: mete21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 815
lastpage: 827
published: 2021-05-29 00:00:00 +0000
- title: 'Feedback from Pixels: Output Regulation via Learning-based Scene View Synthesis'
abstract: 'We propose a novel controller synthesis involving feedback from pixels, whereby the measurement is a high-dimensional signal representing a pixelated image with Red-Green-Blue (RGB) values. The approach neither requires feature extraction, nor object detection, nor visual correspondence. The control policy does not involve the estimation of states or similar latent representations. Instead, tracking is achieved directly in image space, with a model of the reference signal embedded as required by the internal model principle. The reference signal is generated by a neural network with learning-based scene view synthesis capabilities. Our approach does not require an end-to-end learning of a pixel-to-action control policy. The approach is applied to a motion control problem, namely the longitudinal dynamics of a car-following problem. We show how this approach lends itself to a tractable stability analysis with associated bounds critical to establishing trustworthiness and interpretability of the closed-loop dynamics.'
volume: 144
URL: https://proceedings.mlr.press/v144/abu-khalaf21a.html
PDF: http://proceedings.mlr.press/v144/abu-khalaf21a/abu-khalaf21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-abu-khalaf21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Murad
family: Abu-Khalaf
- given: Sertac
family: Karaman
- given: Daniela
family: Rus
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 828-841
id: abu-khalaf21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 828
lastpage: 841
published: 2021-05-29 00:00:00 +0000
- title: 'Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization'
abstract: 'Abstracting neural networks with constraints they impose on their inputs and outputs can be very useful in the analysis of neural network classifiers and to derive optimization-based algorithms for certification of stability and robustness of feedback systems involving neural networks. In this paper, we propose a convex program, in the form of a Linear Matrix Inequality (LMI), to certify quadratic bounds on the map of neural networks over a region of interest. These certificates can capture several useful properties such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction. We illustrate the utility of our approach in two different settings. First, we develop a semidefinite program to compute guaranteed and sharp upper bounds on the local Lipschitz constant of neural networks and illustrate the results on random networks as well as networks trained on MNIST. Second, we consider a linear time-invariant system in feedback with an approximate model predictive controller given by a neural network. We then turn the stability analysis into a semidefinite feasibility program and estimate an ellipsoidal invariant set for the closed-loop system.'
volume: 144
URL: https://proceedings.mlr.press/v144/hashemi21a.html
PDF: http://proceedings.mlr.press/v144/hashemi21a/hashemi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-hashemi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Navid
family: Hashemi
- given: Justin
family: Ruths
- given: Mahyar
family: Fazlyab
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 842-853
id: hashemi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 842
lastpage: 853
published: 2021-05-29 00:00:00 +0000
- title: 'Near-Optimal Data Source Selection for Bayesian Learning'
abstract: 'We study a fundamental problem in Bayesian learning, where the goal is to select a set of data sources with minimum cost while achieving a certain learning performance based on the data streams provided by the selected data sources. First, we show that the data source selection problem for Bayesian learning is NP-hard. We then show that the data source selection problem can be transformed into an instance of the submodular set covering problem studied in the literature, and provide a standard greedy algorithm to solve the data source selection problem with provable performance guarantees. Next, we propose a fast greedy algorithm that improves the running times of the standard greedy algorithm, while achieving performance guarantees that are comparable to those of the standard greedy algorithm. We provide insights into the performance guarantees of the greedy algorithms by analyzing special classes of the problem. Finally, we validate the theoretical results using numerical examples, and show that the greedy algorithms work well in practice.'
volume: 144
URL: https://proceedings.mlr.press/v144/ye21a.html
PDF: http://proceedings.mlr.press/v144/ye21a/ye21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ye21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Lintao
family: Ye
- given: Aritra
family: Mitra
- given: Shreyas
family: Sundaram
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 854-865
id: ye21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 854
lastpage: 865
published: 2021-05-29 00:00:00 +0000
- title: 'Accelerated Concurrent Learning Algorithms via Data-Driven Hybrid Dynamics and Nonsmooth ODEs'
abstract: 'We introduce a novel class of data-driven accelerated concurrent learning algorithms. These algorithms are suitable for the solution of high-performance system identification and parameter estimation problems with convergence certificates, in settings where the standard persistence of excitation (PE) condition is difficult to verify a priori. In order to achieve (uniform) fast convergence, the proposed algorithms exploit the existence of information-rich data sets, as well as certain non-smooth regularizations that generate a family of non-Lipschitz dynamics modeled as data-driven ordinary differential equations (DD-ODEs) and/or data-driven hybrid dynamical systems (DD-HDS). In each case, we provide stability and convergence certificates via Lyapunov theory. Moreover, to illustrate the advantages of the proposed algorithms, we consider an online estimation problem in Lithium-Ion batteries where the satisfaction of the PE condition is difficult to verify.'
volume: 144
URL: https://proceedings.mlr.press/v144/ochoa21a.html
PDF: http://proceedings.mlr.press/v144/ochoa21a/ochoa21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ochoa21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Daniel E.
family: Ochoa
- given: Jorge I.
family: Poveda
- given: Anantharam
family: Subbaraman
- given: Gerd S.
family: Schmidt
- given: Farshad R.
family: Pour-Safaei
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 866-878
id: ochoa21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 866
lastpage: 878
published: 2021-05-29 00:00:00 +0000
- title: 'Learning based attacks in Cyber Physical Systems: Exploration, Detection, and Control Cost trade-offs'
abstract: 'We study the problem of learning-based attacks in linear systems, where the communication channel between the controller and the plant can be hijacked by a malicious attacker. We assume the attacker learns the dynamics of the system from observations, then overrides the controller’s actuation signal, while mimicking legitimate operation by providing fictitious sensor readings to the controller. On the other hand, the controller is on the lookout to detect the presence of the attacker and tries to enhance the detection performance by carefully crafting its control signals. We study the trade-offs between the information acquired by the attacker from observations, the detection capabilities of the controller, and the control cost. Specifically, we provide tight upper and lower bounds on the expected $\epsilon$-deception time, namely the time required by the controller to make a decision regarding the presence of an attacker with confidence at least $(1-\epsilon\log(1/\epsilon))$. We then show a probabilistic lower bound on the time that must be spent by the attacker learning the system, in order for the controller to have a given expected $\epsilon$-deception time. We show that this bound is also order optimal, in the sense that if the attacker satisfies it, then there exists a learning algorithm with the given order expected deception time. Finally, we show a lower bound on the expected energy expenditure required to guarantee detection with confidence at least $1-\epsilon \log(1/\epsilon)$.'
volume: 144
URL: https://proceedings.mlr.press/v144/rangi21a.html
PDF: http://proceedings.mlr.press/v144/rangi21a/rangi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-rangi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Anshuka
family: Rangi
- given: Mohammad Javad
family: Khojasteh
- given: Massimo
family: Franceschetti
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 879-892
id: rangi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 879
lastpage: 892
published: 2021-05-29 00:00:00 +0000
- title: 'Minimax Adaptive Control for a Finite Set of Linear Systems'
abstract: 'An adaptive controller is derived for linear time-invariant systems with uncertain parameters restricted to a finite set, such that the closed loop system including the non-linear learning procedure is stable and satisfies a pre-specified l2-gain bound from disturbance to error. As a result, robustness to unmodelled (linear and non-linear) dynamics follows from the small gain theorem. The approach is based on a dynamic zero-sum game formulation with quadratic cost. Explicit upper and lower bounds on the optimal value function are stated and a simple formula for an adaptive controller achieving the upper bound is given. The controller uses semi-definite programming for optimal trade-off between exploration and exploitation. Once the uncertain parameters have been sufficiently estimated, the controller behaves like standard H-infinity state feedback.'
volume: 144
URL: https://proceedings.mlr.press/v144/rantzer21a.html
PDF: http://proceedings.mlr.press/v144/rantzer21a/rantzer21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-rantzer21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Anders
family: Rantzer
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 893-904
id: rantzer21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 893
lastpage: 904
published: 2021-05-29 00:00:00 +0000
- title: 'On exploration requirements for learning safety constraints'
abstract: 'Enforcing safety for dynamical systems is challenging, since it requires constraint satisfaction along trajectory predictions. Equivalent control constraints can be computed in the form of sets that enforce positive invariance, and can thus guarantee safety in feedback controllers without predictions. However, these constraints are cumbersome to compute from models, and it is not yet well established how to infer constraints from data. In this paper, we shed light on the key objects involved in learning control constraints from data in a model-free setting. In particular, we discuss the family of constraints that enforce safety in the context of a nominal control policy, and expose that these constraints do not need to be accurate everywhere. They only need to correctly exclude a subset of the state-actions that would cause failure, which we call the critical set.'
volume: 144
URL: https://proceedings.mlr.press/v144/massiani21a.html
PDF: http://proceedings.mlr.press/v144/massiani21a/massiani21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-massiani21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Pierre-François
family: Massiani
- given: Steve
family: Heim
- given: Sebastian
family: Trimpe
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 905-916
id: massiani21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 905
lastpage: 916
published: 2021-05-29 00:00:00 +0000
- title: 'Traffic Forecasting using Vehicle-to-Vehicle Communication'
abstract: 'Vehicle-to-vehicle (V2V) communication is utilized in order to provide real-time on-board traffic predictions. A hybrid approach is proposed where physics based models are supplemented with deep learning. A recurrent neural network is used to improve the accuracy of predictions given by first principle models. Our hybrid model is able to predict the velocity of individual vehicles up to 40 seconds into the future with improved accuracy over physics based baselines. A comprehensive study is conducted to evaluate different methods of integrating physics with deep learning.'
volume: 144
URL: https://proceedings.mlr.press/v144/wong21a.html
PDF: http://proceedings.mlr.press/v144/wong21a/wong21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-wong21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Steven
family: Wong
- given: Lejun
family: Jiang
- given: Robin
family: Walters
- given: Tamás G.
family: Molnár
- given: Gábor
family: Orosz
- given: Rose
family: Yu
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 917-929
id: wong21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 917
lastpage: 929
published: 2021-05-29 00:00:00 +0000
- title: 'Learning the Dynamics of Time Delay Systems with Trainable Delays'
abstract: 'In this paper, we propose a delay learning algorithm for time delay neural networks (TDNNs) based on mini-batch gradient descent. We show that the proposed algorithm is suitable for learning the dynamics of nonlinear time delay systems using TDNNs with trainable delays. The delays are introduced in the input layer and are learned with the same approach as weights and biases. The learned delays are easy to interpret and they are not restricted to discrete values. We demonstrate the method with an example of learning the dynamics of an autonomous time delay system. We show the performance of two proposed network architectures with trainable delays and compare it to a standard TDNN which has a large number of fixed (non-trainable) input delays. We demonstrate that the networks with trainable input delays achieve significantly better performance in closed-loop simulations compared to the standard TDNN. We also highlight that possible undesired local minima may be caused by the delays in the networks.'
volume: 144
URL: https://proceedings.mlr.press/v144/ji21a.html
PDF: http://proceedings.mlr.press/v144/ji21a/ji21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ji21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Xunbi A.
family: Ji
- given: Tamás G.
family: Molnár
- given: Sergei S.
family: Avedisov
- given: Gábor
family: Orosz
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 930-942
id: ji21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 930
lastpage: 942
published: 2021-05-29 00:00:00 +0000
- title: 'Decoupling dynamics and sampling: RNNs for unevenly sampled data and flexible online predictions'
abstract: 'Recurrent neural networks (RNNs) incorporate a memory state which makes them suitable for time series analysis. The Linear Antisymmetric RNN (LARNN) is a previously suggested recurrent layer which is proven to ensure long-term memory using a simple structure without gating. The LARNN is based on an ordinary differential equation which is solved using numerical methods with a defined step size variable. In this paper, this step size is related to the sampling frequency of the data used for training and testing of the models. In particular, industrial datasets often consist of measurements that are sampled and analyzed manually or sampled only for sufficiently large change. This is usually handled by resampling and performing some kind of interpolation to obtain a dataset with evenly sampled data. However, in doing so, one has to apply several assumptions regarding the nature of the data (e.g. linear interpolation) and valuable information about the dynamics captured by the actual sampling is lost. Furthermore, interpolation is non-causal by nature, and thus poses a challenge in an online setting as future values are not known. By using information about sampling time in the LARNN structure, interpolation is obsolete as the model decouples the dynamics of the sampled system from the sampling regime. Furthermore, the suggested structure enables predictions related to specific times in the future, resulting in updated predictions regardless of whether new measurements are available. The performance of the LARNN is compared to an LSTM on a simulated industrial benchmark system.'
volume: 144
URL: https://proceedings.mlr.press/v144/moe21a.html
PDF: http://proceedings.mlr.press/v144/moe21a/moe21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-moe21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Signe
family: Moe
- given: Camilla
family: Sterud
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 943-953
id: moe21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 943
lastpage: 953
published: 2021-05-29 00:00:00 +0000
- title: 'How Are Learned Perception-Based Controllers Impacted by the Limits of Robust Control?'
abstract: 'The difficulty of optimal control problems has classically been characterized in terms of system properties such as minimum eigenvalues of controllability/observability gramians. We revisit these characterizations in the context of the increasing popularity of data-driven techniques like reinforcement learning (RL) in control settings where input observations are high-dimensional images and transition dynamics are not known beforehand. Specifically, we ask: to what extent are quantifiable control and perceptual difficulty metrics of a control task predictive of the performance of various families of data-driven controllers? We modulate two different types of partial observability in a cartpole “stick-balancing” problem – the height of one visible fixation point on the cartpole, which can be used to tune fundamental limits of performance achievable by any controller, and by using depth or RGB image observations of the scene, we add different levels of perception noise without affecting system dynamics. In these settings, we empirically study two popular families of controllers: RL and system identification-based $H_\infty$ control, using visually estimated system state. Our results show the fundamental limits of robust control have corresponding implications for the sample-efficiency and performance of learned perception-based controllers.'
volume: 144
URL: https://proceedings.mlr.press/v144/xu21b.html
PDF: http://proceedings.mlr.press/v144/xu21b/xu21b.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-xu21b.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Jingxi
family: Xu
- given: Bruce
family: Lee
- given: Nikolai
family: Matni
- given: Dinesh
family: Jayaraman
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 954-966
id: xu21b
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 954
lastpage: 966
published: 2021-05-29 00:00:00 +0000
- title: 'Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems'
abstract: 'Autoregressive exogenous (ARX) systems are the general class of input-output dynamical systems used for modeling stochastic linear dynamical systems (LDS), including partially observable LDS such as LQG systems. In this work, we study the problem of system identification and adaptive control of unknown ARX systems. We provide finite-time learning guarantees for the ARX systems under both open-loop and closed-loop data collection. Using these guarantees, we design adaptive control algorithms for unknown ARX systems with arbitrary strongly convex or non-strongly convex quadratic regulating costs. Under strongly convex cost functions, we design an adaptive control algorithm based on online gradient descent to design and update the controllers that are constructed via a convex controller reparametrization. We show that our algorithm has $\tilde{O}(\sqrt{T})$ regret via an explore-and-commit approach, and if the model estimates are updated in epochs using closed-loop data collection, it attains the optimal regret of $\text{polylog}(T)$ after $T$ time-steps of interaction. For the case of non-strongly convex quadratic cost functions, we propose an adaptive control algorithm that deploys the optimism in the face of uncertainty principle to design the controller. In this setting, we show that the explore-and-commit approach has a regret upper bound of $\tilde{O}(T^{2/3})$, and the adaptive control with continuous model estimate updates attains $\tilde{O}(\sqrt{T})$ regret after $T$ time-steps.'
volume: 144
URL: https://proceedings.mlr.press/v144/lale21b.html
PDF: http://proceedings.mlr.press/v144/lale21b/lale21b.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-lale21b.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Sahin
family: Lale
- given: Kamyar
family: Azizzadenesheli
- given: Babak
family: Hassibi
- given: Anima
family: Anandkumar
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 967-979
id: lale21b
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 967
lastpage: 979
published: 2021-05-29 00:00:00 +0000
- title: 'Automating Discovery of Physics-Informed Neural State Space Models via Learning and Evolution'
abstract: 'Recent works exploring deep learning application to dynamical systems modeling have demonstrated that embedding physical priors into neural networks can yield more effective, physically-realistic, and data-efficient models. However, in the absence of complete prior knowledge of a dynamical system’s physical characteristics, determining the optimal structure and optimization strategy for these models can be difficult. In this work, we explore methods for discovering neural state space dynamics models for system identification. Starting with a design space of block-oriented state space models and structured linear maps with strong physical priors, we encode these components into a model genome alongside network structure, penalty constraints, and optimization hyperparameters. Demonstrating the overall utility of the design space, we employ an asynchronous genetic search algorithm that alternates between model selection and optimization and obtains accurate physically consistent models of three physical systems: an aerodynamics body, a continuous stirred tank reactor, and a two tank interacting system.'
volume: 144
URL: https://proceedings.mlr.press/v144/skomski21a.html
PDF: http://proceedings.mlr.press/v144/skomski21a/skomski21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-skomski21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Elliott
family: Skomski
- given: Ján
family: Drgoňa
- given: Aaron
family: Tuor
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 980-991
id: skomski21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 980
lastpage: 991
published: 2021-05-29 00:00:00 +0000
- title: 'Offset-free setpoint tracking using neural network controllers'
abstract: 'In this paper, we present a method to analyze local and global stability in offset-free setpoint tracking using neural network controllers and we provide ellipsoidal inner approximations of the corresponding region of attraction. We consider a feedback interconnection using a neural network controller in connection with an integrator, which allows for offset-free tracking of a desired piecewise constant reference that enters the controller as an external input. The feedback interconnection considered in this paper allows for general configurations of the neural network controller that include the special cases of output error and state feedback. Exploiting the fact that activation functions used in neural networks are slope-restricted, we derive linear matrix inequalities to verify stability using Lyapunov theory. After stating a global stability result, we present less conservative local stability conditions (i) for a given reference and (ii) for any reference from a certain set. The latter result even enables guaranteed tracking under setpoint changes using a reference governor which can lead to a significant increase of the region of attraction. Finally, we demonstrate the applicability of our analysis by verifying stability and offset-free tracking of a neural network controller that was trained to stabilize an inverted pendulum.'
volume: 144
URL: https://proceedings.mlr.press/v144/pauli21a.html
PDF: http://proceedings.mlr.press/v144/pauli21a/pauli21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-pauli21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Patricia
family: Pauli
- given: Johannes
family: Köhler
- given: Julian
family: Berberich
- given: Anne
family: Koch
- given: Frank
family: Allgöwer
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 992-1003
id: pauli21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 992
lastpage: 1003
published: 2021-05-29 00:00:00 +0000
- title: 'Maximum Likelihood Signal Matrix Model for Data-Driven Predictive Control'
abstract: 'The paper presents a data-driven predictive control framework based on an implicit input-output mapping derived directly from the signal matrix of collected data. This signal matrix model is derived by maximum likelihood estimation with noise-corrupted data. By linearizing online, the implicit model can be used as a linear constraint to characterize possible trajectories of the system in receding horizon control. The signal matrix can also be updated online with new measurements. This algorithm can be applied to large datasets and slowly time-varying systems, possibly with high noise levels. An additional regularization term on the prediction error can be introduced to enhance the predictability and thus the control performance. Numerical results demonstrate that the proposed signal matrix model predictive control algorithm is effective in multiple applications and performs better than existing data-driven predictive control algorithms.'
volume: 144
URL: https://proceedings.mlr.press/v144/yin21a.html
PDF: http://proceedings.mlr.press/v144/yin21a/yin21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-yin21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Mingzhou
family: Yin
- given: Andrea
family: Iannelli
- given: Roy S.
family: Smith
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1004-1014
id: yin21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1004
lastpage: 1014
published: 2021-05-29 00:00:00 +0000
- title: 'KPC: Learning-Based Model Predictive Control with Deterministic Guarantees'
abstract: 'We propose Kernel Predictive Control (KPC), a learning-based predictive control strategy that enjoys deterministic guarantees of safety. Noise-corrupted samples of the unknown system dynamics are used to learn several models through the formalism of non-parametric kernel regression. By treating each prediction step individually, we dispense with the need to propagate sets through highly non-linear maps, a procedure that often involves multiple conservative approximation steps. Finite-sample error bounds are then used to enforce state-feasibility by employing an efficient robust formulation. We then present a relaxation strategy that exploits on-line data to weaken the optimization problem constraints while preserving safety. Two numerical examples are provided to illustrate the applicability of the proposed control method.'
volume: 144
URL: https://proceedings.mlr.press/v144/maddalena21a.html
PDF: http://proceedings.mlr.press/v144/maddalena21a/maddalena21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-maddalena21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Emilio T.
family: Maddalena
- given: Paul
family: Scharnhorst
- given: Yuning
family: Jiang
- given: Colin N.
family: Jones
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1015-1026
id: maddalena21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1015
lastpage: 1026
published: 2021-05-29 00:00:00 +0000
- title: 'Contraction $\\mathcal{L}_1$-Adaptive Control using Gaussian Processes'
abstract: ' We present a control framework that enables safe simultaneous learning and control for systems subject to uncertainties. The two main constituents are contraction theory-based $\mathcal{L}_1$-adaptive ($\mathcal{CL}_1$) control and Bayesian learning in the form of Gaussian process (GP) regression. The $\mathcal{CL}_1$ controller ensures that control objectives are met while providing safety certificates. Furthermore, the controller incorporates any available data into GP models of uncertainties, which improves performance and enables the motion planner to achieve optimality safely. This way, the safe operation of the system is always guaranteed, even during the learning transients.'
volume: 144
URL: https://proceedings.mlr.press/v144/gahlawat21a.html
PDF: http://proceedings.mlr.press/v144/gahlawat21a/gahlawat21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-gahlawat21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Aditya
family: Gahlawat
- given: Arun
family: Lakshmanan
- given: Lin
family: Song
- given: Andrew
family: Patterson
- given: Zhuohuan
family: Wu
- given: Naira
family: Hovakimyan
- given: Evangelos A.
family: Theodorou
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1027-1040
id: gahlawat21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1027
lastpage: 1040
published: 2021-05-29 00:00:00 +0000
- title: 'Episodic Learning for Safe Bipedal Locomotion with Control Barrier Functions and Projection-to-State Safety'
abstract: 'This paper combines episodic learning and control barrier functions (CBFs) in the setting of bipedal locomotion. The safety guarantees that CBFs provide are only valid with perfect model knowledge; however, this assumption cannot be met on hardware platforms. To address this, we utilize the notion of Projection-to-State safety paired with a machine learning framework in an attempt to learn the model uncertainty as it affects the barrier functions. The proposed approach is demonstrated both in simulation and on hardware for the AMBER-3M bipedal robot in the context of the stepping-stone problem, which requires precise foot placement while walking dynamically.'
volume: 144
URL: https://proceedings.mlr.press/v144/csomay-shanklin21a.html
PDF: http://proceedings.mlr.press/v144/csomay-shanklin21a/csomay-shanklin21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-csomay-shanklin21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Noel
family: Csomay-Shanklin
- given: Ryan K.
family: Cosner
- given: Min
family: Dai
- given: Andrew J.
family: Taylor
- given: Aaron D.
family: Ames
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1041-1053
id: csomay-shanklin21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1041
lastpage: 1053
published: 2021-05-29 00:00:00 +0000
- title: 'Faster Policy Learning with Continuous-Time Gradients'
abstract: 'We study the estimation of policy gradients for continuous-time systems with known dynamics. By reframing policy learning in continuous time, we show that it is possible to construct a more efficient and accurate gradient estimator. The standard back-propagation through time estimator (BPTT) computes exact gradients for a crude discretization of the continuous-time system. In contrast, we approximate continuous-time gradients in the original system. With the explicit goal of estimating continuous-time gradients, we are able to discretize adaptively and construct a more efficient policy gradient estimator which we call the Continuous-Time Policy Gradient (CTPG). We show that replacing BPTT policy gradients with more efficient CTPG estimates results in faster and more robust learning in a variety of control tasks and simulators.'
volume: 144
URL: https://proceedings.mlr.press/v144/ainsworth21a.html
PDF: http://proceedings.mlr.press/v144/ainsworth21a/ainsworth21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ainsworth21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Samuel
family: Ainsworth
- given: Kendall
family: Lowrey
- given: John
family: Thickstun
- given: Zaid
family: Harchaoui
- given: Siddhartha
family: Srinivasa
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1054-1067
id: ainsworth21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1054
lastpage: 1067
published: 2021-05-29 00:00:00 +0000
- title: 'Learning How to Solve “Bubble Ball”'
abstract: '“Bubble Ball” is a game built on a 2D physics engine, where a finite set of objects can modify the motion of a bubble-like ball. The objective is to choose the set and the initial configuration of the objects, in order to get the ball to reach a target flag. The presence of obstacles, friction, contact forces and combinatorial object choices make the game hard to solve. In this paper, we propose a hierarchical predictive framework which solves Bubble Ball. Geometric, kinematic and dynamic models are used at different levels of the hierarchy. At each level of the game, data collected during failed iterations are used to update models at all hierarchical levels and converge to a feasible solution to the game. The proposed approach successfully solves a large set of Bubble Ball levels within a reasonable number of trials. The proposed framework can also be used to solve other physics-based games, especially with limited training data from human demonstrations.'
volume: 144
URL: https://proceedings.mlr.press/v144/lee21a.html
PDF: http://proceedings.mlr.press/v144/lee21a/lee21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-lee21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Hotae
family: Lee
- given: Monimoy
family: Bujarbaruah
- given: Francesco
family: Borrelli
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1068-1079
id: lee21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1068
lastpage: 1079
published: 2021-05-29 00:00:00 +0000
- title: 'Approximate Midpoint Policy Iteration for Linear Quadratic Control'
abstract: 'We present a midpoint policy iteration algorithm to solve linear quadratic optimal control problems in both model-based and model-free settings. The algorithm is a variation of Newton’s method, and we show that in the model-based setting it achieves cubic convergence, which is superior to standard policy iteration and policy gradient algorithms that achieve quadratic and linear convergence, respectively. We also demonstrate that the algorithm can be approximately implemented without knowledge of the dynamics model by using least-squares estimates of the state-action value function from trajectory data, from which policy improvements can be obtained. With sufficient trajectory data, the policy iterates converge cubically to approximately optimal policies, and this occurs with the same available sample budget as the approximate standard policy iteration. Numerical experiments demonstrate effectiveness of the proposed algorithms.'
volume: 144
URL: https://proceedings.mlr.press/v144/gravell21a.html
PDF: http://proceedings.mlr.press/v144/gravell21a/gravell21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-gravell21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Benjamin
family: Gravell
- given: Iman
family: Shames
- given: Tyler
family: Summers
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1080-1092
id: gravell21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1080
lastpage: 1092
published: 2021-05-29 00:00:00 +0000
- title: 'Safe Reinforcement Learning Using Robust Action Governor'
abstract: 'Reinforcement Learning (RL) is essentially a trial-and-error learning procedure which may cause unsafe behavior during the exploration-and-exploitation process. This hinders the application of RL to real-world control problems, especially to those for safety-critical systems. In this paper, we introduce a framework for safe RL that is based on integration of an RL algorithm with an add-on safety supervision module, called the Robust Action Governor (RAG), which exploits set-theoretic techniques and online optimization to manage safety-related requirements during learning. We illustrate this proposed safe RL framework through an application to automotive adaptive cruise control.'
volume: 144
URL: https://proceedings.mlr.press/v144/li21b.html
PDF: http://proceedings.mlr.press/v144/li21b/li21b.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-li21b.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Yutong
family: Li
- given: Nan
family: Li
- given: H. Eric
family: Tseng
- given: Anouck
family: Girard
- given: Dimitar
family: Filev
- given: Ilya
family: Kolmanovsky
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1093-1104
id: li21b
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1093
lastpage: 1104
published: 2021-05-29 00:00:00 +0000
- title: 'SEAGuL: Sample Efficient Adversarially Guided Learning of Value Functions'
abstract: 'Value functions are powerful abstractions broadly used across optimal control and robotics algorithms. Several lines of work have attempted to leverage trajectory optimization to learn value function approximations, usually by solving a large number of trajectory optimization problems as a means to generate training data. Even though these methods point to a promising direction, for sufficiently complex tasks, their sampling requirements can become computationally intractable. In this work, we leverage insights from adversarial learning in order to improve the sampling efficiency of a simple value function learning algorithm. We demonstrate how generating adversarial samples for this task presents a unique challenge due to the loss function that does not admit a closed form expression of the samples, but that instead requires the solution to a nonlinear optimization problem. Our key insight is that by leveraging duality theory from optimization, it is still possible to compute adversarial samples for this learning problem with virtually no computational overhead, including without having to keep track of shifting distributions of approximation errors or having to train generative models. We apply our method, named SEAGuL, to a canonical control task (balancing the acrobot) and a more challenging and highly dynamic nonlinear control task (the perching of a small glider). We demonstrate that compared to random sampling, with the same number of samples, training value function approximations using SEAGuL leads to improved generalization errors that also translate to control performance improvement.'
volume: 144
URL: https://proceedings.mlr.press/v144/landry21a.html
PDF: http://proceedings.mlr.press/v144/landry21a/landry21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-landry21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Benoit
family: Landry
- given: Hongkai
family: Dai
- given: Marco
family: Pavone
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1105-1117
id: landry21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1105
lastpage: 1117
published: 2021-05-29 00:00:00 +0000
- title: 'Fast Stochastic Kalman Gradient Descent for Reinforcement Learning'
abstract: ' As we move towards real world applications, there is an increasing need for scalable, online optimization algorithms capable of dealing with the non-stationarity of the real world. We revisit the problem of online policy evaluation in non-stationary deterministic MDPs through the lens of Kalman filtering. We introduce a randomized regularization technique called Stochastic Kalman Gradient Descent (SKGD) that, combined with a low rank update, generates a sequence of feasible iterates. SKGD is suitable for large scale optimization of non-linear function approximators. We evaluate the performance of SKGD in two controlled experiments, and in one real world application of microgrid control. In our experiments, SKGD is more robust to drift in the transition dynamics than state-of-the-art reinforcement learning algorithms, and the resulting policies are smoother.'
volume: 144
URL: https://proceedings.mlr.press/v144/totaro21a.html
PDF: http://proceedings.mlr.press/v144/totaro21a/totaro21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-totaro21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Simone
family: Totaro
- given: Anders
family: Jonsson
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1118-1129
id: totaro21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1118
lastpage: 1129
published: 2021-05-29 00:00:00 +0000
- title: 'Domain Adaptation Using System Invariant Dynamics Models'
abstract: 'Reinforcement learning requires large amounts of training data. For many systems, especially mobile robots, collecting this training data can be expensive and time consuming. We propose a novel domain adaptation method to reduce the amount of training data needed for model-based reinforcement learning methods to train policies for a target system. Using our method, the required amount of target system training data can be reduced by collecting data on a proxy system with similar, but not identical, dynamics on which training data is cheaper to collect. Our method models the underlying dynamics shared between the two systems using a System Invariant Dynamics Model (SIDM), and models each system’s relationship to the SIDM using encoders and decoders. When only limited amounts of target system training data are available, using target and proxy data to train the SIDM, encoders, and decoders can lead to more accurate dynamics models for the target system than using target system data alone. We demonstrate this approach using simulated wheeled robots driving over rough terrain, varying dynamics parameters between the target and proxy system, and find a reduction of 5-20x in the amount of data needed for these systems.'
volume: 144
URL: https://proceedings.mlr.press/v144/wang21c.html
PDF: http://proceedings.mlr.press/v144/wang21c/wang21c.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-wang21c.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Sean J.
family: Wang
- given: Aaron M.
family: Johnson
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1130-1141
id: wang21c
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1130
lastpage: 1141
published: 2021-05-29 00:00:00 +0000
- title: 'Forced Variational Integrator Networks for Prediction and Control of Mechanical Systems'
abstract: 'As deep learning becomes more prevalent for prediction and control of real physical systems, it is important that these models are consistent with physically plausible dynamics. This elicits a problem with how much inductive bias to impose on the model through known physical parameters and principles to reduce complexity of the learning problem to give us more reliable predictions. Recent work employs discrete variational integrators parameterized as a neural network architecture to learn conservative Lagrangian systems. The learned model captures and enforces global energy preserving properties of the system from very few trajectories. However, most real systems are inherently non-conservative and, in practice, we would also like to apply actuation. In this paper we extend this paradigm to account for general forcing (e.g. control input and friction) via discrete D’Alembert’s principle which may ultimately be used for control applications. We show that this forced variational integrator network (FVIN) architecture allows us to accurately account for energy dissipation and external forcing while still capturing the true underlying energy-based passive dynamics. We show that in application this can result in highly data-efficient model-based control and can predict on real non-conservative systems.'
volume: 144
URL: https://proceedings.mlr.press/v144/havens21a.html
PDF: http://proceedings.mlr.press/v144/havens21a/havens21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-havens21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Aaron
family: Havens
- given: Girish
family: Chowdhary
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1142-1153
id: havens21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1142
lastpage: 1153
published: 2021-05-29 00:00:00 +0000
- title: 'Offline Reinforcement Learning from Images with Latent Space Models'
abstract: 'Offline reinforcement learning (RL) refers to the task of learning policies from a static dataset of environment interactions. Offline RL enables extensive utilization and re-use of historical datasets, while also alleviating safety concerns associated with online exploration, thereby expanding the real-world applicability of RL. Most prior work in offline RL has focused on tasks with compact state representations. However, the ability to learn directly from rich observation spaces like images is critical for real-world applications like robotics. In this work, we build on recent advances in model-based algorithms for offline RL, and extend them to high-dimensional visual observation spaces. Model-based offline RL algorithms have achieved state-of-the-art results in state-based tasks and are minimax optimal. However, they rely crucially on the ability to quantify uncertainty in the model predictions. This is particularly challenging with image observations. To overcome this challenge, we propose to learn a latent-state dynamics model, and represent the uncertainty in the latent space. Our approach is both tractable in practice and corresponds to maximizing a lower bound of the ELBO in the unknown POMDP. Through experiments on a range of challenging image-based locomotion and robotic manipulation tasks, we find that our algorithm significantly outperforms previous offline model-free RL methods as well as state-of-the-art online visual model-based RL methods. Moreover, we also find that our approach excels on an image-based drawer closing task on a real robot using a pre-existing dataset. All results including videos can be found online at \url{https://sites.google.com/view/lompo/}.'
volume: 144
URL: https://proceedings.mlr.press/v144/rafailov21a.html
PDF: http://proceedings.mlr.press/v144/rafailov21a/rafailov21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-rafailov21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Rafael
family: Rafailov
- given: Tianhe
family: Yu
- given: Aravind
family: Rajeswaran
- given: Chelsea
family: Finn
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1154-1168
id: rafailov21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1154
lastpage: 1168
published: 2021-05-29 00:00:00 +0000
- title: 'Adaptive Sampling for Estimating Distributions: A Bayesian Upper Confidence Bound Approach'
abstract: 'The problem of adaptive sampling for estimating probability mass functions (pmf) uniformly well is considered. Performance of the sampling strategy is measured in terms of the worst-case mean squared error. A Bayesian variant of the existing upper confidence bound (UCB) based approaches is proposed. It is shown analytically that the performance of this Bayesian variant is no worse than the existing approaches. The posterior distribution on the pmfs in the Bayesian setting allows for a tighter computation of upper confidence bounds which leads to significant performance gains in practice. Using this approach, adaptive sampling protocols are proposed for estimating SARS-CoV-2 seroprevalence in various groups such as location and ethnicity. The effectiveness of this strategy is discussed using data obtained from a seroprevalence survey in Los Angeles county.'
volume: 144
URL: https://proceedings.mlr.press/v144/kartik21a.html
PDF: http://proceedings.mlr.press/v144/kartik21a/kartik21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-kartik21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Dhruva
family: Kartik
- given: Neeraj
family: Sood
- given: Urbashi
family: Mitra
- given: Tara
family: Javidi
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1169-1179
id: kartik21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1169
lastpage: 1179
published: 2021-05-29 00:00:00 +0000
- title: 'A New Objective for Identification of Partially Observed Linear Time-Invariant Dynamical Systems from Input-Output Data'
abstract: 'In this work we consider the identification of partially observed dynamical systems from a single trajectory of arbitrary input-output data. We propose a new optimization objective, derived as a MAP estimator of a certain posterior, that explicitly accounts for model, measurement, and parameter uncertainty. This algorithm identifies a linear time invariant model on a hidden latent space of pre-specified dimension. In contrast to Markov-parameter based least squares approaches, our algorithm can be applied to systems with arbitrary forcing and initial conditions, and we empirically show several orders of magnitude improvement in prediction quality compared to state-of-the-art approaches on both linear and nonlinear systems. Furthermore, we theoretically demonstrate how these existing approaches can be derived from simplifying assumptions on our system that neglect the possibility of model errors.'
volume: 144
URL: https://proceedings.mlr.press/v144/galioto21a.html
PDF: http://proceedings.mlr.press/v144/galioto21a/galioto21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-galioto21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Nicholas
family: Galioto
- given: Alex Arkady
family: Gorodetsky
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1180-1191
id: galioto21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1180
lastpage: 1191
published: 2021-05-29 00:00:00 +0000
- title: 'Generating Adversarial Disturbances for Controller Verification'
abstract: 'We consider the problem of generating maximally adversarial disturbances for a given controller assuming only black-box access to it. We propose an online learning approach to this problem that adaptively generates disturbances based on control inputs chosen by the controller. The goal of the disturbance generator is to minimize regret versus a benchmark disturbance-generating policy class, i.e., to maximize the cost incurred by the controller as well as possible compared to the best possible disturbance generator in hindsight (chosen from a benchmark policy class). In the setting where the dynamics are linear and the costs are quadratic, we formulate our problem as an online trust region (OTR) problem with memory and present a new online learning algorithm (MOTR) for this problem. We prove that this method competes with the best disturbance generator in hindsight (chosen from a rich class of benchmark policies that includes linear-dynamical disturbance generating policies). We demonstrate our approach on two simulated examples: (i) synthetically generated linear systems, and (ii) generating wind disturbances for the popular PX4 controller in the AirSim simulator. On these examples, we demonstrate that our approach outperforms several baseline approaches (including H-infinity disturbance generation and gradient-based methods).'
volume: 144
URL: https://proceedings.mlr.press/v144/ghai21a.html
PDF: http://proceedings.mlr.press/v144/ghai21a/ghai21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-ghai21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Udaya
family: Ghai
- given: David
family: Snyder
- given: Anirudha
family: Majumdar
- given: Elad
family: Hazan
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1192-1204
id: ghai21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1192
lastpage: 1204
published: 2021-05-29 00:00:00 +0000
- title: 'Optimal Cost Design for Model Predictive Control'
abstract: 'Many robotics domains use some form of nonconvex model predictive control (MPC) for planning, which sets a reduced time horizon, performs trajectory optimization, and replans at every step. The actual task typically requires a much longer horizon than is computationally tractable, and is specified via a cost function that cumulates over that full horizon. For instance, an autonomous car may have a cost function that makes a desired trade-off between efficiency, safety risk, and obeying traffic laws. In this work, we challenge the common assumption that the cost we should specify for MPC should be the same as the ground truth cost for the task. We propose that, because MPC solvers have short horizons, suffer from local optima, and, importantly, fail to account for future replanning ability, in many tasks it could be beneficial to purposefully choose a different cost function for MPC to optimize: one that results in the MPC rollout having low ground truth cost, rather than the MPC planned trajectory. We formalize this as an optimal cost design problem, and propose a zeroth-order optimization-based approach that enables us to design optimal costs for an MPC planning robot in continuous state and action MDPs. We test our approach in an autonomous driving domain where we find costs different from the ground truth that implicitly compensate for replanning, short horizon, and local minima issues. As an example, planning with vanilla MPC under the learned cost incentivizes the car to delay its decision until later, implicitly accounting for the fact that it will get more information in the future and be able to make a better decision.'
volume: 144
URL: https://proceedings.mlr.press/v144/jain21a.html
PDF: http://proceedings.mlr.press/v144/jain21a/jain21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-jain21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Avik
family: Jain
- given: Lawrence
family: Chan
- given: Daniel S.
family: Brown
- given: Anca D.
family: Dragan
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1205-1217
id: jain21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1205
lastpage: 1217
published: 2021-05-29 00:00:00 +0000
- title: 'Benchmarking Energy-Conserving Neural Networks for Learning Dynamics from Data'
abstract: 'The last few years have witnessed an increased interest in incorporating physics-informed inductive bias in deep learning frameworks. In particular, a growing volume of literature has been exploring ways to enforce energy conservation while using neural networks for learning dynamics from observed time-series data. In this work, we present a comparative analysis of energy-conserving neural networks - for example, the deep Lagrangian network and the Hamiltonian neural network - wherein the underlying physics is encoded in their computation graph. We focus on ten neural network models and explain the similarities and differences between the models. We compare their performance in four different physical systems. Our result highlights that using a high-dimensional coordinate system and then imposing restrictions via explicit constraints can lead to higher accuracy in the learned dynamics. We also point out the possibility of leveraging some of these energy-conserving models to design energy-based controllers.'
volume: 144
URL: https://proceedings.mlr.press/v144/zhong21a.html
PDF: http://proceedings.mlr.press/v144/zhong21a/zhong21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-zhong21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Yaofeng Desmond
family: Zhong
- given: Biswadip
family: Dey
- given: Amit
family: Chakraborty
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1218-1229
id: zhong21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1218
lastpage: 1229
published: 2021-05-29 00:00:00 +0000
- title: 'Learning Visually Guided Latent Actions for Assistive Teleoperation'
abstract: 'It is challenging for humans — particularly people living with physical disabilities — to control high-dimensional and dexterous robots. Prior work explores how robots can learn embedding functions that map a human’s low-dimensional inputs (e.g., via a joystick) to complex, high-dimensional robot actions for assistive teleoperation; unfortunately, there are many more high-dimensional actions than available low-dimensional inputs! To extract the correct action and maximally assist their human controller, robots must reason over their current context: for example, pressing a joystick right when interacting with a coffee cup indicates a different action than when interacting with food. In this work, we develop assistive robots that condition their latent embeddings on visual inputs. We explore a spectrum of plausible visual encoders and show that incorporating object detectors pretrained on a small amount of cheap and easy-to-collect structured data enables i) accurately and robustly recognizing the current context and ii) generalizing control embeddings to new objects and tasks. In user studies with a high-dimensional physical robot arm, participants leverage this approach to perform new tasks with unseen objects. Our results indicate that structured visual representations improve few-shot performance and are subjectively preferred by users.'
volume: 144
URL: https://proceedings.mlr.press/v144/karamcheti21a.html
PDF: http://proceedings.mlr.press/v144/karamcheti21a/karamcheti21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-karamcheti21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Siddharth
family: Karamcheti
- given: Albert J.
family: Zhai
- given: Dylan P.
family: Losey
- given: Dorsa
family: Sadigh
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1230-1241
id: karamcheti21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1230
lastpage: 1241
published: 2021-05-29 00:00:00 +0000
- title: 'Robust Reinforcement Learning: A Constrained Game-theoretic Approach'
abstract: 'Deep reinforcement learning (RL) methods provide state-of-the-art performance in complex control tasks. However, it has been widely recognized that RL methods often fail to generalize due to unaccounted uncertainties. In this work, we propose a game-theoretic framework for robust reinforcement learning that comprises many previous works as special cases. We formulate robust RL as a constrained minimax game between the RL agent and an environmental agent which represents uncertainties such as model parameter variations and adversarial disturbances. To solve the competitive optimization problems arising in our framework, we propose to use competitive mirror descent (CMD). This method accounts for the interactive nature of the game at each iteration while using Bregman divergences to adapt to the global structure of the constraint set. We demonstrate an RRL policy gradient algorithm that leverages Lagrangian duality and CMD. We empirically show that our algorithm is stable for large step sizes, resulting in faster convergence on linear quadratic games.'
volume: 144
URL: https://proceedings.mlr.press/v144/yu21a.html
PDF: http://proceedings.mlr.press/v144/yu21a/yu21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-yu21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Jing
family: Yu
- given: Clement
family: Gehring
- given: Florian
family: Schäfer
- given: Animashree
family: Anandkumar
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1242-1254
id: yu21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1242
lastpage: 1254
published: 2021-05-29 00:00:00 +0000
- title: 'Approximate Distributionally Robust Nonlinear Optimization with Application to Model Predictive Control: A Functional Approach'
abstract: 'We provide a functional view of distributional robustness motivated by robust statistics and functional analysis. This results in two practical computational approaches for approximate distributionally robust nonlinear optimization based on gradient norms and reproducing kernel Hilbert spaces. Our method can be applied to the settings of statistical learning with small sample size and test distribution shift. As a case study, we robustify scenario-based stochastic model predictive control with general nonlinear constraints. In particular, we demonstrate constraint satisfaction with only a small number of scenarios under distribution shift.'
volume: 144
URL: https://proceedings.mlr.press/v144/nemmour21a.html
PDF: http://proceedings.mlr.press/v144/nemmour21a/nemmour21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-nemmour21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Yassine
family: Nemmour
- given: Bernhard
family: Schölkopf
- given: Jia-Jie
family: Zhu
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1255-1269
id: nemmour21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1255
lastpage: 1269
published: 2021-05-29 00:00:00 +0000
- title: 'Regret-optimal measurement-feedback control'
abstract: 'We consider measurement-feedback control in linear dynamical systems from the perspective of regret minimization. Unlike most prior work in this area, we focus on the problem of designing an online controller which competes with the optimal dynamic sequence of control actions selected in hindsight, instead of the best controller in some specific class of controllers. This formulation of regret is attractive when the environment changes over time and no single controller achieves good performance over the entire time horizon. We show that in the measurement-feedback setting, unlike in the full-information setting, there is no single offline controller which outperforms every other offline controller on every disturbance, and propose a new H2-optimal offline controller as a benchmark for the online controller to compete against. We show that the corresponding regret-optimal online controller can be found via a novel reduction to the classical Nehari problem from robust control and present a tight data-dependent bound on its regret.'
volume: 144
URL: https://proceedings.mlr.press/v144/goel21a.html
PDF: http://proceedings.mlr.press/v144/goel21a/goel21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-goel21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Gautam
family: Goel
- given: Babak
family: Hassibi
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1270-1280
id: goel21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1270
lastpage: 1280
published: 2021-05-29 00:00:00 +0000
- title: 'Learning Finite-Dimensional Representations For Koopman Operators'
abstract: 'In this work, the problem of learning the Koopman operator of a discrete-time autonomous system is considered. The learning problem is formulated as a constrained regularized optimization over the infinite-dimensional space of linear operators. We show that under certain but general conditions, a representer theorem holds for the learning problem. This allows reformulating the problem in a finite-dimensional space without any loss of precision. Following this, we consider various cases of regularization and constraint for the latent Koopman operator, including the operator norm, the Frobenius norm, and rank. Subsequently, we derive the corresponding finite-dimensional problem.'
volume: 144
URL: https://proceedings.mlr.press/v144/khosravi21a.html
PDF: http://proceedings.mlr.press/v144/khosravi21a/khosravi21a.pdf
edit: https://github.com/mlresearch//v144/edit/gh-pages/_posts/2021-05-29-khosravi21a.md
series: 'Proceedings of Machine Learning Research'
container-title: 'Proceedings of the 3rd Conference on Learning for Dynamics and Control'
publisher: 'PMLR'
author:
- given: Mohammad
family: Khosravi
editor:
- given: Ali
family: Jadbabaie
- given: John
family: Lygeros
- given: George J.
family: Pappas
- given: Pablo
family: A. Parrilo
- given: Benjamin
family: Recht
- given: Claire J.
family: Tomlin
- given: Melanie N.
family: Zeilinger
page: 1281-1281
id: khosravi21a
issued:
date-parts:
- 2021
- 5
- 29
firstpage: 1281
lastpage: 1281
published: 2021-05-29 00:00:00 +0000