Volume 144: Learning for Dynamics and Control, 7-8 June 2021, The Cloud

Editors: Ali Jadbabaie, John Lygeros, George J. Pappas, Pablo A. Parrilo, Benjamin Recht, Claire J. Tomlin, Melanie N. Zeilinger

Preface

Ali Jadbabaie, John Lygeros, George J. Pappas, Pablo A. Parrilo, Benjamin Recht, Claire J. Tomlin, Melanie N. Zeilinger; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1-5

On the Model-Based Stochastic Value Gradient for Continuous Reinforcement Learning

Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:6-20

Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning

Anoopkumar Sonar, Vincent Pacelli, Anirudha Majumdar; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:21-33

Learning-based State Reconstruction for a Scalar Hyperbolic PDE under noisy Lagrangian Sensing

Matthieu Barreau, John Liu, Karl Henrik Johansson; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:34-46

Nonlinear Two-Time-Scale Stochastic Approximation: Convergence and Finite-Time Performance

Thinh T. Doan; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:47-47

Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions

Peng Zhao, Lijun Zhang; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:48-59

Learning Partially Observed Linear Dynamical Systems from Logarithmic Number of Samples

Salar Fattahi; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:60-72

Estimating Disentangled Belief about Hidden State and Hidden Task for Meta-Reinforcement Learning

Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:73-86

The benefits of sharing: a cloud-aided performance-driven framework to learn optimal feedback policies

Laura Ferrarotti, Valentina Breschi, Alberto Bemporad; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:87-98

Data-driven design of switching reference governors for brake-by-wire applications

Andrea Sassella, Valentina Breschi, Simone Formentin; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:99-110

Graph Neural Networks for Distributed Linear-Quadratic Control

Fernando Gama, Somayeh Sojoudi; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:111-124

Learning to Actively Reduce Memory Requirements for Robot Control Tasks

Meghan Booker, Anirudha Majumdar; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:125-137

Non-conservative Design of Robust Tracking Controllers Based on Input-output Data

Liang Xu, Mustafa Sahin Turan, Baiwei Guo, Giancarlo Ferrari-Trecate; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:138-149

Optimal Algorithms for Submodular Maximization with Distributed Constraints

Alexander Robey, Arman Adibi, Brent Schlotfeldt, Hamed Hassani, George J. Pappas; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:150-162

Data-Driven Reachability Analysis Using Matrix Zonotopes

Amr Alanwar, Anne Koch, Frank Allgöwer, Karl Henrik Johansson; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:163-175

Learning local modules in dynamic networks

Paul M.J. Van den Hof, Karthik R. Ramaswamy; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:176-188

Data-Driven System Level Synthesis

Anton Xue, Nikolai Matni; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:189-200

Learning Approximate Forward Reachable Sets Using Separating Kernels

Adam J. Thorpe, Kendric R. Ortiz, Meeko M. K. Oishi; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:201-212

On Uninformative Optimal Policies in Adaptive LQR with Unknown B-Matrix

Ingvar Ziemann, Henrik Sandberg; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:213-226

Cautious Bayesian Optimization for Efficient and Scalable Policy Search

Lukas P. Fröhlich, Melanie N. Zeilinger, Edgar D. Klenske; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:227-240

Nonlinear state-space identification using deep encoder networks

Gerben Beintema, Roland Toth, Maarten Schoukens; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:241-250

Input Convex Neural Networks for Building MPC

Felix Bünning, Adrian Schalbetter, Ahmed Aboudonia, Mathias Hudoba de Badyn, Philipp Heer, John Lygeros; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:251-262

Abstraction-based branch and bound approach to Q-learning for hybrid optimal control

Benoît Legat, Raphaël M. Jungers, Jean Bouchat; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:263-274

A unified framework for Hamiltonian deep neural networks

Clara Lucía Galimberti, Liang Xu, Giancarlo Ferrari Trecate; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:275-286

Data-Driven Controller Design via Finite-Horizon Dissipativity

Nils Wieler, Julian Berberich, Anne Koch, Frank Allgöwer; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:287-298

Safe Bayesian Optimisation for Controller Design by Utilising the Parameter Space Approach

Lorenz Dörschel, David Stenger, Dirk Abel; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:299-311

Tight sampling and discarding bounds for scenario programs with an arbitrary number of removed samples

Licio Romao, Kostas Margellos, Antonis Papachristodoulou; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:312-323

Probabilistic robust linear quadratic regulators with Gaussian processes

Alexander von Rohr, Matthias Neumann-Brosig, Sebastian Trimpe; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:324-335

Safe Reinforcement Learning of Control-Affine Systems with Vertex Networks

Liyuan Zheng, Yuanyuan Shi, Lillian J. Ratliff, Baosen Zhang; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:336-347

Sequential Topological Representations for Predictive Models of Deformable Objects

Rika Antonova, Anastasia Varava, Peiyang Shi, J. Frederico Carvalho, Danica Kragic; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:348-360

Robust error bounds for quantised and pruned neural networks

Jiaqi Li, Ross Drummond, Stephen R. Duncan; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:361-372

The Dynamics of Gradient Descent for Overparametrized Neural Networks

Siddhartha Satpathi, R Srikant; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:373-384

Bridging Physics-based and Data-driven modeling for Learning Dynamical Systems

Rui Wang, Danielle Maddix, Christos Faloutsos, Yuyang Wang, Rose Yu; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:385-398

Certainty Equivalent Perception-Based Control

Sarah Dean, Benjamin Recht; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:399-411

When to stop value iteration: stability and near-optimality versus computation

Mathieu Granzotto, Romain Postoyan, Dragan Nešić, Lucian Buşoniu, Jamal Daafouz; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:412-424

Learning Recurrent Neural Net Models of Nonlinear Systems

Joshua Hanson, Maxim Raginsky, Eduardo Sontag; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:425-435

A Data Driven, Convex Optimization Approach to Learning Koopman Operators

Mario Sznaier; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:436-446

Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning

Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:447-458

Neural Lyapunov Redesign

Arash Mehrjou, Mohammad Ghavamzadeh, Bernhard Schölkopf; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:459-470

Regret Bounds for Adaptive Nonlinear Control

Nicholas M. Boffi, Stephen Tu, Jean-Jacques E. Slotine; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:471-483

Self-Supervised Learning of Long-Horizon Manipulation Tasks with Finite-State Task Machines

Junchi Liang, Abdeslam Boularias; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:484-497

Safely Learning Dynamical Systems from Short Trajectories

Amir Ali Ahmadi, Abraar Chaudhry, Vikas Sindhwani, Stephen Tu; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:498-509

Adaptive Risk Sensitive Model Predictive Control with Stochastic Search

Ziyi Wang, Oswin So, Keuntaek Lee, Evangelos A. Theodorou; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:510-522

Nonlinear Data-Enabled Prediction and Control

Yingzhao Lian, Colin N. Jones; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:523-534

Learning-based feedforward augmentation for steady state rejection of residual dynamics on a nanometer-accurate planar actuator system

Ioannis Proimadis, Yorick Broens, Roland Tóth, Hans Butler; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:535-546

Suboptimal coverings for continuous spaces of control tasks

James A. Preiss, Gaurav S. Sukhatme; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:547-558

Sample Complexity of Linear Quadratic Gaussian (LQG) Control for Output Feedback Systems

Yang Zheng, Luca Furieri, Maryam Kamgarpour, Na Li; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:559-570

Chance-constrained quasi-convex optimization with application to data-driven switched systems control

Guillaume O. Berger, Raphaël M. Jungers, Zheming Wang; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:571-583

Control of Unknown (Linear) Systems with Receding Horizon Learning

Christian Ebenbauer, Fabian Pfitz, Shuyou Yu; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:584-596

Provably Sample Efficient Reinforcement Learning in Competitive Linear Quadratic Systems

Jingwei Zhang, Zhuoran Yang, Zhengyuan Zhou, Zhaoran Wang; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:597-598

Analysis of the Optimization Landscape of Linear Quadratic Gaussian (LQG) Control

Yujie Tang, Yang Zheng, Na Li; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:599-610

Physics-penalised Regularisation for Learning Dynamics Models with Contact

Gabriella Pizzuto, Michael Mistry; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:611-622

The Impact of Data on the Stability of Learning-Based Control

Armin Lederer, Alexandre Capone, Thomas Beckers, Jonas Umlauft, Sandra Hirche; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:623-635

Accelerated Learning with Robustness to Adversarial Regressors

Joseph E. Gaudio, Anuradha M. Annaswamy, José M. Moreu, Michael A. Bolender, Travis E. Gibson; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:636-650

Stability and Identification of Random Asynchronous Linear Time-Invariant Systems

Sahin Lale, Oguzhan Teke, Babak Hassibi, Anima Anandkumar; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:651-663

Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory

Lenart Treven, Sebastian Curi, Mojmír Mutný, Andreas Krause; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:664-676

Training deep residual networks for uniform approximation guarantees

Matteo Marchi, Bahman Gharesifard, Paulo Tabuada; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:677-688

LEOC: A Principled Method in Integrating Reinforcement Learning and Classical Control Theory

Naifu Zhang, Nicholas Capel; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:689-701

Primal-dual Learning for the Model-free Risk-constrained Linear Quadratic Regulator

Feiran Zhao, Keyou You; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:702-714

Exploiting Sparsity for Neural Network Verification

Matthew Newton, Antonis Papachristodoulou; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:715-727

Uncertain-aware Safe Exploratory Planning using Gaussian Process and Neural Control Contraction Metric

Dawei Sun, Mohammad Javad Khojasteh, Shubhanshu Shekhar, Chuchu Fan; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:728-741

Stable Online Control of Linear Time-Varying Systems

Guannan Qu, Yuanyuan Shi, Sahin Lale, Anima Anandkumar, Adam Wierman; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:742-753

ARDL - A Library for Adaptive Robotic Dynamics Learning

Joshua Smith, Michael Mistry; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:754-766

Linear Regression over Networks with Communication Guarantees

Konstantinos Gatsis; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:767-778

Nested Mixture of Experts: Cooperative and Competitive Learning of Hybrid Dynamical System

Junhyeok Ahn, Luis Sentis; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:779-790

Learning without Knowing: Unobserved Context in Continuous Transfer Reinforcement Learning

Chenyu Liu, Yan Zhang, Yi Shen, Michael M. Zavlanos; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:791-802

Data-Driven Abstraction of Monotone Systems

Anas Makdesi, Antoine Girard, Laurent Fribourg; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:803-814

Reward Biased Maximum Likelihood Estimation for Reinforcement Learning

Akshay Mete, Rahul Singh, Xi Liu, P. R. Kumar; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:815-827

Feedback from Pixels: Output Regulation via Learning-based Scene View Synthesis

Murad Abu-Khalaf, Sertac Karaman, Daniela Rus; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:828-841

Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization

Navid Hashemi, Justin Ruths, Mahyar Fazlyab; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:842-853

Near-Optimal Data Source Selection for Bayesian Learning

Lintao Ye, Aritra Mitra, Shreyas Sundaram; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:854-865

Accelerated Concurrent Learning Algorithms via Data-Driven Hybrid Dynamics and Nonsmooth ODEs

Daniel E. Ochoa, Jorge I. Poveda, Anantharam Subbaraman, Gerd S. Schmidt, Farshad R. Pour-Safaei; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:866-878

Learning based attacks in Cyber Physical Systems: Exploration, Detection, and Control Cost trade-offs

Anshuka Rangi, Mohammad Javad Khojasteh, Massimo Franceschetti; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:879-892

Minimax Adaptive Control for a Finite Set of Linear Systems

Anders Rantzer; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:893-904

On exploration requirements for learning safety constraints

Pierre-François Massiani, Steve Heim, Sebastian Trimpe; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:905-916

Traffic Forecasting using Vehicle-to-Vehicle Communication

Steven Wong, Lejun Jiang, Robin Walters, Tamás G. Molnár, Gábor Orosz, Rose Yu; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:917-929

Learning the Dynamics of Time Delay Systems with Trainable Delays

Xunbi A. Ji, Tamás G. Molnár, Sergei S. Avedisov, Gábor Orosz; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:930-942

Decoupling dynamics and sampling: RNNs for unevenly sampled data and flexible online predictions

Signe Moe, Camilla Sterud; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:943-953

How Are Learned Perception-Based Controllers Impacted by the Limits of Robust Control?

Jingxi Xu, Bruce Lee, Nikolai Matni, Dinesh Jayaraman; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:954-966

Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems

Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Anima Anandkumar; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:967-979

Automating Discovery of Physics-Informed Neural State Space Models via Learning and Evolution

Elliott Skomski, Ján Drgoňa, Aaron Tuor; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:980-991

Offset-free setpoint tracking using neural network controllers

Patricia Pauli, Johannes Köhler, Julian Berberich, Anne Koch, Frank Allgöwer; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:992-1003

Maximum Likelihood Signal Matrix Model for Data-Driven Predictive Control

Mingzhou Yin, Andrea Iannelli, Roy S. Smith; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1004-1014

KPC: Learning-Based Model Predictive Control with Deterministic Guarantees

Emilio T. Maddalena, Paul Scharnhorst, Yuning Jiang, Colin N. Jones; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1015-1026

Contraction $\mathcal{L}_1$-Adaptive Control using Gaussian Processes

Aditya Gahlawat, Arun Lakshmanan, Lin Song, Andrew Patterson, Zhuohuan Wu, Naira Hovakimyan, Evangelos A. Theodorou; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1027-1040

Episodic Learning for Safe Bipedal Locomotion with Control Barrier Functions and Projection-to-State Safety

Noel Csomay-Shanklin, Ryan K. Cosner, Min Dai, Andrew J. Taylor, Aaron D. Ames; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1041-1053

Faster Policy Learning with Continuous-Time Gradients

Samuel Ainsworth, Kendall Lowrey, John Thickstun, Zaid Harchaoui, Siddhartha Srinivasa; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1054-1067

Learning How to Solve “Bubble Ball”

Hotae Lee, Monimoy Bujarbaruah, Francesco Borrelli; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1068-1079

Approximate Midpoint Policy Iteration for Linear Quadratic Control

Benjamin Gravell, Iman Shames, Tyler Summers; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1080-1092

Safe Reinforcement Learning Using Robust Action Governor

Yutong Li, Nan Li, H. Eric Tseng, Anouck Girard, Dimitar Filev, Ilya Kolmanovsky; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1093-1104

SEAGuL: Sample Efficient Adversarially Guided Learning of Value Functions

Benoit Landry, Hongkai Dai, Marco Pavone; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1105-1117

Fast Stochastic Kalman Gradient Descent for Reinforcement Learning

Simone Totaro, Anders Jonsson; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1118-1129

Domain Adaptation Using System Invariant Dynamics Models

Sean J. Wang, Aaron M. Johnson; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1130-1141

Forced Variational Integrator Networks for Prediction and Control of Mechanical Systems

Aaron Havens, Girish Chowdhary; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1142-1153

Offline Reinforcement Learning from Images with Latent Space Models

Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, Chelsea Finn; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1154-1168

Adaptive Sampling for Estimating Distributions: A Bayesian Upper Confidence Bound Approach

Dhruva Kartik, Neeraj Sood, Urbashi Mitra, Tara Javidi; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1169-1179

A New Objective for Identification of Partially Observed Linear Time-Invariant Dynamical Systems from Input-Output Data

Nicholas Galioto, Alex Arkady Gorodetsky; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1180-1191

Generating Adversarial Disturbances for Controller Verification

Udaya Ghai, David Snyder, Anirudha Majumdar, Elad Hazan; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1192-1204

Optimal Cost Design for Model Predictive Control

Avik Jain, Lawrence Chan, Daniel S. Brown, Anca D. Dragan; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1205-1217

Benchmarking Energy-Conserving Neural Networks for Learning Dynamics from Data

Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1218-1229

Learning Visually Guided Latent Actions for Assistive Teleoperation

Siddharth Karamcheti, Albert J. Zhai, Dylan P. Losey, Dorsa Sadigh; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1230-1241

Robust Reinforcement Learning: A Constrained Game-theoretic Approach

Jing Yu, Clement Gehring, Florian Schäfer, Animashree Anandkumar; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1242-1254

Approximate Distributionally Robust Nonlinear Optimization with Application to Model Predictive Control: A Functional Approach

Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1255-1269

Regret-optimal measurement-feedback control

Gautam Goel, Babak Hassibi; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1270-1280

Learning Finite-Dimensional Representations For Koopman Operators

Mohammad Khosravi; Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1281-1281