Volume 120: Learning for Dynamics and Control, 10-11 June 2020, The Cloud

Editors: Alexandre M. Bayen, Ali Jadbabaie, George Pappas, Pablo A. Parrilo, Benjamin Recht, Claire Tomlin, Melanie Zeilinger

Preface

Alexandre M. Bayen, Ali Jadbabaie, George Pappas, Pablo A. Parrilo, Benjamin Recht, Claire Tomlin, Melanie Zeilinger ; PMLR 120:1-4

Actively Learning Gaussian Process Dynamics

Mona Buisson-Fenet, Friedrich Solowjow, Sebastian Trimpe ; PMLR 120:5-15

Finite Sample System Identification: Optimal Rates and the Role of Regularization

Yue Sun, Samet Oymak, Maryam Fazel ; PMLR 120:16-25

Finite-Time Performance of Distributed Two-Time-Scale Stochastic Approximation

Thinh Doan, Justin Romberg ; PMLR 120:26-36

Virtual Reference Feedback Tuning with data-driven reference model selection

Valentina Breschi, Simone Formentin ; PMLR 120:37-45

Direct data-driven control with embedded anti-windup compensation

Valentina Breschi, Simone Formentin ; PMLR 120:46-54

Sparse and Low-bias Estimation of High Dimensional Vector Autoregressive Models

Trevor Ruiz, Sharmodeep Bhattacharyya, Mahesh Balasubramanian, Kristofer Bouchard ; PMLR 120:55-64

Robust Online Model Adaptation by Extended Kalman Filter with Exponential Moving Average and Dynamic Multi-Epoch Strategy

Abulikemu Abuduweili, Changliu Liu ; PMLR 120:65-74

Estimating Reachable Sets with Scenario Optimization

Alex Devonport, Murat Arcak ; PMLR 120:75-84

LSTM Neural Networks: Input to State Stability and Probabilistic Safety Verification

Fabio Bonassi, Enrico Terzi, Marcello Farina, Riccardo Scattolini ; PMLR 120:85-94

Bayesian joint state and parameter tracking in autoregressive models

Ismail Senoz, Albert Podusenko, Wouter M. Kouw, Bert de Vries ; PMLR 120:95-104

Learning to Correspond Dynamical Systems

Nam Hee Kim, Zhaoming Xie, Michiel van de Panne ; PMLR 120:105-117

Learning solutions to hybrid control problems using Benders cuts

Sandeep Menta, Joseph Warrington, John Lygeros, Manfred Morari ; PMLR 120:118-126

Feed-forward Neural Network with Trainable Delay and Application to Car-following

Xunbi Ji, Sergei Avedisov, Tamas Molnar, Gabor Orosz ; PMLR 120:127-136

Exploiting Model Sparsity in Adaptive MPC: A Compressed Sensing Viewpoint

Monimoy Bujarbaruah, Charlott Vallon ; PMLR 120:137-146

Structured Variational Inference in Partially Observable Unstable Gaussian Process State Space Models

Sebastian Curi, Silvan Melchior, Felix Berkenkamp, Andreas Krause ; PMLR 120:147-157

Regret Bound for Safe Gaussian Process Bandit Optimization

Sanae Amani, Mahnoosh Alizadeh, Christos Thrampoulidis ; PMLR 120:158-159

Smart Forgetting for Safe Online Learning with Gaussian Processes

Jonas Umlauft, Thomas Beckers, Alexandre Capone, Armin Lederer, Sandra Hirche ; PMLR 120:160-169

Linear Antisymmetric Recurrent Neural Networks

Signe Moe, Filippo Remonato, Esten Ingar Grøtli, Jan Tommy Gravdahl ; PMLR 120:170-178

Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence

Kaiqing Zhang, Bin Hu, Tamer Basar ; PMLR 120:179-190

A Finite-Sample Deviation Bound for Stable Autoregressive Processes

Rodrigo A. González, Cristian R. Rojas ; PMLR 120:191-200

Online Data Poisoning Attacks

Xuezhou Zhang, Xiaojin Zhu, Laurent Lessard ; PMLR 120:201-210

Practical Reinforcement Learning For MPC: Learning from sparse objectives in under an hour on a real robot

Napat Karnchanachari, Miguel Iglesia Valls, David Hoeller, Marco Hutter ; PMLR 120:211-224

Learning Constrained Dynamics with Gauss’ Principle adhering Gaussian Processes

Andreas Geist, Sebastian Trimpe ; PMLR 120:225-234

Counterfactual Programming for Optimal Control

Luiz F.O. Chamon, Santiago Paternain, Alejandro Ribeiro ; PMLR 120:235-244

Learning Navigation Costs from Demonstrations with Semantic Observations

Tianyu Wang, Vikas Dhiman, Nikolay Atanasov ; PMLR 120:245-255

Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems

Guannan Qu, Adam Wierman, Na Li ; PMLR 120:256-266

Black-box continuous-time transfer function estimation with stability guarantees: a kernel-based approach

Mirko Mazzoleni, Matteo Scandella, Simone Formentin, Fabio Previdi ; PMLR 120:267-276

Model-Predictive Planning via Cross-Entropy and Gradient-Based Optimization

Homanga Bharadhwaj, Kevin Xie, Florian Shkurti ; PMLR 120:277-286

Learning the Globally Optimal Distributed LQ Regulator

Luca Furieri, Yang Zheng, Maryam Kamgarpour ; PMLR 120:287-297

VarNet: Variational Neural Networks for the Solution of Partial Differential Equations

Reza Khodayi-Mehr, Michael Zavlanos ; PMLR 120:298-307

Tractable Reinforcement Learning of Signal Temporal Logic Objectives

Harish Venkataraman, Derya Aksaray, Peter Seiler ; PMLR 120:308-317

A Spatially and Temporally Attentive Joint Trajectory Prediction Framework for Modeling Vessel Intent

Jasmine Sekhon, Cody Fleming ; PMLR 120:318-327

Structured Mechanical Models for Robot Learning and Control

Jayesh K. Gupta, Kunal Menda, Zachary Manchester, Mykel Kochenderfer ; PMLR 120:328-337

Data-driven Identification of Approximate Passive Linear Models for Nonlinear Systems

S Sivaranjani, Etika Agarwal, Vijay Gupta ; PMLR 120:338-339

Constraint Management for Batch Processes Using Iterative Learning Control and Reference Governors

Aidan Laracy, Hamid Ossareh ; PMLR 120:340-349

Robust Guarantees for Perception-Based Control

Sarah Dean, Nikolai Matni, Benjamin Recht, Vickie Ye ; PMLR 120:350-360

Learning Convex Optimization Control Policies

Akshay Agrawal, Shane Barratt, Stephen Boyd, Bartolomeo Stellato ; PMLR 120:361-373

Fitting a Linear Control Policy to Demonstrations with a Kalman Constraint

Malayandi Palan, Shane Barratt, Alex McCauley, Dorsa Sadigh, Vikas Sindhwani, Stephen Boyd ; PMLR 120:374-383

Universal Simulation of Dynamical Systems by Recurrent Neural Nets

Joshua Hanson, Maxim Raginsky ; PMLR 120:384-392

Contracting Implicit Recurrent Neural Networks: Stable Models with Improved Trainability

Max Revay, Ian Manchester ; PMLR 120:393-403

On the Robustness of Data-Driven Controllers for Linear Systems

Rajasekhar Anguluri, Abed Alrahman Al Makdah, Vaibhav Katewa, Fabio Pasqualetti ; PMLR 120:404-412

Faster saddle-point optimization for solving large-scale Markov decision processes

Joan Bas Serrano, Gergely Neu ; PMLR 120:413-423

On Simulation and Trajectory Prediction with Gaussian Process Dynamics

Lukas Hewing, Elena Arcari, Lukas P. Fröhlich, Melanie N. Zeilinger ; PMLR 120:424-434

Sample Complexity of Kalman Filtering for Unknown Systems

Anastasios Tsiamis, Nikolai Matni, George Pappas ; PMLR 120:435-444

NeurOpt: Neural network based optimization for building energy management and climate control

Achin Jain, Francesco Smarra, Enrico Reticcioli, Alessandro D’Innocenzo, Manfred Morari ; PMLR 120:445-454

Bayesian model predictive control: Efficient model exploration and regret bounds using posterior sampling

Kim Peter Wabersich, Melanie Zeilinger ; PMLR 120:455-464

Parameter Optimization for Learning-based Control of Control-Affine Systems

Armin Lederer, Alexandre Capone, Sandra Hirche ; PMLR 120:465-475

Riccati updates for online linear quadratic control

Mohammad Akbari, Bahman Gharesifard, Tamas Linder ; PMLR 120:476-485

A Theoretical Analysis of Deep Q-Learning

Zhuoran Yang, Yuchen Xie, Zhaoran Wang ; PMLR 120:486-489

Localized active learning of Gaussian process state space models

Alexandre Capone, Gerrit Noske, Jonas Umlauft, Thomas Beckers, Armin Lederer, Sandra Hirche ; PMLR 120:490-499

Generating Robust Supervision for Learning-Based Visual Navigation Using Hamilton-Jacobi Reachability

Anjian Li, Somil Bansal, Georgios Giovanis, Varun Tolani, Claire Tomlin, Mo Chen ; PMLR 120:500-510

Learning supported Model Predictive Control for Tracking of Periodic References

Janine Matschek, Rolf Findeisen ; PMLR 120:511-520

Data-driven distributionally robust LQR with multiplicative noise

Peter Coppens, Mathijs Schuurmans, Panagiotis Patrinos ; PMLR 120:521-530

Learning the model-free linear quadratic regulator via random search

Hesameddin Mohammadi, Mihailo R. Jovanović, Mahdi Soltanolkotabi ; PMLR 120:531-539

Lambda-Policy Iteration with Randomization for Contractive Models with Infinite Policies: Well-Posedness and Convergence

Yuchao Li, Karl Henrik Johansson, Jonas Mårtensson ; PMLR 120:540-549

Optimistic robust linear quadratic dual control

Jack Umenberger, Thomas B. Schön ; PMLR 120:550-560

Bayesian Learning with Adaptive Load Allocation Strategies

Manxi Wu, Saurabh Amin, Asuman Ozdaglar ; PMLR 120:561-570

Learning-based Stochastic Model Predictive Control with State-Dependent Uncertainty

Angelo Domenico Bonzanini, Ali Mesbah ; PMLR 120:571-580

Stable Reinforcement Learning with Unbounded State Space

Devavrat Shah, Qiaomin Xie, Zhi Xu ; PMLR 120:581-581

Periodic Q-Learning

Donghwan Lee, Niao He ; PMLR 120:582-598

Robust Learning-Based Control via Bootstrapped Multiplicative Noise

Benjamin Gravell, Tyler Summers ; PMLR 120:599-607

Robust Regression for Safe Exploration in Control

Anqi Liu, Guanya Shi, Soon-Jo Chung, Anima Anandkumar, Yisong Yue ; PMLR 120:608-619

Constrained Upper Confidence Reinforcement Learning

Liyuan Zheng, Lillian Ratliff ; PMLR 120:620-629

Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable Dynamical Systems

Muhammad Asif Rana, Anqi Li, Dieter Fox, Byron Boots, Fabio Ramos, Nathan Ratliff ; PMLR 120:630-639

Planning from Images with Deep Latent Gaussian Process Dynamics

Nathanael Bosch, Jan Achterhold, Laura Leal-Taixé, Jörg Stückler ; PMLR 120:640-650

A First Principles Approach for Data-Efficient System Identification of Spring-Rod Systems via Differentiable Physics Engines

Kun Wang, Mridul Aanjaneya, Kostas Bekris ; PMLR 120:651-665

Model-Based Reinforcement Learning with Value-Targeted Regression

Zeyu Jia, Lin Yang, Csaba Szepesvari, Mengdi Wang ; PMLR 120:666-686

Localized Learning of Robust Controllers for Networked Systems with Dynamic Topology

Soojean Han ; PMLR 120:687-696

NeuralExplorer: State Space Exploration of Closed Loop Control Systems Using Neural Networks

Manish Goyal, Parasara Sridhar Duggirala ; PMLR 120:697-697

Toward fusion plasma scenario planning for NSTX-U using machine-learning-accelerated models

Mark Boyer ; PMLR 120:698-707

Learning for Safety-Critical Control with Control Barrier Functions

Andrew Taylor, Andrew Singletary, Yisong Yue, Aaron Ames ; PMLR 120:708-717

Learning Dynamical Systems with Side Information

Amir Ali Ahmadi, Bachir El Khadir ; PMLR 120:718-727

Feynman-Kac Neural Network Architectures for Stochastic Control Using Second-Order FBSDE Theory

Marcus Pereira, Ziyi Wang, Tianrong Chen, Emily Reed, Evangelos Theodorou ; PMLR 120:728-738

Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time

Jeongho Kim, Insoon Yang ; PMLR 120:739-748

Identifying Mechanical Models of Unknown Objects with Differentiable Physics Simulations

Changkyu Song, Abdeslam Boularias ; PMLR 120:749-760

Objective Mismatch in Model-based Reinforcement Learning

Nathan Lambert, Brandon Amos, Omry Yadan, Roberto Calandra ; PMLR 120:761-770

Tools for Data-driven Modeling of Within-Hand Manipulation with Underactuated Adaptive Hands

Avishai Sintov, Andrew Kimmel, Bowen Wen, Abdeslam Boularias, Kostas Bekris ; PMLR 120:771-780

Probabilistic Safety Constraints for Learned High Relative Degree System Dynamics

Mohammad Javad Khojasteh, Vikas Dhiman, Massimo Franceschetti, Nikolay Atanasov ; PMLR 120:781-792

Lyceum: An efficient and scalable ecosystem for robot learning

Colin Summers, Kendall Lowrey, Aravind Rajeswaran, Siddhartha Srinivasa, Emanuel Todorov ; PMLR 120:793-803

Encoding Physical Constraints in Differentiable Newton-Euler Algorithm

Giovanni Sutanto, Austin Wang, Yixin Lin, Mustafa Mukadam, Gaurav Sukhatme, Akshara Rai, Franziska Meier ; PMLR 120:804-813

Distributed Reinforcement Learning for Decentralized Linear Quadratic Control: A Derivative-Free Policy Optimization Approach

Yingying Li, Yujie Tang, Runyu Zhang, Na Li ; PMLR 120:814-814

Learning to Plan via Deep Optimistic Value Exploration

Tim Seyde, Wilko Schwarting, Sertac Karaman, Daniela Rus ; PMLR 120:815-825

L1-GP: L1 Adaptive Control with Bayesian Learning

Aditya Gahlawat, Pan Zhao, Andrew Patterson, Naira Hovakimyan, Evangelos Theodorou ; PMLR 120:826-837

Data-Driven Distributed Predictive Control via Network Optimization

Ahmed Allibhoy, Jorge Cortes ; PMLR 120:838-839

Information Theoretic Model Predictive Q-Learning

Mohak Bhardwaj, Ankur Handa, Dieter Fox, Byron Boots ; PMLR 120:840-850

Learning nonlinear dynamical systems from a single trajectory

Dylan Foster, Tuhin Sarkar, Alexander Rakhlin ; PMLR 120:851-861

A Duality Approach for Regret Minimization in Average-Reward Ergodic Markov Decision Processes

Hao Gong, Mengdi Wang ; PMLR 120:862-883

Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees

Jacob H. Seidman, Mahyar Fazlyab, Victor M. Preciado, George J. Pappas ; PMLR 120:884-893

Dual Stochastic MPC for Systems with Parametric and Structural Uncertainty

Elena Arcari, Lukas Hewing, Max Schlichting, Melanie Zeilinger ; PMLR 120:894-903

Hierarchical Decomposition of Nonlinear Dynamics and Control for System Identification and Policy Distillation

Hany Abdulsamad, Jan Peters ; PMLR 120:904-914

A Kernel Mean Embedding Approach to Reducing Conservativeness in Stochastic Programming and Control

Jia-Jie Zhu, Bernhard Schoelkopf, Moritz Diehl ; PMLR 120:915-923

Efficient Large-Scale Gaussian Process Bandits by Believing only Informative Actions

Amrit Singh Bedi, Dheeraj Peddireddy, Vaneet Aggarwal, Alec Koppel ; PMLR 120:924-934

Plan2Vec: Unsupervised Representation Learning by Latent Plans

Ge Yang, Amy Zhang, Ari Morcos, Joelle Pineau, Pieter Abbeel, Roberto Calandra ; PMLR 120:935-946

Policy Learning of MDPs with Mixed Continuous/Discrete Variables: A Case Study on Model-Free Control of Markovian Jump Systems

Joao Paulo Jansch-Porto, Bin Hu, Geir Dullerud ; PMLR 120:947-957

Improving Robustness via Risk Averse Distributional Reinforcement Learning

Rahul Singh, Qinsheng Zhang, Yongxin Chen ; PMLR 120:958-968

Keyframing the Future: Keyframe Discovery for Visual Prediction and Planning

Karl Pertsch, Oleh Rybkin, Jingyun Yang, Konstantinos Derpanis, Kostas Daniilidis, Joseph Lim, Andrew Jaegle ; PMLR 120:969-979

Safe non-smooth black-box optimization with application to policy search

Ilnura Usmanova, Andreas Krause, Maryam Kamgarpour ; PMLR 120:980-989

Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning

Fernando Castañeda, Mathias Wulfman, Ayush Agrawal, Tyler Westenbroek, Shankar Sastry, Claire Tomlin, Koushil Sreenath ; PMLR 120:990-999

Uncertain multi-agent MILPs: A data-driven decentralized solution with probabilistic feasibility guarantees

Alessandro Falsone, Federico Molinari, Maria Prandini ; PMLR 120:1000-1009

This site last compiled Thu, 06 Aug 2020 15:22:21 +0000
Copyright © PMLR 2020. All rights reserved.