Volume 242: 6th Annual Learning for Dynamics & Control Conference, 15-17 July 2024, University of Oxford, Oxford, UK

Editors: Alessandro Abate, Mark Cannon, Kostas Margellos, Antonis Papachristodoulou

Leveraging Hamilton-Jacobi PDEs with time-dependent Hamiltonians for continual scientific machine learning

Paula Chen, Tingwei Meng, Zongren Zou, Jérôme Darbon, George Em Karniadakis; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1-12

Data-efficient, explainable and safe box manipulation: Illustrating the advantages of physical priors in model-predictive control

Achkan Salehi, Stephane Doncieux; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:13-24

Gradient shaping for multi-constraint safe reinforcement learning

Yihang Yao, Zuxin Liu, Zhepeng Cen, Peide Huang, Tingnan Zhang, Wenhao Yu, Ding Zhao; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:25-39

Continual learning of multi-modal dynamics with external memory

Abdullah Akgül, Gozde Unal, Melih Kandemir; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:40-51

Learning to stabilize high-dimensional unknown systems using Lyapunov-guided exploration

Songyuan Zhang, Chuchu Fan; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:52-67

An investigation of time reversal symmetry in reinforcement learning

Brett Barkley, Amy Zhang, David Fridovich-Keil; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:68-79

HSVI-based online minimax strategies for partially observable stochastic games with neural perception mechanisms

Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:80-91

Real-time safe control of neural network dynamic models with sound approximation

Hanjiang Hu, Jianglin Lan, Changliu Liu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:92-103

Tracking object positions in reinforcement learning: A metric for keypoint detection

Emma Cramer, Jonas Reiher, Sebastian Trimpe; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:104-116

Linearised data-driven LSTM-based control of multi-input HVAC systems

Andreas Hinderyckx, Florence Guillaume; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:117-129

The behavioral toolbox

Ivan Markovsky; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:130-141

Learning “look-ahead” nonlocal traffic dynamics in a ring road

Chenguang Zhao, Huan Yu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:142-154

Safe dynamic pricing for nonstationary network resource allocation

Berkay Turan, Spencer Hutchinson, Mahnoosh Alizadeh; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:155-167

Safe online convex optimization with multi-point feedback

Spencer Hutchinson, Mahnoosh Alizadeh; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:168-180

Controlgym: Large-scale control environments for benchmarking reinforcement learning algorithms

Xiangyuan Zhang, Weichao Mao, Saviz Mowlavi, Mouhacine Benosman, Tamer Başar; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:181-196

On the convergence of adaptive first order methods: Proximal gradient and alternating minimization algorithms

Puya Latafat, Andreas Themelis, Panagiotis Patrinos; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:197-208

Strengthened stability analysis of discrete-time Lurie systems involving ReLU neural networks

Carl Richardson, Matthew Turner, Steve Gunn, Ross Drummond; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:209-221

Interpretable data-driven model predictive control of building energy systems using SHAP

Patrick Henkel, Tobias Kasperski, Phillip Stoffel, Dirk Müller; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:222-234

Physics-informed Neural Networks with Unknown Measurement Noise

Philipp Pilar, Niklas Wahlström; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:235-247

Adaptive online non-stochastic control

Naram Mhaisen, George Iosifidis; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:248-259

Global rewards in multi-agent deep reinforcement learning for autonomous mobility on demand systems

Heiko Hoppe, Tobias Enders, Quentin Cappart, Maximilian Schiffer; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:260-272

Soft convex quantization: revisiting Vector Quantization with convex optimization

Tanmay Gautam, Reid Pryzant, Ziyi Yang, Chenguang Zhu, Somayeh Sojoudi; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:273-285

Uncertainty quantification of set-membership estimation in control and perception: Revisiting the minimum enclosing ellipsoid

Yukai Tang, Jean-Bernard Lasserre, Heng Yang; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:286-298

Minimax dual control with finite-dimensional information state

Olle Kjellqvist; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:299-311

An efficient data-based off-policy Q-learning algorithm for optimal output feedback control of linear systems

Mohammad Alsalti, Victor G. Lopez, Matthias A. Müller; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:312-323

Adapting image-based RL policies via predicted rewards

Weiyao Wang, Xinyuan Fang, Gregory Hager; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:324-336

Piecewise regression via mixed-integer programming for MPC

Dieter Teichrib, Moritz Schulze Darup; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:337-348

Parameter-adaptive approximate MPC: Tuning neural-network controllers without retraining

Henrik Hose, Alexander Gräfe, Sebastian Trimpe; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:349-360

$\widetilde{O}(T^{-1})$ Convergence to (coarse) correlated equilibria in full-information general-sum Markov games

Weichao Mao, Haoran Qiu, Chen Wang, Hubertus Franke, Zbigniew Kalbarczyk, Tamer Başar; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:361-374

Inverse optimal control as an errors-in-variables problem

Rahel Rickenbach, Anna Scampicchio, Melanie N. Zeilinger; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:375-386

Learning soft constrained MPC value functions: Efficient MPC design and implementation providing stability and safety guarantees

Nicolas Chatzikiriakos, Kim Peter Wabersich, Felix Berkel, Patricia Pauli, Andrea Iannelli; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:387-398

MPC-inspired reinforcement learning for verifiable model-free control

Yiwen Lu, Zishuo Li, Yihan Zhou, Na Li, Yilin Mo; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:399-413

Real-world fluid directed rigid body control via deep reinforcement learning

Mohak Bhardwaj, Thomas Lampe, Michael Neunert, Francesco Romano, Abbas Abdolmaleki, Arunkumar Byravan, Markus Wulfmeier, Martin Riedmiller, Jonas Buchli; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:414-427

On the uniqueness of solution for the Bellman equation of LTL objectives

Zetong Xuan, Alper Bozkurt, Miroslav Pajic, Yu Wang; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:428-439

Decision boundary learning for safe vision-based navigation via Hamilton-Jacobi reachability analysis and support vector machine

Tara Toufighi, Minh Bui, Rakesh Shrestha, Mo Chen; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:440-452

Understanding the difficulty of solving Cauchy problems with PINNs

Tao Wang, Bo Zhao, Sicun Gao, Rose Yu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:453-465

Signatures meet dynamic programming: Generalizing Bellman equations for trajectory following

Motoya Ohnishi, Iretiayo Akinola, Jie Xu, Ajay Mandlekar, Fabio Ramos; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:466-479

Online decision making with history-average dependent costs

Vijeth Hebbar, Cedric Langbort; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:480-491

Learning-based rigid tube model predictive control

Yulong Gao, Shuhao Yan, Jian Zhou, Mark Cannon, Alessandro Abate, Karl Henrik Johansson; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:492-503

A data-driven Riccati equation

Anders Rantzer; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:504-513

Nonconvex scenario optimization for data-driven reachability

Elizabeth Dietrich, Alex Devonport, Murat Arcak; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:514-527

Uncertainty quantification and robustification of model-based controllers using conformal prediction

Kong Yao Chee, Thales C. Silva, M. Ani Hsieh, George J. Pappas; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:528-540

Learning for CasADi: Data-driven Models in Numerical Optimization

Tim Salzmann, Jon Arrizabalaga, Joel Andersson, Marco Pavone, Markus Ryll; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:541-553

Neural operators for boundary stabilization of stop-and-go traffic

Yihuai Zhang, Ruiguo Zhong, Huan Yu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:554-565

Submodular information selection for hypothesis testing with misclassification penalties

Jayanth Bhargav, Mahsa Ghasemi, Shreyas Sundaram; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:566-577

Learning and deploying robust locomotion policies with minimal dynamics randomization

Luigi Campanaro, Siddhant Gangapurwala, Wolfgang Merkt, Ioannis Havoutis; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:578-590

Learning flow functions of spiking systems

Miguel Aguiar, Amritam Das, Karl H. Johansson; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:591-602

Safe learning in nonlinear model predictive control

Johannes Buerger, Mark Cannon, Martin Doff-Sotta; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:603-614

Efficient skill acquisition for insertion tasks in obstructed environments

Jun Yamada, Jack Collins, Ingmar Posner; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:615-627

Balanced reward-inspired reinforcement learning for autonomous vehicle racing

Zhen Tian, Dezong Zhao, Zhihao Lin, David Flynn, Wenjing Zhao, Daxin Tian; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:628-640

An invariant information geometric method for high-dimensional online optimization

Zhengfei Zhang, Yunyue Wei, Yanan Sui; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:641-653

On the nonsmooth geometry and neural approximation of the optimal value function of infinite-horizon pendulum swing-up

Haoyu Han, Heng Yang; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:654-666

Data-driven robust covariance control for uncertain linear systems

Joshua Pilipovsky, Panagiotis Tsiotras; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:667-678

Combining model-based controller and ML advice via convex reparameterization

Junxuan Shen, Adam Wierman, Guannan Qu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:679-693

Pointwise-in-time diagnostics for reinforcement learning during training and runtime

Noel Brindise, Andres Posada Moreno, Cedric Langbort, Sebastian Trimpe; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:694-706

Expert with clustering: Hierarchical online preference learning framework

Tianyue Zhou, Jung-Hoon Cho, Babak Rahimi Ardabili, Hamed Tabkhi, Cathy Wu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:707-718

Verification of neural reachable tubes via scenario optimization and conformal prediction

Albert Lin, Somil Bansal; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:719-731

Random features approximation for control-affine systems

Kimia Kazemian, Yahya Sattar, Sarah Dean; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:732-744

Hacking predictors means hacking cars: Using sensitivity analysis to identify trajectory prediction vulnerabilities for autonomous driving security

Marsalis Gibson, David Babazadeh, Claire Tomlin, Shankar Sastry; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:745-757

Rademacher complexity of neural ODEs via Chen-Fliess series

Joshua Hanson, Maxim Raginsky; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:758-769

Robust cooperative multi-agent reinforcement learning: A mean-field type game perspective

Muhammad Aneeq Uz Zaman, Mathieu Laurière, Alec Koppel, Tamer Başar; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:770-783

Learning $\epsilon$-Nash equilibrium stationary policies in stochastic games with unknown independent chains using online mirror descent

Tiancheng Qin, S. Rasoul Etesami; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:784-795

Uncertainty informed optimal resource allocation with Gaussian process based Bayesian inference

Samarth Gupta, Saurabh Amin; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:796-812

Improving sample efficiency of high dimensional Bayesian optimization with MCMC

Zeji Yi, Yunyue Wei, Chu Xin Cheng, Kaibo He, Yanan Sui; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:813-824

SpOiLer: Offline reinforcement learning using scaled penalties

Padmanaba Srinivasan, William J. Knottenbelt; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:825-838

Towards safe multi-task Bayesian optimization

Jannis Lübsen, Christian Hespe, Annika Eichler; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:839-851

Mixing classifiers to alleviate the accuracy-robustness trade-off

Yatong Bai, Brendon G. Anderson, Somayeh Sojoudi; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:852-865

Design of observer-based finite-time control for inductively coupled power transfer system with random gain fluctuations

Satheesh Thangavel, Sakthivel Rathinasamy; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:866-875

Learning robust policies for uncertain parametric Markov decision processes

Luke Rickard, Alessandro Abate, Kostas Margellos; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:876-889

Conditions for parameter unidentifiability of linear ARX systems for enhancing security

Xiangyu Mao, Jianping He, Chengpu Yu, Chongrong Fang; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:890-901

Meta-learning linear quadratic regulators: a policy gradient MAML approach for model-free LQR

Leonardo Felipe Toso, Donglin Zhan, James Anderson, Han Wang; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:902-915

A large deviations perspective on policy gradient algorithms

Wouter Jongeneel, Daniel Kuhn, Mengmeng Li; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:916-928

Deep model-free KKL observer: A switching approach

Johan Peralez, Madiha Nadri; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:929-940

In vivo learning-based control of microbial populations density in bioreactors

Sara Maria Brancato, Davide Salzano, Francesco De Lellis, Davide Fiore, Giovanni Russo, Mario di Bernardo; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:941-953

Bounded robustness in reinforcement learning via lexicographic objectives

Daniel Jarne Ornia, Licio Romao, Lewis Hammond, Manuel Mazo Jr, Alessandro Abate; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:954-967

System-level safety guard: Safe tracking control through uncertain neural network dynamics models

Xiao Li, Yutong Li, Anouck Girard, Ilya Kolmanovsky; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:968-979

Nonasymptotic regret analysis of adaptive linear quadratic control with model misspecification

Bruce Lee, Anders Rantzer, Nikolai Matni; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:980-992

Error bounds, PL condition, and quadratic growth for weakly convex functions, and linear convergences of proximal point methods

Feng-Yi Liao, Lijun Ding, Yang Zheng; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:993-1005

Parameterized fast and safe tracking (FaSTrack) using DeepReach

Hyun Joe Jeong, Zheng Gong, Somil Bansal, Sylvia Herbert; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1006-1017

Probabilistic ODE solvers for integration error-aware numerical optimal control

Amon Lahr, Filip Tronarp, Nathanael Bosch, Jonathan Schmidt, Philipp Hennig, Melanie N. Zeilinger; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1018-1032

Event-triggered safe Bayesian optimization on quadcopters

Antonia Holzapfel, Paul Brunzema, Sebastian Trimpe; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1033-1045

Finite-time complexity of incremental policy gradient methods for solving multi-task reinforcement learning

Yitao Bai, Thinh Doan; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1046-1057

Convergence guarantees for adaptive model predictive control with kinky inference

Riccardo Zuliani, Raffaele Soloperto, John Lygeros; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1058-1070

Convex approximations for a bi-level formulation of data-enabled predictive control

Xu Shang, Yang Zheng; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1071-1082

PDE control gym: A benchmark for data-driven boundary control of partial differential equations

Luke Bhan, Yuexin Bian, Miroslav Krstic, Yuanyuan Shi; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1083-1095

Towards bio-inspired control of aerial vehicle: Distributed aerodynamic parameters for state prediction

Yikang Wang, Adolfo Perrusquia, Dmitry Ignatyev; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1096-1106

Residual learning and context encoding for adaptive offline-to-online reinforcement learning

Mohammadreza Nakhaei, Aidan Scannell, Joni Pajarinen; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1107-1121

CoVO-MPC: Theoretical analysis of sampling-based MPC and optimal covariance design

Zeji Yi, Chaoyi Pan, Guanqi He, Guannan Qu, Guanya Shi; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1122-1135

Stable modular control via contraction theory for reinforcement learning

Bing Song, Jean-Jacques Slotine, Quang-Cuong Pham; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1136-1148

Data-driven bifurcation analysis via learning of homeomorphism

Wentao Tang; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1149-1160

A learning-based framework to adapt legged robots on-the-fly to unexpected disturbances

Nolan Fey, He Li, Nicholas Adrian, Patrick Wensing, Michael Lemmon; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1161-1173

On task-relevant loss functions in meta-reinforcement learning

Jaeuk Shin, Giho Kim, Howon Lee, Joonho Han, Insoon Yang; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1174-1186

State-wise safe reinforcement learning with pixel observations

Sinong Zhan, Yixuan Wang, Qingyuan Wu, Ruochen Jiao, Chao Huang, Qi Zhu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1187-1201

Multi-agent assignment via state augmented reinforcement learning

Leopoldo Agorio, Sean Van Alen, Miguel Calvo-Fullana, Santiago Paternain, Juan Andrés Bazerque; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1202-1213

PlanNetX: Learning an efficient neural network planner from MPC for longitudinal control

Jasper Hoffmann, Diego Fernandez Clausen, Julien Brosseit, Julian Bernhard, Klemens Esterle, Moritz Werling, Michael Karg, Joschka Bödecker; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1214-1227

Mapping back and forth between model predictive control and neural networks

Ross Drummond, Pablo Baldivieso, Giorgio Valmorbida; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1228-1240

A multi-modal distributed learning algorithm in reproducing kernel Hilbert spaces

Aneesh Raghavan, Karl Henrik Johansson; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1241-1252

Towards model-free LQR control over rate-limited channels

Aritra Mitra, Lintao Ye, Vijay Gupta; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1253-1265

Learning true objectives: Linear algebraic characterizations of identifiability in inverse reinforcement learning

Mohamad Louai Shehab, Antoine Aspeel, Nikos Arechiga, Andrew Best, Necmiye Ozay; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1266-1277

Safety filters for black-box dynamical systems by learning discriminating hyperplanes

Will Lavanakul, Jason Choi, Koushil Sreenath, Claire Tomlin; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1278-1291

Lagrangian inspired polynomial estimator for black-box learning and control of underactuated systems

Giulio Giacomuzzo, Riccardo Cescon, Diego Romeres, Ruggero Carli, Alberto Dalla Libera; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1292-1304

From raw data to safety: Reducing conservatism by set expansion

Mohammad Bajelani, Klaske Van Heusden; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1305-1317

Dynamics harmonic analysis of robotic systems: Application in data-driven Koopman modelling

Daniel Ordoñez-Apraez, Vladimir Kostic, Giulio Turrisi, Pietro Novelli, Carlos Mastalli, Claudio Semini, Massimilano Pontil; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1318-1329

Recursively feasible shrinking-horizon MPC in dynamic environments with conformal prediction guarantees

Charis Stamouli, Lars Lindemann, George Pappas; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1330-1342

Multi-modal conformal prediction regions by optimizing convex shape templates

Renukanandan Tumu, Matthew Cleaveland, Rahul Mangharam, George Pappas, Lars Lindemann; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1343-1356

Learning locally interacting discrete dynamical systems: Towards data-efficient and scalable prediction

Beomseok Kang, Harshit Kumar, Minah Lee, Biswadeep Chakraborty, Saibal Mukhopadhyay; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1357-1369

How safe am I given what I see? Calibrated prediction of safety chances for image-controlled autonomy

Zhenjiang Mao, Carson Sobolewski, Ivan Ruchkin; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1370-1387

Convex neural network synthesis for robustness in the 1-norm

Ross Drummond, Chris Guiver, Matthew Turner; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1388-1399

Increasing information for model predictive control with semi-Markov decision processes

Rémy Hosseinkhan Boucher, Stella Douka, Onofrio Semeraro, Lionel Mathelin; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1400-1414

Physically consistent modeling & identification of nonlinear friction with dissipative Gaussian processes

Rui Dai, Giulio Evangelisti, Sandra Hirche; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1415-1426

STEMFold: Stochastic temporal manifold for multi-agent interactions in the presence of hidden agents

Hemant Kumawat, Biswadeep Chakraborty, Saibal Mukhopadhyay; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1427-1439

Distributed on-the-fly control of multi-agent systems with unknown dynamics: Using limited data to obtain near-optimal control

Shayan Meshkat Alsadat, Nasim Baharisangari, Zhe Xu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1440-1451

CACTO-SL: Using Sobolev learning to improve continuous actor-critic with trajectory optimization

Elisa Alboni, Gianluigi Grandesso, Gastone Pietro Rosati Papini, Justin Carpentier, Andrea Del Prete; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1452-1463

Multi-agent coverage control with transient behavior consideration

Runyu Zhang, Haitong Ma, Na Li; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1464-1476

Data driven verification of positive invariant sets for discrete, nonlinear systems

Amy K. Strong, Leila J. Bridgeman; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1477-1488

Adaptive teaching in heterogeneous agents: Balancing surprise in sparse reward scenarios

Emma Clark, Kanghyun Ryu, Negar Mehr; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1489-1501

Can a transformer represent a Kalman filter?

Gautam Goel, Peter Bartlett; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1502-1512

Data-driven simulator for mechanical circulatory support with domain adversarial neural process

Sophia Sun, Wenyuan Chen, Zihao Zhou, Sonia Fereidooni, Elise Jortberg, Rose Yu; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1513-1525

DC4L: Distribution shift recovery via data-driven control for deep learning models

Vivian Lin, Kuk Jin Jang, Souradeep Dutta, Michele Caprio, Oleg Sokolsky, Insup Lee; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1526-1538

QCQP-Net: Reliably learning feasible alternating current optimal power flow solutions under constraints

Sihan Zeng, Youngdae Kim, Yuxuan Ren, Kibaek Kim; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1539-1551

A deep learning approach for distributed aggregative optimization with users’ feedback

Riccardo Brumali, Guido Carnevale, Giuseppe Notarstefano; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1552-1564

A framework for evaluating human driver models using neuroimaging

Christopher Strong, Kaylene Stocking, Jingqi Li, Tianjiao Zhang, Jack Gallant, Claire Tomlin; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1565-1578

Deep Hankel matrices with random elements

Nathan Lawrence, Philip Loewen, Shuyuan Wang, Michael Forbes, Bhushan Gopaluni; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1579-1591

Robust exploration with adversary via Langevin Monte Carlo

Hao-Lun Hsu, Miroslav Pajic; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1592-1605

Generalized constraint for probabilistic safe reinforcement learning

Weiqin Chen, Santiago Paternain; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1606-1618

Neural processes with event triggers for fast adaptation to changes

Paul Brunzema, Paul Kruse, Sebastian Trimpe; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1619-1632

Data-driven strategy synthesis for stochastic systems with unknown nonlinear disturbances

Ibon Gracia, Dimitris Boskos, Luca Laurenti, Morteza Lahijanian; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1633-1645

Growing Q-networks: Solving continuous control tasks with adaptive control resolution

Tim Seyde, Peter Werner, Wilko Schwarting, Markus Wulfmeier, Daniela Rus; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1646-1661

Hamiltonian GAN

Christine Allen-Blanchette; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1662-1674

Do no harm: A counterfactual approach to safe reinforcement learning

Sean Vaskov, Wilko Schwarting, Chris Baker; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1675-1687

Wasserstein distributionally robust regret-optimal control over infinite-horizon

Taylan Kargin, Joudi Hajar, Vikrant Malik, Babak Hassibi; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1688-1701

Probably approximately correct stability of allocations in uncertain coalitional games with private sampling

George Pantazis, Filiberto Fele, Filippo Fabiani, Sergio Grammatico, Kostas Margellos; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1702-1714

Reinforcement learning-driven parametric curve fitting for snake robot gait design

Jack Naish, Jacob Rodriguez, Jenny Zhang, Bryson Jones, Guglielmo Daddi, Andrew Orekhov, Rob Royce, Michael Paton, Howie Choset, Masahiro Ono, Rohan Thakker; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1715-1727

Pontryagin neural operator for solving general-sum differential games with parametric state constraints

Lei Zhang, Mukesh Ghimire, Zhe Xu, Wenlong Zhang, Yi Ren; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1728-1740

Adaptive neural network based control approach for building energy control under changing environmental conditions

Lilli Frison, Simon Gölzhäuser; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1741-1752

Physics-constrained learning for PDE systems with uncertainty quantified port-Hamiltonian models

Kaiyuan Tan, Peilun Li, Thomas Beckers; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1753-1764

Proto-MPC: An encoder-prototype-decoder approach for quadrotor control in challenging winds

Yuliang Gu, Sheng Cheng, Naira Hovakimyan; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1765-1776

Efficient imitation learning with conservative world models

Victor Kolev, Rafael Rafailov, Kyle Hatch, Jiajun Wu, Chelsea Finn; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1777-1790

Restless bandits with rewards generated by a linear Gaussian dynamical system

Jonathan Gornet, Bruno Sinopoli; Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1791-1802