Volume 100: Conference on Robot Learning, 30 October–1 November 2019
Editors: Leslie Pack Kaelbling, Danica Kragic, Komei Sugiura
Data Efficient Reinforcement Learning for Legged Robots

Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani ; PMLR 100:1-10

To Follow or not to Follow: Selective Imitation Learning from Observations

Youngwoon Lee, Edward S. Hu, Zhengyu Yang, Joseph J. Lim ; PMLR 100:11-23

On-Policy Robot Imitation Learning from a Converging Supervisor

Ashwin Balakrishna, Brijen Thananjeyan, Jonathan Lee, Felix Li, Arsh Zahed, Joseph E. Gonzalez, Ken Goldberg ; PMLR 100:24-41

Dynamics Learning with Cascaded Variational Inference for Multi-Step Manipulation

Kuan Fang, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei ; PMLR 100:42-52

S4G: Amodal Single-view Single-Shot SE(3) Grasp Detection in Cluttered Scenes

Yuzhe Qin, Rui Chen, Hao Zhu, Meng Song, Jing Xu, Hao Su ; PMLR 100:53-65

Learning by Cheating

Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl ; PMLR 100:66-75

Multimodal Attention Branch Network for Perspective-Free Sentence Generation

Aly Magassouba, Komei Sugiura, Hisashi Kawai ; PMLR 100:76-85

MultiPath: Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction

Yuning Chai, Benjamin Sapp, Mayank Bansal, Dragomir Anguelov ; PMLR 100:86-99

Object-centric Forward Modeling for Model Predictive Control

Yufei Ye, Dhiraj Gandhi, Abhinav Gupta, Shubham Tulsiani ; PMLR 100:100-109

Multi-Agent Manipulation via Locomotion using Hierarchical Sim2Real

Ofir Nachum, Michael Ahn, Hugo Ponte, Shixiang (Shane) Gu, Vikash Kumar ; PMLR 100:110-121

Combining Deep Learning and Verification for Precise Object Instance Detection

Siddharth Ancha, Junyu Nan, David Held ; PMLR 100:122-141

MAT: Multi-Fingered Adaptive Tactile Grasping via Deep Reinforcement Learning

Bohan Wu, Iretiayo Akinola, Jacob Varley, Peter K. Allen ; PMLR 100:142-161

Curious iLQR: Resolving Uncertainty in Model-based RL

Sarah Bechtle, Yixin Lin, Akshara Rai, Ludovic Righetti, Franziska Meier ; PMLR 100:162-171

Hybrid system identification using switching density networks

Michael Burke, Yordan Hristov, Subramanian Ramamoorthy ; PMLR 100:172-181

Regularizing Model-Based Planning with Energy-Based Models

Rinu Boney, Juho Kannala, Alexander Ilin ; PMLR 100:182-191

Semi-Supervised Learning of Decision-Making Models for Human-Robot Collaboration

Vaibhav V. Unhelkar, Shen Li, Julie A. Shah ; PMLR 100:192-203

Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping

Mustafa Mukadam, Ching-An Cheng, Dieter Fox, Byron Boots, Nathan Ratliff ; PMLR 100:204-219

Perceptual Attention-based Predictive Control

Keuntaek Lee, Gabriel Nakajima An, Viacheslav Zakharov, Evangelos A. Theodorou ; PMLR 100:220-232

Bayesian Optimization Meets Riemannian Manifolds in Robot Learning

Noémie Jaquier, Leonel Rozo, Sylvain Calinon, Mathias Bürger ; PMLR 100:233-246

Learning from demonstration with model-based Gaussian process

Noémie Jaquier, David Ginsbourger, Sylvain Calinon ; PMLR 100:247-257

Variational Inference MPC for Bayesian Model-based Reinforcement Learning

Masashi Okada, Tadahiro Taniguchi ; PMLR 100:258-272

Optimizing Sequences of Probabilistic Manipulation Skills Learned from Demonstration

Lukas Schwenkel, Meng Guo, Mathias Bürger ; PMLR 100:273-282

Predictive Safety Network for Resource-constrained Multi-agent Systems

Meng Guo, Mathias Bürger ; PMLR 100:283-292

A correct formulation for the Orientation Dynamic Movement Primitives for robot control in the Cartesian space

Leonidas Koutras, Zoe Doulgeri ; PMLR 100:293-302

Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information

Dan Barnes, Rob Weston, Ingmar Posner ; PMLR 100:303-316

Learning Locomotion Skills for Cassie: Iterative Design and Sim-to-Real

Zhaoming Xie, Patrick Clary, Jeremy Dao, Pedro Morais, Jonathan Hurst, Michiel van de Panne ; PMLR 100:317-329

Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations

Daniel S. Brown, Wonjoon Goo, Scott Niekum ; PMLR 100:330-359

Mutual-Information Regularization in Markov Decision Processes and Actor-Critic Learning

Felix Leibfried, Jordi Grau-Moya ; PMLR 100:360-373

Model-Based Planning with Energy-Based Models

Yilun Du, Toru Lin, Igor Mordatch ; PMLR 100:374-383

Identifying Unknown Instances for Autonomous Driving

Kelvin Wong, Shenlong Wang, Mengye Ren, Ming Liang, Raquel Urtasun ; PMLR 100:384-393

Vision-and-Dialog Navigation

Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer ; PMLR 100:394-406

Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction

Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, Raquel Urtasun ; PMLR 100:407-419

Combining Optimal Control and Learning for Visual Navigation in Novel Environments

Somil Bansal, Varun Tolani, Saurabh Gupta, Jitendra Malik, Claire Tomlin ; PMLR 100:420-429

Leveraging exploration in off-policy algorithms via normalizing flows

Bogdan Mazoure, Thang Doan, Audrey Durand, Joelle Pineau, R Devon Hjelm ; PMLR 100:430-444

TuneNet: One-Shot Residual Tuning for System Identification and Sim-to-Real Robot Task Transfer

Adam Allevato, Elaine Schaertl Short, Mitch Pryor, Andrea Thomaz ; PMLR 100:445-455

Bayesian Optimization in Variational Latent Spaces with Dynamic Compression

Rika Antonova, Akshara Rai, Tianyu Li, Danica Kragic ; PMLR 100:456-465

A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots

Nicolai A. Lynnerup, Laura Nolling, Rasmus Hasle, John Hallam ; PMLR 100:466-489

Learning to Manipulate Object Collections Using Grounded State Representations

Matthew Wilson, Tucker Hermans ; PMLR 100:490-502

Robust Semi-Supervised Monocular Depth Estimation with Reprojected Distances

Vitor Guizilini, Jie Li, Rares Ambrus, Sudeep Pillai, Adrien Gaidon ; PMLR 100:503-512

Self-Paced Contextual Reinforcement Learning

Pascal Klink, Hany Abdulsamad, Boris Belousov, Jan Peters ; PMLR 100:513-529

Contextual Imagined Goals for Self-Supervised Robotic Learning

Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, Sergey Levine ; PMLR 100:530-539

Conditional Driving from Natural Language Instructions

Junha Roh, Chris Paxton, Andrzej Pronobis, Ali Farhadi, Dieter Fox ; PMLR 100:540-551

Adversarial Active Exploration for Inverse Dynamics Model Learning

Zhang-Wei Hong, Tsu-Jui Fu, Tzu-Yun Shann, Chun-Yi Lee ; PMLR 100:552-565

Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models

Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Siegel, Nicolas Heess, Martin Riedmiller ; PMLR 100:566-589

PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning

Iou-Jen Liu, Raymond A. Yeh, Alexander G. Schwing ; PMLR 100:590-602

HRL4IN: Hierarchical Reinforcement Learning for Interactive Navigation with Mobile Manipulators

Chengshu Li, Fei Xia, Roberto Martín-Martín, Silvio Savarese ; PMLR 100:603-616

Learning Navigation Subroutines from Egocentric Videos

Ashish Kumar, Saurabh Gupta, Jitendra Malik ; PMLR 100:617-626

A Learnable Safety Measure

Steve Heim, Alexander Rohr, Sebastian Trimpe, Alexander Badri-Spröwitz ; PMLR 100:627-639

HJB Optimal Feedback Control with Deep Differential Value Functions and Action Constraints

Michael Lutter, Boris Belousov, Kim Listmann, Debora Clever, Jan Peters ; PMLR 100:640-650

Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light

Eunah Jung, Nan Yang, Daniel Cremers ; PMLR 100:651-660

Connectivity Guaranteed Multi-robot Navigation via Deep Reinforcement Learning

Juntong Lin, Xuyun Yang, Peiwei Zheng, Hui Cheng ; PMLR 100:661-670

Learning Decentralized Controllers for Robot Swarms with Graph Neural Networks

Ekaterina Tolstaya, Fernando Gama, James Paulos, George Pappas, Vijay Kumar, Alejandro Ribeiro ; PMLR 100:671-682

Provably Robust Blackbox Optimization for Reinforcement Learning

Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Deepali Jain, Yuxiang Yang, Atil Iscen, Jasmine Hsu, Vikas Sindhwani ; PMLR 100:683-696

Stochastic Optimal Control as Approximate Input Inference

Joe Watson, Hany Abdulsamad, Jan Peters ; PMLR 100:697-716

AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers

Andrey Kurenkov, Ajay Mandlekar, Roberto Martin-Martin, Silvio Savarese, Animesh Garg ; PMLR 100:717-734

Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics

Michael Neunert, Abbas Abdolmaleki, Markus Wulfmeier, Thomas Lampe, Tobias Springenberg, Roland Hafner, Francesco Romano, Jonas Buchli, Nicolas Heess, Martin Riedmiller ; PMLR 100:735-751

Learning from My Partner’s Actions: Roles in Decentralized Robot Teams

Dylan P. Losey, Mengxi Li, Jeannette Bohg, Dorsa Sadigh ; PMLR 100:752-765

Energy-efficient Path Planning for Ground Robots by Combining Air and Ground Measurements

Minghan Wei, Volkan Isler ; PMLR 100:766-775

Multi-Agent Reinforcement Learning with Multi-Step Generative Models

Orr Krupnik, Igor Mordatch, Aviv Tamar ; PMLR 100:776-790

Learning to Navigate Using Mid-Level Visual Priors

Alexander Sax, Jeffrey O. Zhang, Bradley Emi, Amir Zamir, Silvio Savarese, Leonidas Guibas, Jitendra Malik ; PMLR 100:791-812

Learning Compact Models for Planning with Exogenous Processes

Rohan Chitnis, Tomás Lozano-Pérez ; PMLR 100:813-822

Graph Policy Gradients for Large Scale Robot Control

Arbaaz Khan, Ekaterina Tolstaya, Alejandro Ribeiro, Vijay Kumar ; PMLR 100:823-834

Teacher algorithms for curriculum learning of Deep RL in continuously parameterized environments

Rémy Portelas, Cédric Colas, Katja Hofmann, Pierre-Yves Oudeyer ; PMLR 100:835-853

Data-efficient Co-Adaptation of Morphology and Behaviour with Deep Reinforcement Learning

Kevin Sebastian Luck, Heni Ben Amor, Roberto Calandra ; PMLR 100:854-869

Disentangled Relational Representations for Explaining and Learning from Demonstration

Yordan Hristov, Daniel Angelov, Michael Burke, Alex Lascarides, Subramanian Ramamoorthy ; PMLR 100:870-884

RoboNet: Large-Scale Multi-Robot Learning

Sudeep Dasari, Frederik Ebert, Stephen Tian, Suraj Nair, Bernadette Bucher, Karl Schmeckpeper, Siddharth Singh, Sergey Levine, Chelsea Finn ; PMLR 100:885-897

Counter-example Guided Learning of Bounds on Environment Behavior

Yuxiao Chen, Sumanth Dathathri, Tung Phan-Minh, Richard M. Murray ; PMLR 100:898-909

MAME: Model-Agnostic Meta-Exploration

Swaminathan Gurumurthy, Sumit Kumar, Katia Sycara ; PMLR 100:910-922

End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds

Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Tom Ouyang, James Guo, Jiquan Ngiam, Vijay Vasudevan ; PMLR 100:923-932

Task-Conditioned Variational Autoencoders for Learning Movement Primitives

Michael Noseworthy, Rohan Paul, Subhro Roy, Daehyung Park, Nicholas Roy ; PMLR 100:933-944

Quasi-Newton Trust Region Policy Optimization

Devesh K. Jha, Arvind U. Raghunathan, Diego Romeres ; PMLR 100:945-954

Learning value functions with relational state representations for guiding task-and-motion planning

Beomjoon Kim, Luke Shimanuki ; PMLR 100:955-968

Locally Weighted Regression Pseudo-Rehearsal for Adaptive Model Predictive Control

Grady R. Williams, Brian Goldfain, Keuntaek Lee, Jason Gibson, James M. Rehg, Evangelos A. Theodorou ; PMLR 100:969-978

Graph-Structured Visual Imitation

Maximilian Sieb, Zhou Xian, Audrey Huang, Oliver Kroemer, Katerina Fragkiadaki ; PMLR 100:979-989

Deep Value Model Predictive Control

David Hoeller, Farbod Farshidian, Marco Hutter ; PMLR 100:990-1004

Inferring Task Goals and Constraints using Bayesian Nonparametric Inverse Reinforcement Learning

Daehyung Park, Michael Noseworthy, Rohan Paul, Subhro Roy, Nicholas Roy ; PMLR 100:1005-1014

Experience-Embedded Visual Foresight

Lin Yen-Chen, Maria Bauza, Phillip Isola ; PMLR 100:1015-1024

Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning

Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, Karol Hausman ; PMLR 100:1025-1037

Nonverbal Robot Feedback for Human Teachers

Sandy H. Huang, Isabella Huang, Ravi Pandya, Anca D. Dragan ; PMLR 100:1038-1051

Two Stream Networks for Self-Supervised Ego-Motion Estimation

Rares Ambrus, Vitor Guizilini, Jie Li, Sudeep Pillai, Adrien Gaidon ; PMLR 100:1052-1061

Model-based Behavioral Cloning with Future Image Similarity Learning

Alan Wu, AJ Piergiovanni, Michael S. Ryoo ; PMLR 100:1062-1077

Worst Cases Policy Gradients

Yichuan Charlie Tang, Jian Zhang, Ruslan Salakhutdinov ; PMLR 100:1078-1093

Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning

Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, Sergey Levine ; PMLR 100:1094-1100

Deep Dynamics Models for Learning Dexterous Manipulation

Anusha Nagabandi, Kurt Konolige, Sergey Levine, Vikash Kumar ; PMLR 100:1101-1112

Learning Latent Plans from Play

Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet ; PMLR 100:1113-1132

Scene-level Pose Estimation for Multiple Instances of Densely Packed Objects

Chaitanya Mitash, Bowen Wen, Kostas Bekris, Abdeslam Boularias ; PMLR 100:1133-1145

Macro-Action-Based Deep Multi-Agent Reinforcement Learning

Yuchen Xiao, Joshua Hoffman, Christopher Amato ; PMLR 100:1146-1161

Active Domain Randomization

Bhairav Mehta, Manfred Diaz, Florian Golemo, Christopher J. Pal, Liam Paull ; PMLR 100:1162-1176

Asking Easy Questions: A User-Friendly Approach to Active Reward Learning

Erdem Bıyık, Malayandi Palan, Nicholas C. Landolfi, Dylan P. Losey, Dorsa Sadigh ; PMLR 100:1177-1190

Dynamic Experience Replay

Jieliang Luo, Hui Li ; PMLR 100:1191-1200

Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments

Siddharth Patki, Ethan Fahnestock, Thomas M. Howard, Matthew R. Walter ; PMLR 100:1201-1210

Learning Parametric Constraints in High Dimensions from Demonstrations

Glen Chou, Necmiye Ozay, Dmitry Berenson ; PMLR 100:1211-1230

Variational Optimization Based Reinforcement Learning for Infinite Dimensional Stochastic Systems

Ethan N. Evans, Marcus A. Periera, George I. Boutselis, Evangelos A. Theodorou ; PMLR 100:1231-1246

Understanding Teacher Gaze Patterns for Robot Learning

Akanksha Saran, Elaine Schaertl Short, Andrea Thomaz, Scott Niekum ; PMLR 100:1247-1258

A Divergence Minimization Perspective on Imitation Learning Methods

Seyed Kamyar Seyed Ghasemipour, Richard Zemel, Shixiang Gu ; PMLR 100:1259-1277

Receding Horizon Curiosity

Matthias Schultheis, Boris Belousov, Hany Abdulsamad, Jan Peters ; PMLR 100:1278-1288

Learning to Generalize Kinematic Models to Novel Objects

Ben Abbatematteo, Stefanie Tellex, George Konidaris ; PMLR 100:1289-1299

ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots

Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, Vikash Kumar ; PMLR 100:1300-1313

Navigation Agents for the Visually Impaired: A Sidewalk Simulator and Experiments

Martin Weiss, Simon Chamorro, Roger Girgis, Margaux Luck, Samira E. Kahou, Joseph P. Cohen, Derek Nowrouzezahrai, Doina Precup, Florian Golemo, Chris Pal ; PMLR 100:1314-1327

Certified Adversarial Robustness for Deep Reinforcement Learning

Björn Lütjens, Michael Everett, Jonathan P. How ; PMLR 100:1328-1337

Asynchronous Methods for Model-Based Reinforcement Learning

Yunzhi Zhang, Ignasi Clavera, Boren Tsai, Pieter Abbeel ; PMLR 100:1338-1347

PyRoboLearn: A Python Framework for Robot Learning Practitioners

Brian Delhaisse, Leonel Rozo, Darwin G. Caldwell ; PMLR 100:1348-1358

An Online Learning Procedure for Feedback Linearization Control without Torque Measurements

M. Capotondi, G. Turrisi, C. Gaz, V. Modugno, G. Oriolo, A. De Luca ; PMLR 100:1359-1368

The Best of Both Modes: Separately Leveraging RGB and Depth for Unseen Object Instance Segmentation

Christopher Xie, Yu Xiang, Arsalan Mousavian, Dieter Fox ; PMLR 100:1369-1378

Trajectory-wise Control Variates for Variance Reduction in Policy Gradient Methods

Ching-An Cheng, Xinyan Yan, Byron Boots ; PMLR 100:1379-1394

Towards Learning to Detect and Predict Contact Events on Vision-based Tactile Sensors

Yazhan Zhang, Weihao Yuan, Zicheng Kan, Michael Yu Wang ; PMLR 100:1395-1404

Kernel Trajectory Maps for Multi-Modal Probabilistic Motion Prediction

Weiming Zhi, Lionel Ott, Fabio Ramos ; PMLR 100:1405-1414

Learning to Map Natural Language Instructions to Physical Quadcopter Control using Simulated Flight

Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A. Knepper, Yoav Artzi ; PMLR 100:1415-1438

Entity Abstraction in Visual Model-Based Reinforcement Learning

Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua Tenenbaum, Sergey Levine ; PMLR 100:1439-1456

Learning Reactive Motion Policies in Multiple Task Spaces from Human Demonstrations

M. Asif Rana, Anqi Li, Harish Ravichandar, Mustafa Mukadam, Sonia Chernova, Dieter Fox, Byron Boots, Nathan Ratliff ; PMLR 100:1457-1468
Copyright © PMLR 2020. All rights reserved.