Volume 87: Conference on Robot Learning, 29-31 October 2018, Zürich, Switzerland

Editors: Aude Billard, Anca Dragan, Jan Peters, Jun Morimoto

Driving Policy Transfer via Modularity and Abstraction

Matthias Mueller, Alexey Dosovitskiy, Bernard Ghanem, Vladlen Koltun ; PMLR 87:1-15

Personalized Dynamics Models for Adaptive Assistive Navigation Systems

Eshed Ohn-Bar, Kris Kitani, Chieko Asakawa ; PMLR 87:16-39

Few-Shot Goal Inference for Visuomotor Learning and Planning

Annie Xie, Avi Singh, Sergey Levine, Chelsea Finn ; PMLR 87:40-52

Neural Modular Control for Embodied Question Answering

Abhishek Das, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra ; PMLR 87:53-62

Visual Curiosity: Learning to Ask Questions to Learn Visual Recognition

Jianwei Yang, Jiasen Lu, Stefan Lee, Dhruv Batra, Devi Parikh ; PMLR 87:63-80

Guided Feature Transformation (GFT): A Neural Language Grounding Module for Embodied Agents

Haonan Yu, Xiaochen Lian, Haichao Zhang, Wei Xu ; PMLR 87:81-98

Grasp2Vec: Learning Object Representations from Self-Supervised Grasping

Eric Jang, Coline Devin, Vincent Vanhoucke, Sergey Levine ; PMLR 87:99-112

Energy-Based Hindsight Experience Prioritization

Rui Zhao, Volker Tresp ; PMLR 87:113-122

Including Uncertainty when Learning from Human Corrections

Dylan P. Losey, Marcia K. O’Malley ; PMLR 87:123-132

Deep Drone Racing: Learning Agile Flight in Dynamic Environments

Elia Kaufmann, Antonio Loquercio, Rene Ranftl, Alexey Dosovitskiy, Vladlen Koltun, Davide Scaramuzza ; PMLR 87:133-145

HDNET: Exploiting HD Maps for 3D Object Detection

Bin Yang, Ming Liang, Raquel Urtasun ; PMLR 87:146-155

Motion Perception in Reinforcement Learning with Dynamic Objects

Artemij Amiranashvili, Alexey Dosovitskiy, Vladlen Koltun, Thomas Brox ; PMLR 87:156-168

Particle Filter Networks with Application to Visual Localization

Peter Karkus, David Hsu, Wee Sun Lee ; PMLR 87:169-178

Sparse Gaussian Process Temporal Difference Learning for Marine Robot Navigation

John Martin, Jinkun Wang, Brendan Englot ; PMLR 87:179-189

Fast 3D Modeling with Approximated Convolutional Kernels

Vitor Guizilini, Fabio Ramos ; PMLR 87:190-199

Unpaired Learning of Dense Visual Depth Estimators for Urban Environments

Vitor Guizilini, Fabio Ramos ; PMLR 87:200-212

Learning over Subgoals for Efficient Navigation of Structured, Unknown Environments

Gregory J. Stein, Christopher Bradley, Nicholas Roy ; PMLR 87:213-222

Inferring geometric constraints in human demonstrations

Guru Subramani, Michael Zinn, Michael Gleicher ; PMLR 87:223-236

Conditional Affordance Learning for Driving in Urban Environments

Axel Sauer, Nikolay Savinov, Andreas Geiger ; PMLR 87:237-252

Modular Vehicle Control for Transferring Semantic Information Between Weather Conditions Using GANs

Patrick Wenzel, Qadeer Khan, Daniel Cremers, Laura Leal-Taixe ; PMLR 87:253-269

GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning

Jacky Liang, Viktor Makoviychuk, Ankur Handa, Nuttapong Chentanez, Miles Macklin, Dieter Fox ; PMLR 87:270-282

Feature Learning for Scene Flow Estimation from LIDAR

Arash K. Ushani, Ryan M. Eustice ; PMLR 87:283-292

PAC-Bayes Control: Synthesizing Controllers that Provably Generalize to Novel Environments

Anirudha Majumdar, Maxwell Goldstein ; PMLR 87:293-305

Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects

Jonathan Tremblay, Thang To, Balakumar Sundaralingam, Yu Xiang, Dieter Fox, Stan Birchfield ; PMLR 87:306-316

SPNets: Differentiable Fluid Dynamics for Deep Neural Networks

Connor Schenck, Dieter Fox ; PMLR 87:317-335

A Data-Efficient Approach to Precise and Controlled Pushing

Maria Bauza, Francois R. Hogan, Alberto Rodriguez ; PMLR 87:336-345

Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal

Jake Bruce, Niko Sunderhauf, Piotr Mirowski, Raia Hadsell, Michael Milford ; PMLR 87:346-361

Risk-Aware Active Inverse Reinforcement Learning

Daniel S. Brown, Yuchen Cui, Scott Niekum ; PMLR 87:362-372

Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation

Peter R. Florence, Lucas Manuelli, Russ Tedrake ; PMLR 87:373-385

Bayesian RL for Goal-Only Rewards

Philippe Morere, Fabio Ramos ; PMLR 87:386-398

Benchmarks for reinforcement learning in mixed-autonomy traffic

Eugene Vinitsky, Aboudy Kreidieh, Luc Le Flem, Nishant Kheterpal, Kathy Jang, Cathy Wu, Fangyu Wu, Richard Liaw, Eric Liang, Alexandre M. Bayen ; PMLR 87:399-409

Intervention Aided Reinforcement Learning for Safe and Practical Policy Optimization in Navigation

Fan Wang, Bo Zhou, Ke Chen, Tingxiang Fan, Xi Zhang, Jiangyong Li, Hao Tian, Jia Pan ; PMLR 87:410-421

Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions

Ricson Cheng, Arpit Agarwal, Katerina Fragkiadaki ; PMLR 87:422-431

Adaptable replanning with compressed linear action models for learning from demonstrations

Clement Gehring, Leslie Pack Kaelbling, Tomas Lozano-Perez ; PMLR 87:432-442

Automorphing Kernels for Nonstationarity in Mapping Unstructured Environments

Ransalu Senanayake, Anthony Tompkins, Fabio Ramos ; PMLR 87:443-455

Leveraging Deep Visual Descriptors for Hierarchical Efficient Localization

Paul-Edouard Sarlin, Frederic Debraine, Marcin Dymczyk, Roland Siegwart ; PMLR 87:456-465

The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamical Systems

Spencer M. Richards, Felix Berkenkamp, Andreas Krause ; PMLR 87:466-476

Learning 6-DoF Grasping and Pick-Place Using Attention Focus

Marcus Gualtieri, Robert Platt ; PMLR 87:477-486

Curiosity Driven Exploration of Learned Disentangled Goal Spaces

Adrien Laversanne-Finot, Alexandre Pere, Pierre-Yves Oudeyer ; PMLR 87:487-504

Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction

Valts Blukis, Dipendra Misra, Ross A. Knepper, Yoav Artzi ; PMLR 87:505-518

Batch Active Preference-Based Learning of Reward Functions

Erdem Biyik, Dorsa Sadigh ; PMLR 87:519-528

Learning Audio Feedback for Estimating Amount and Flow of Granular Material

Samuel Clarke, Travers Rhodes, Christopher G. Atkeson, Oliver Kroemer ; PMLR 87:529-550

HybridNet: Integrating Model-based and Data-driven Learning to Predict Evolution of Dynamical Systems

Yun Long, Xueyuan She, Saibal Mukhopadhyay ; PMLR 87:551-560

Benchmarking Reinforcement Learning Algorithms on Real-World Robots

A. Rupam Mahmood, Dmytro Korenkevych, Gautham Vasan, William Ma, James Bergstra ; PMLR 87:561-591

Learning Neural Parsers with Deterministic Differentiable Imitation Learning

Tanmay Shankar, Nicholas Rhinehart, Katharina Muelling, Kris M. Kitani ; PMLR 87:592-604

Learning to Localize Using a LiDAR Intensity Map

Ioan Andrei Barsan, Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun ; PMLR 87:605-616

Model-Based Reinforcement Learning via Meta-Policy Optimization

Ignasi Clavera, Jonas Rothfuss, John Schulman, Yasuhiro Fujita, Tamim Asfour, Pieter Abbeel ; PMLR 87:617-629

Reinforcement Learning of Phase Oscillators for Fast Adaptation to Moving Targets

Guilherme Maeda, Okan Koc, Jun Morimoto ; PMLR 87:630-640

Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation

Rika Antonova, Mia Kokic, Johannes A. Stork, Danica Kragic ; PMLR 87:641-650

Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation

Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine ; PMLR 87:651-673

Reward Estimation for Variance Reduction in Deep Reinforcement Learning

Joshua Romoff, Peter Henderson, Alexandre Piche, Vincent Francois-Lavet, Joelle Pineau ; PMLR 87:674-699

Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment

Fabio Muratore, Felix Treede, Michael Gienger, Jan Peters ; PMLR 87:700-713

Grounding Robot Plans from Natural Language Instructions with Incomplete World Knowledge

Daniel Nyga, Subhro Roy, Rohan Paul, Daehyung Park, Mihai Pomarlan, Michael Beetz, Nicholas Roy ; PMLR 87:714-723

Learning What Information to Give in Partially Observed Domains

Rohan Chitnis, Leslie Pack Kaelbling, Tomas Lozano-Perez ; PMLR 87:724-733

Sim-to-Real Reinforcement Learning for Deformable Object Manipulation

Jan Matas, Stephen James, Andrew J. Davison ; PMLR 87:734-743

Expanding Motor Skills using Relay Networks

Visak CV Kumar, Sehoon Ha, C. Karen Liu ; PMLR 87:744-756

Efficient Hierarchical Robot Motion Planning Under Uncertainty and Hybrid Dynamics

Ajinkya Jain, Scott Niekum ; PMLR 87:757-766

SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark

Linxi Fan, Yuke Zhu, Jiren Zhu, Zihua Liu, Orien Zeng, Anchit Gupta, Joan Creus-Costa, Silvio Savarese, Li Fei-Fei ; PMLR 87:767-782

Task-Embedded Control Networks for Few-Shot Imitation Learning

Stephen James, Michael Bloesch, Andrew J. Davison ; PMLR 87:783-795

Learning under Misspecified Objective Spaces

Andreea Bobu, Andrea Bajcsy, Jaime F. Fisac, Anca D. Dragan ; PMLR 87:796-805

Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation

Gregory Kahn, Adam Villaflor, Pieter Abbeel, Sergey Levine ; PMLR 87:806-816

Sim-to-Real Transfer with Neural-Augmented Robot Simulation

Florian Golemo, Adrien Ali Taiga, Aaron Courville, Pierre-Yves Oudeyer ; PMLR 87:817-828

Bayesian Generalized Kernel Inference for Terrain Traversability Mapping

Tixiao Shan, Jinkun Wang, Brendan Englot, Kevin Doherty ; PMLR 87:829-838

Multi-objective Model-based Policy Search for Data-efficient Learning with Sparse Rewards

Rituraj Kaushik, Konstantinos Chatzilygeroudis, Jean-Baptiste Mouret ; PMLR 87:839-855

Modular meta-learning

Ferran Alet, Tomas Lozano-Perez, Leslie P. Kaelbling ; PMLR 87:856-868

Dyadic collaborative Manipulation through Hybrid Trajectory Optimization

Theodoros Stouraitis, Iordanis Chatzinikolaidis, Michael Gienger, Sethu Vijayakumar ; PMLR 87:869-878

ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation

Ajay Mandlekar, Yuke Zhu, Animesh Garg, Jonathan Booher, Max Spero, Albert Tung, Julian Gao, John Emmons, Anchit Gupta, Emre Orbay, Silvio Savarese, Li Fei-Fei ; PMLR 87:879-893

Integrating kinematics and environment context into deep inverse reinforcement learning for predicting off-road vehicle trajectories

Yanfu Zhang, Wenshan Wang, Rogerio Bonatti, Daniel Maturana, Sebastian Scherer ; PMLR 87:894-905

Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation

Pratyusha Sharma, Lekha Mohan, Lerrel Pinto, Abhinav Gupta ; PMLR 87:906-915

Policies Modulating Trajectory Generators

Atil Iscen, Ken Caluwaerts, Jie Tan, Tingnan Zhang, Erwin Coumans, Vikas Sindhwani, Vincent Vanhoucke ; PMLR 87:916-926

A Physically-Consistent Bayesian Non-Parametric Mixture Model for Dynamical System Learning

Nadia Figueroa, Aude Billard ; PMLR 87:927-946

IntentNet: Learning to Predict Intention from Raw Sensor Data

Sergio Casas, Wenjie Luo, Raquel Urtasun ; PMLR 87:947-956

Interpretable Latent Spaces for Learning from Demonstration

Yordan Hristov, Alex Lascarides, Subramanian Ramamoorthy ; PMLR 87:957-968

ESIM: an Open Event Camera Simulator

Henri Rebecq, Daniel Gehrig, Davide Scaramuzza ; PMLR 87:969-982

Robustness via Retrying: Closed-Loop Robotic Manipulation with Self-Supervised Learning

Frederik Ebert, Sudeep Dasari, Alex X. Lee, Sergey Levine, Chelsea Finn ; PMLR 87:983-993

Copyright © PMLR 2018. All rights reserved.