
Approximate Inverse Reinforcement Learning from Vision-based Imitation Learning. (arXiv:2004.08051v1 [cs.RO])

[Submitted on 17 Apr 2020]


Abstract: In this work, we present a method for obtaining an implicit objective
function for vision-based navigation. The proposed methodology relies on
Imitation Learning, Model Predictive Control (MPC), and Deep Learning. We use
Imitation Learning as a means of performing Inverse Reinforcement Learning,
creating an approximate costmap generator for a visual navigation task. The
resulting costmap is used in conjunction with a Model Predictive Controller for
real-time control, and in novel environments it outperforms other
state-of-the-art costmap generators combined with MPC. The proposed process
allows for simple training and robustness to out-of-sample data. We apply our
method to the task of vision-based autonomous driving in multiple real and
simulated environments, using the same weights for the costmap predictor in all
environments.
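The pipeline described in the abstract, a learned costmap predictor feeding a sampling-based MPC, can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the costmap predictor is stubbed out with a fixed map (the paper learns it from imitation data with a deep network), and the unicycle dynamics, grid size, and candidate steering rates are invented for the example.

```python
import numpy as np

def predict_costmap(image):
    # Stand-in for the learned costmap predictor (a deep network in the
    # paper). Returns a toy 64x64 costmap with a low-cost central corridor.
    costmap = np.ones((64, 64))
    costmap[:, 24:40] = 0.1  # cheap, drivable corridor
    return costmap

def rollout_cost(costmap, x0, y0, heading, steer, n_steps=20, step=1.0):
    # Accumulate costmap values along a simple unicycle rollout with a
    # constant steering rate (hypothetical dynamics, for illustration only).
    x, y, th = float(x0), float(y0), float(heading)
    total = 0.0
    for _ in range(n_steps):
        th += steer
        x += step * np.cos(th)
        y += step * np.sin(th)
        i = int(np.clip(round(y), 0, costmap.shape[0] - 1))
        j = int(np.clip(round(x), 0, costmap.shape[1] - 1))
        total += costmap[i, j]
    return total

def mpc_step(costmap, x0=32, y0=0, heading=np.pi / 2,
             candidates=np.linspace(-0.2, 0.2, 5)):
    # Sampling-based MPC: score each candidate control against the costmap
    # over a short horizon and apply the cheapest one.
    costs = [rollout_cost(costmap, x0, y0, heading, s) for s in candidates]
    return candidates[int(np.argmin(costs))]

costmap = predict_costmap(image=None)
best_steer = mpc_step(costmap)  # driving straight up the corridor is cheapest
```

In the paper's setting, `predict_costmap` would be replaced by the trained network mapping a camera image to a costmap, and the controller would replan at each timestep from the current state.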

Submission history

From: Keuntaek Lee [view email]
[v1]
Fri, 17 Apr 2020 03:36:50 UTC (6,822 KB)

Source: http://arxiv.org/abs/2004.08051
