Multi-task learning with attention for end-to-end autonomous driving

K Ishihara, A Kanervisto, J Miura… - Proceedings of the …, 2021 - openaccess.thecvf.com
Abstract
Autonomous driving systems need to handle complex scenarios such as lane following, avoiding collisions, taking turns, and responding to traffic signals. In recent years, approaches based on end-to-end behavioral cloning have demonstrated remarkable performance in point-to-point navigational scenarios, using a realistic simulator and standard benchmarks. Offline imitation learning is readily available, as it does not require expensive hand annotation or interaction with the target environment, but obtaining a reliable system from it remains difficult. In addition, existing methods have not specifically addressed learning to react to traffic lights, which occur only rarely in the training datasets. Inspired by previous work on multi-task learning and attention modeling, we propose a novel multi-task attention-aware network in the conditional imitation learning (CIL) framework. Our approach improves not only the success rate on standard benchmarks but also the ability to react to traffic lights, as we demonstrate on those benchmarks.
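
To make the described setup concrete, below is a minimal, hypothetical sketch of a command-conditioned (CIL-style) network with a spatial attention map over image features and an auxiliary multi-task head. All layer sizes, the attention mechanism, and the traffic-light head are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class MultiTaskCILNet(nn.Module):
    """Hypothetical multi-task, attention-aware CIL network (illustrative only)."""

    def __init__(self, num_commands: int = 4):
        super().__init__()
        # Shared image encoder (placeholder CNN).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
        )
        # Spatial attention: a 1x1 conv producing a per-pixel weight map.
        self.attention = nn.Sequential(nn.Conv2d(128, 1, kernel_size=1), nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)
        # One control branch per high-level navigation command (CIL-style branching).
        self.control_branches = nn.ModuleList(
            nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))  # steer, throttle, brake
            for _ in range(num_commands)
        )
        # Auxiliary head, e.g. traffic-light state classification (an assumed extra task).
        self.traffic_light_head = nn.Linear(128, 2)

    def forward(self, image: torch.Tensor, command: torch.Tensor):
        feat = self.encoder(image)                    # (B, 128, H, W)
        attn = self.attention(feat)                   # (B, 1, H, W) attention map
        z = self.pool(feat * attn).flatten(1)         # attention-weighted, pooled features
        # Evaluate all branches, then select the one matching each sample's command.
        controls = torch.stack([b(z) for b in self.control_branches], dim=1)  # (B, C, 3)
        idx = command.view(-1, 1, 1).expand(-1, 1, controls.size(-1))
        control = controls.gather(1, idx).squeeze(1)  # (B, 3)
        light_logits = self.traffic_light_head(z)     # auxiliary prediction
        return control, light_logits

if __name__ == "__main__":
    net = MultiTaskCILNet()
    img = torch.randn(2, 3, 88, 200)   # example camera frames
    cmd = torch.tensor([0, 2])         # high-level command index per sample
    control, light = net(img, cmd)
    print(control.shape, light.shape)  # torch.Size([2, 3]) torch.Size([2, 2])

In a multi-task training loop, the control output would be supervised with an imitation (behavioral cloning) loss and the auxiliary head with its own loss, summed with weighting coefficients; the exact losses and weights used in the paper are not specified here.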