Multimodal Deep Learning for Advanced Driving Systems

Abstract

Multimodal deep learning concerns learning features over multiple sensor modalities. Impressive progress has been made in deep learning solutions that rely on a single sensor modality for advanced driving, but these approaches are limited to certain functionalities. The potential of multimodal sensor fusion remains largely unexploited, even though research vehicles are commonly equipped with various sensor types. How to combine their data to achieve complex scene analysis, and thereby improve robustness in driving, is still an open question. While several surveys exist on intelligent vehicles or deep learning, to date there is no survey on multimodal deep learning for advanced driving. This paper attempts to narrow this gap by providing the first review that analyzes the existing literature together with two indispensable elements: sensors and datasets. We also provide our insights on future challenges and work to be done.

Publication
Articulated Motion and Deformable Objects
Nerea Aranjuelo Ansa
Former PhD student

My research focuses on machine learning and computer vision for multimodal perception systems.

Ignacio Arganda-Carreras
Ikerbasque Research Associate Professor

My research interests include image processing, computer vision, and deep learning for biomedical applications.