VUAD | Video Understanding for Autonomous Driving

Summary
Autonomous vision aims to solve the computer vision problems that arise in autonomous driving. Current algorithms achieve impressive single-image results on tasks such as object detection and semantic segmentation, but this success has not yet been fully extended to video sequences. It is commonly acknowledged in computer vision that video understanding lags years behind single-image understanding, mainly for two reasons: the processing power required to reason across multiple frames, and the difficulty of obtaining ground truth for every frame of a sequence, especially for pixel-level tasks such as motion estimation. These observations suggest two promising directions for improving video understanding in autonomous vision: unsupervised learning, and object-level rather than pixel-level reasoning. Following these directions, we propose to tackle three relevant problems in video understanding. First, we propose a deep learning method for multi-object tracking on graph-structured data. Second, we extend it to joint video object detection and tracking, exploiting temporal cues to improve both detection and tracking performance. Third, we propose to learn a background motion model for the static parts of the scene in an unsupervised manner; our long-term goal is to learn detection and tracking in an unsupervised manner as well. Once these stepping stones are in place, we plan to combine the proposed algorithms into a unified video understanding module and evaluate its performance against both its single-image counterparts and state-of-the-art video understanding algorithms.
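The project's graph-based tracking method is not detailed in this summary. For orientation only, the sketch below shows a standard frame-to-frame baseline that graph-structured trackers build upon: associating detections across consecutive frames by greedily matching the highest-IoU box pairs above a threshold. All function names and the threshold value are illustrative assumptions, not part of the project.

```python
# Illustrative baseline only (not the project's method): greedy IoU-based
# association of detections between two consecutive video frames.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(prev_boxes, curr_boxes, thresh=0.3):
    """Greedily match boxes across frames; returns (prev_idx, curr_idx) pairs."""
    # Score every cross-frame pair, then take pairs in descending IoU order.
    pairs = sorted(
        ((iou(p, c), i, j)
         for i, p in enumerate(prev_boxes)
         for j, c in enumerate(curr_boxes)),
        reverse=True)
    used_p, used_c, matches = set(), set(), []
    for score, i, j in pairs:
        if score < thresh:
            break  # remaining pairs overlap too little to be the same object
        if i not in used_p and j not in used_c:
            matches.append((i, j))
            used_p.add(i)
            used_c.add(j)
    return matches
```

A learned graph-based tracker would replace the hand-crafted IoU cost with learned edge features and solve the association globally rather than greedily; this sketch only fixes the problem setup.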
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/898466
Start date: 01-04-2020
End date: 31-03-2022
Total budget: €145,355.52 - Public funding: €145,355.00
Cordis data


Status

CLOSED

Call topic

MSCA-IF-2019

Update Date

28-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.3. EXCELLENT SCIENCE - Marie Skłodowska-Curie Actions (MSCA)
H2020-EU.1.3.2. Nurturing excellence by means of cross-border and cross-sector mobility
H2020-MSCA-IF-2019
MSCA-IF-2019