ULTRACEPT | Ultra-layered perception with brain-inspired information processing for vehicle collision avoidance

Summary
Autonomous vehicles, although still in their early stages, have demonstrated huge potential to shape future lifestyles. However, to be accepted by ordinary users, autonomous vehicles must solve a critical issue: trustworthy collision detection. No one wants an autonomous car that is doomed to a collision every few years or months. In the real world, collisions happen every second, and more than 1.3 million people are killed in road accidents every single year. Current approaches to vehicle collision detection, such as vehicle-to-vehicle communication, radar, laser-based Lidar and GPS, fall short in reliability, cost, energy consumption and size. For example, radar is overly sensitive to metallic materials; Lidar is expensive and performs poorly on absorbing or reflective surfaces; GPS-based methods struggle in cities with tall buildings; vehicle-to-vehicle communication cannot detect pedestrians or other unconnected objects; segmentation-based vision methods are too computationally demanding to be miniaturised; and conventional vision sensors cannot cope with fog, rain or dim night-time conditions. To save lives and make autonomous vehicles safer in serving human society, a new type of trustworthy, robust, low-cost and low-energy collision detection and avoidance system is badly needed.

This consortium proposes an innovative solution: brain-inspired, multi-layered, multi-modal information processing for trustworthy vehicle collision detection. It exploits the low-cost spatio-temporal and parallel computing capacity of bio-inspired visual neural systems, combined with multi-modal data inputs, to extract potential collision cues under complex weather and lighting conditions.
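Bio-inspired visual neural systems of this kind are typically modelled on insect motion-sensitive neurons such as the locust's lobula giant movement detector (LGMD), a well-known low-cost looming detector. The summary does not name a specific model, so the following Python sketch is purely illustrative rather than the consortium's actual implementation: it shows one minimal LGMD-style excitation/inhibition pipeline that flags looming (rapidly expanding) objects from consecutive grey-scale frames. All function names and parameter values are hypothetical.

import numpy as np
from scipy.signal import convolve2d

# Lateral-inhibition kernel: each pixel is inhibited by its neighbours'
# one-frame-delayed excitation; the centre weight is zero.
INHIB_KERNEL = np.array([[0.125, 0.25, 0.125],
                         [0.25,  0.0,  0.25 ],
                         [0.125, 0.25, 0.125]])

def lgmd_step(frame, prev_frame, prev_excitation, w_inhib=2.0, tau=0.05):
    """One time step of a simplified LGMD-style looming detector.

    frame, prev_frame : 2-D grey-scale images (values in 0..255).
    prev_excitation   : P-layer output of the previous step, which
                        drives the delayed lateral inhibition.
    Returns (membrane potential in [0, 1), current P-layer excitation).
    """
    # P layer: photoreceptors respond to luminance change between frames.
    p = np.abs(frame.astype(float) - prev_frame.astype(float))

    # I layer: spatially spread, temporally delayed inhibition.
    i = convolve2d(prev_excitation, INHIB_KERNEL,
                   mode="same", boundary="symm")

    # S layer: excitation suppressed by inhibition, then rectified.
    s = np.clip(p - w_inhib * i, 0.0, None)

    # LGMD cell: pool the S layer and squash it, so the output saturates
    # as an object expands rapidly across the visual field.
    k = s.mean() / 255.0
    membrane = 1.0 - np.exp(-k / tau)
    return membrane, p

# Usage: raise an alarm whenever the membrane potential exceeds a
# threshold, which in this scheme signals a fast-expanding edge.
def detect_collisions(frames, threshold=0.85):
    excitation = np.zeros_like(frames[0], dtype=float)
    prev = frames[0]
    alarms = []
    for t, frame in enumerate(frames[1:], start=1):
        membrane, excitation = lgmd_step(frame, prev, excitation)
        alarms.append((t, membrane > threshold))
        prev = frame
    return alarms

The design point the sketch illustrates is that inhibition arrives one frame late and spreads laterally: translating motion is largely cancelled by its own delayed inhibition, while an expanding edge keeps outrunning it, which is what makes the looming response selective at very low computational cost.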
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/778062
Start date: 01-12-2018
End date: 30-09-2024
Total budget: 2 191 500,00 Euro
Public funding: 1 894 500,00 Euro
Cordis data

Status: SIGNED
Call topic: MSCA-RISE-2017
Update date: 28-04-2024
Geographical location(s)