NH-DEHAZE | Dataset and dehazing methods for non-homogeneous and dense hazy scenes

Summary
In the presence of haze, small floating particles absorb light and scatter it away from its propagation direction. This results in selective and significant attenuation of the light spectrum, and causes hazy scenes to suffer a loss of contrast and sharpness for distant objects. Moreover, most computer vision and image processing algorithms (e.g., from feature extraction to object/scene detection and recognition) usually assume that the input image is the scene radiance (haze-free image), and therefore strongly suffer from the color shift and low contrast induced by hazy conditions. For instance, in normal visibility conditions the Traffic Sign Detection and Recognition (TSDR) module of existing ADAS systems reaches a detection rate averaging around 90%, but drops below 40% under haze or poor illumination conditions [1]. Therefore, many recent works have explored inverse problem formulations and have designed dedicated image enhancement methods to address the dehazing problem. However, to estimate their key internal parameters (e.g., the airlight in Koschmieder's light transmission model), most of those solutions assume a homogeneous distribution of light and haze, which is rarely the case in practice (e.g., lighting is non-uniform in space and frequency during the night, and attenuation caused by haze depends on the light frequency).
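To make the inverse-problem formulation concrete, the sketch below implements Koschmieder's light transmission model, I = J·t + A·(1 − t), where I is the observed hazy image, J the scene radiance (haze-free image), t the transmission map, and A the airlight. This is a minimal illustration with NumPy, not the project's code; note that the inversion assumes t and A are known, which is exactly the homogeneity assumption the project argues rarely holds.

```python
import numpy as np

def add_haze(J, t, A):
    """Synthesize a hazy observation I from scene radiance J,
    transmission t, and airlight A (Koschmieder's model)."""
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the model to recover J; t is clamped to t_min to
    avoid amplifying noise where almost no light is transmitted."""
    t_c = np.maximum(t, t_min)
    return (I - A * (1.0 - t_c)) / t_c

# Toy example with uniform (homogeneous) haze.
J = np.full((4, 4), 0.8)   # bright haze-free scene
t = np.full((4, 4), 0.5)   # constant transmission map
A = 1.0                    # white airlight
I = add_haze(J, t, A)      # hazy observation
J_rec = dehaze(I, t, A)    # exact recovery under these assumptions
```

With non-homogeneous haze, t varies spatially (and A may too), so both must be estimated from the image, which is where learned approaches come in.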

Image dehazing thus remains a largely unsolved problem in case of dense and non-homogeneous haze scenes.

As a federating objective, our project aims at implementing dehazing methods suited to dense and non-homogeneous hazy scenes. This implies the following tasks:
(O1) build up the first (world-wide) image dataset of paired hazy and haze-free scenes, in which the hazy scenes contain real, dense, and non-homogeneous haze;
(O2) develop and train deep dehazing neural networks to derive dehazed images from hazy inputs;
(O3) train deep image interpretation models suited to images captured in adverse conditions.
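The paired dataset of (O1) is what makes the networks of (O2) quantitatively evaluable: each dehazed output can be scored against its haze-free ground truth. The sketch below shows such an evaluation with PSNR, a standard metric for this task; the images and values are toy data for illustration, not project results.

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio (dB) between a haze-free reference
    image and a dehazed estimate, both floats in [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy hazy/haze-free pair: ground truth vs. a slightly biased
# dehazed result (stand-in for a network's output).
gt = np.linspace(0.0, 1.0, 16).reshape(4, 4)
dehazed = np.clip(gt + 0.01, 0.0, 1.0)
score = psnr(gt, dehazed)   # around 40 dB for this small bias
```

Higher PSNR means the dehazed image is closer to the haze-free reference; benchmarks for this task typically report PSNR alongside SSIM over the whole paired test set.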
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/890254
Start date: 01-10-2020
End date: 30-05-2025
Total budget: 178 320,00 Euro; public funding: 178 320,00 Euro
Cordis data

Original description

See the summary above (identical text).

Status

SIGNED

Call topic

MSCA-IF-2019

Update Date

28-04-2024