Summary
Recently, the field of computer vision has witnessed a major transformation away from expert-designed shallow models towards more generic deep representation learning. However, collecting labeled data for training deep models is costly, and existing simulators with artist-designed scenes do not provide the required variety and fidelity. Project LEGO-3D will tackle this problem by developing probabilistic models capable of synthesizing 3D scenes jointly with photo-realistic 2D projections from arbitrary viewpoints and with full control over the scene elements. Our key insight is that data augmentation, while hard in 2D, becomes considerably easier in 3D, as physical properties such as viewpoint invariance and occlusion relationships are captured by construction. Thus, our goal is to learn the entire 3D-to-2D simulation pipeline. In particular, we will focus on the following problems:
(A) We will devise algorithms for the automatic decomposition of real and synthetic scenes into latent 3D primitive representations capturing geometry, material, lighting, and motion.
(B) We will develop novel probabilistic generative models able to synthesize large-scale 3D environments based on the primitives extracted in project (A). In particular, we will develop unconditional, conditional, and spatio-temporal scene generation networks.
(C) We will combine differentiable and neural rendering techniques with deep-learning-based image synthesis, yielding high-fidelity 2D renderings of the 3D representations generated in project (B) while capturing ambiguities and uncertainties (see the toy sketch after this list).
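To make the (A)-(C) pipeline concrete, the following is a minimal, illustrative sketch, not the project's actual method: a latent code is decoded into a set of hypothetical 3D primitives (isotropic Gaussian blobs with position, scale, and intensity), which a differentiable renderer projects to a 2D image, so that gradients of an image-space loss flow back to the scene representation. All class and function names below are invented for illustration.

```python
# Toy, end-to-end differentiable 3D-to-2D pipeline (hypothetical sketch).
import torch
import torch.nn as nn


class PrimitiveDecoder(nn.Module):
    """Decodes a latent code into K primitive parameter sets (cf. project B)."""

    def __init__(self, latent_dim=64, num_primitives=8):
        super().__init__()
        self.num_primitives = num_primitives
        # 5 parameters per primitive: 3D position, scale, intensity.
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_primitives * 5),
        )

    def forward(self, z):
        params = self.net(z).view(-1, self.num_primitives, 5)
        position = params[..., 0:3]                                   # (B, K, 3)
        scale = nn.functional.softplus(params[..., 3]) + 1e-2         # (B, K)
        intensity = torch.sigmoid(params[..., 4])                     # (B, K)
        return position, scale, intensity


def render(position, scale, intensity, resolution=32):
    """Differentiably splats 3D Gaussian primitives onto a 2D image plane
    via orthographic projection along the z-axis (toy stand-in for project C).
    """
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, resolution),
        torch.linspace(-1, 1, resolution),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=-1)                              # (H, W, 2)
    # Squared distance from each pixel to each primitive's projected center.
    d2 = ((grid[None, None] - position[:, :, None, None, 0:2]) ** 2).sum(-1)
    # Soft depth weighting stands in for occlusion reasoning: primitives
    # closer to the camera (smaller z) contribute more to the image.
    weight = torch.softmax(-position[..., 2], dim=1)                  # (B, K)
    blobs = intensity[..., None, None] * torch.exp(
        -d2 / (2 * scale[..., None, None] ** 2)
    )
    return (weight[..., None, None] * blobs).sum(dim=1)               # (B, H, W)


if __name__ == "__main__":
    decoder = PrimitiveDecoder()
    z = torch.randn(4, 64, requires_grad=True)
    image = render(*decoder(z))
    # The whole pipeline is differentiable: a 2D image loss propagates
    # gradients back to the latent scene code.
    image.mean().backward()
    print(image.shape, z.grad.shape)
```

The design point this sketch illustrates is the one stated above: because the scene is represented in 3D before projection, viewpoint changes and occlusions are obtained by construction rather than learned from 2D data.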
Project LEGO-3D will significantly impact a large number of application areas. Examples include vision systems that require access to large amounts of annotated data, safety-critical applications such as autonomous cars that rely on efficient means of training and validation, and the entertainment industry, which seeks to automate the creation and manipulation of 3D content.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/850533
Start date: 01-10-2020
End date: 31-03-2026
Total budget - Public funding: 1,467,500.00 EUR - 1,467,500.00 EUR
Cordis data
Status: SIGNED
Call topic: ERC-2019-STG
Update date: 27-04-2024
Geographical location(s)