NERPHYS | Empowering Neural Rendering Methods with Physically-Based Capabilities

Summary
Long restricted to an elite of expert digital artists, 3D content creation has recently been greatly simplified by deep learning. Neural representations of 3D objects have revolutionized real-world capture from photos, while generative models are starting to enable 3D object synthesis from text prompts. These methods use differentiable neural rendering, which allows efficient optimization of powerful and expressive "soft" neural representations but ignores physically-based principles, and thus offers no guarantees on accuracy, severely limiting the utility of the resulting content.
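To make this concrete, the core loop of differentiable neural rendering is gradient descent on a photometric loss through a differentiable renderer. Below is a minimal, self-contained sketch in PyTorch; the toy sigmoid "renderer" and the per-pixel radiance grid are illustrative stand-ins, not the actual methods discussed above.

```python
import torch

# "Soft" scene representation: a learnable radiance grid (illustrative).
radiance = torch.zeros(1, 3, 32, 32, requires_grad=True)

def render(scene: torch.Tensor) -> torch.Tensor:
    # Stand-in differentiable renderer: any differentiable map from scene
    # parameters to an image lets gradients flow back from the loss.
    return torch.sigmoid(scene)

target = torch.rand(1, 3, 32, 32)  # a captured photo would go here
optimizer = torch.optim.Adam([radiance], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    image = render(radiance)
    # Photometric loss only: no physical correctness is enforced, which is
    # exactly the limitation described above.
    loss = torch.nn.functional.mse_loss(image, target)
    loss.backward()
    optimizer.step()
```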

Differentiable physically-based rendering, on the other hand, can produce 3D assets with physics-based parameters, but it depends on the rigid, traditional "hard" graphics representations required for light-transport computation, which make optimization much harder and are costly to evaluate, limiting applicability.
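For context, the quantity a physically-based renderer must compute, and a differentiable variant must also differentiate through, is governed by the rendering equation (Kajiya, 1986), written here in standard notation:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\mathbf{n} \cdot \omega_i)\,\mathrm{d}\omega_i
```

Here L_o is the outgoing radiance at surface point x in direction ω_o, L_e is emitted radiance, f_r is the BRDF, and the integral gathers incoming radiance L_i over the hemisphere Ω around the surface normal n. Evaluating this integral requires well-defined surfaces and normals, which is why the "hard" representations mentioned above are needed.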

In NERPHYS we will combine the strengths of both neural and physically-based rendering, lifting their respective limitations by introducing polymorphic 3D representations, i.e., representations capable of morphing between different states to accommodate both efficient gradient-based optimization and physically-based light transport. By augmenting these representations with corresponding polymorphic differentiable renderers, our methodology will unleash the potential of neural rendering to produce physically-based 3D assets with guarantees on accuracy.
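As a purely conceptual illustration of what a polymorphic representation might look like, the sketch below exposes one set of learnable parameters through two views: a smooth, gradient-friendly "soft" state and a thresholded "hard" state suitable for light-transport queries. The interface and the thresholding scheme are hypothetical, chosen only to illustrate the idea, and do not describe the NERPHYS design.

```python
from dataclasses import dataclass
import torch

@dataclass
class PolymorphicScene:
    params: torch.Tensor  # shared learnable parameters

    def as_soft(self) -> torch.Tensor:
        # Soft state: a smooth occupancy field suited to gradient descent.
        return torch.sigmoid(self.params)

    def as_hard(self, threshold: float = 0.5) -> torch.Tensor:
        # Hard state: binary occupancy suited to light-transport queries
        # such as ray-surface intersection (non-differentiable).
        return (torch.sigmoid(self.params) > threshold).float()

scene = PolymorphicScene(torch.randn(32, 32, 32, requires_grad=True))
soft_view = scene.as_soft()  # drive neural-rendering losses
hard_view = scene.as_hard()  # hand to a physically-based renderer
```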

NERPHYS will have ground-breaking impact on 3D content creation, moving beyond today's merely plausible imagery to full physically-based rendering with guarantees on error, enabling the use of powerful neural rendering methods in any application requiring accuracy. Our polymorphic approach will fundamentally change how we reason about scene representations for geometry and appearance, while our rendering algorithms will provide a new methodology for image synthesis, e.g., for training data generation or visual effects.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101141721
Start date: 01-12-2024
End date: 30-11-2029
Total budget: 2 488 029,00 EUR (public funding: 2 488 029,00 EUR)
CORDIS data


Status: SIGNED
Call topic: ERC-2023-ADG
Update date: 29-09-2024
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.1 European Research Council (ERC)
      HORIZON.1.1.1 Frontier science
        ERC-2023-ADG ERC ADVANCED GRANTS