Summary
"Visualizing our surroundings and imagination has been an integral part of human history. In today's era, we have the privilege to immerse in 3D digital environments and interact with virtual objects and characters. However, creating digital representations of environments (i.e., 3D models) often requires excessive amount of manual effort and time even for trained 3D artists. Over the recent years, there have been remarkable advances in deep learning methods that attempt to reconstruct 3D models from real-world data captured in images or scans. However, we are still far from automatically producing 3D models usable in interactive 3D environments and simulations i.e., the resulting reconstructed 3D models lack controllers and metadata related to their articulation structure, possible motions, and interaction with other objects or agents. Automating the synthesis of interactive 3D models is crucial for several applications, such as (a) virtual and mixed reality environments where objects and characters are not static, but instead move and interact with each other, (b) automating animation pipelines, (c) training robots for object interaction in simulated environments, (d) 3D printing of functional objects, (e) digital entertainment. In this project, we will answer the question: ""how can we automate the generation of interactive 3D models of objects and characters?"". Our project will include the following thrusts:
(1) We will design deep architectures that automatically infer motion controllers and interaction-related metadata for input 3D models, effectively making them interactive.
(2) We will develop learning methods that replace dynamic real-world objects and characters captured in scans and video with high-quality, interactive, and animated 3D models as digital representatives.
(3) We will develop generative models that synthesize interactive 3D objects and characters automatically, and further help reconstructing them from scans and video more faithfully."
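The abstract does not specify an architecture, so the sketch below is only a rough illustration of what the articulation-metadata prediction in thrust (1) could look like: a hypothetical PointNet-style encoder that maps a part's point cloud to a joint type, axis, origin, and motion range. All class names, dimensions, and output choices are assumptions for illustration, not the project's actual method.

```python
# Illustrative sketch only (assumed design, not the project's method):
# predict articulation metadata for one 3D part from its point cloud.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArticulationHead(nn.Module):
    """Predicts joint type, axis, origin, and motion range for a single part."""
    def __init__(self, in_dim=3, feat_dim=256, num_joint_types=3):
        super().__init__()
        # Shared per-point MLP followed by max-pooling (PointNet-style global feature).
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )
        self.joint_type = nn.Linear(feat_dim, num_joint_types)  # e.g. revolute / prismatic / fixed
        self.joint_axis = nn.Linear(feat_dim, 3)                 # direction of motion
        self.joint_origin = nn.Linear(feat_dim, 3)               # a point the axis passes through
        self.motion_range = nn.Linear(feat_dim, 2)               # e.g. [min, max] angle or offset

    def forward(self, points):  # points: (B, N, 3)
        feat = self.point_mlp(points).max(dim=1).values          # (B, feat_dim)
        return {
            "type_logits": self.joint_type(feat),
            "axis": F.normalize(self.joint_axis(feat), dim=-1),  # unit-length axis
            "origin": self.joint_origin(feat),
            "range": self.motion_range(feat),
        }

# Usage: metadata for a batch of two parts, 1024 points each.
model = ArticulationHead()
pred = model(torch.randn(2, 1024, 3))
print(pred["type_logits"].shape, pred["axis"].shape)  # torch.Size([2, 3]) torch.Size([2, 3])
```

In such a setup, the predicted metadata could be attached to the input 3D model as a motion controller; the encoder, supervision signal, and output parameterization would of course differ in the actual project.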
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101124742
Start date: 01-10-2024
End date: 30-09-2029
Total budget - Public funding: 2 000 000,00 Euro - 2 000 000,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2023-COG
Update date: 12-03-2024