Summary
PRESENT is a proposal for a three-year Research and Innovation project to create virtual digital companions (embodied agents) that look entirely naturalistic, demonstrate emotional sensitivity, can establish meaningful dialogue, add sense to the experience, and act as trustworthy guardians and guides in interfaces for AR, VR and more traditional forms of media.
There is no higher-quality interaction than the human experience of using all our senses together with language and cognition to understand our surroundings and, above all, to interact with other people. We interact with today's 'Intelligent Personal Assistants' primarily by voice; communication is episodic, based on a request-response model. The user does not see the assistant, which neither takes advantage of visual and emotional cues nor evolves over time. However, advances in the real-time creation of photorealistic computer-generated characters, coupled with emotion recognition and behaviour modelling and natural-language technologies, allow us to envisage virtual agents that are realistic in both looks and behaviour; that can interact with users through vision, sound, touch and movement as they navigate rich and complex environments; converse in a natural manner; respond to moods and emotional states; and evolve in response to user behaviour.
PRESENT will create and demonstrate a set of practical tools, a pipeline and APIs for creating realistic embodied agents and incorporating them in interfaces for a wide range of applications in entertainment, media and advertising. The international partnership includes the Oscar-winning VFX company Framestore; technology developers Brainstorm, Cubic Motion and IKinema; Europe’s largest certification authority InfoCert; research groups from Universitat Pompeu Fabra, Universität Augsburg and Inria; and the pioneers of immersive virtual reality performance CREW.
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/856879 |
Start date: | 01-09-2019 |
End date: | 31-08-2022 |
Total budget - Public funding: | 4 102 070,00 Euro - 4 102 070,00 Euro |
Cordis data
Status: | CLOSED |
Call topic: | ICT-25-2018-2020 |
Update date: | 27-10-2022 |
Structured mapping
H2020-EU.2.1.1. INDUSTRIAL LEADERSHIP - Leadership in enabling and industrial technologies - Information and Communication Technologies (ICT)