LUMINOUS | Language Augmentation for Humanverse

Summary
LUMINOUS aims to create the next generation of language-augmented XR systems, in which natural language-based communication and Multimodal Large Language Models (MLLMs) enable adaptation to individual, not predefined, user needs and to unseen environments. This will enable future XR users to interact fluently with their environment while having instant access to constantly updated global and domain-specific knowledge sources to accomplish novel tasks. We aim to exploit MLLMs injected with domain-specific knowledge to describe novel tasks on user demand. These descriptions are then communicated through a speech interface and/or a task-adaptable avatar (e.g., coach or teacher) in the form of visual aids and procedural steps for accomplishing the task. Language-driven specification of the style, facial expressions, and specific attitudes of virtual avatars will facilitate generalisable and situation-aware communication across multiple use cases and sectors. In parallel, MLLMs will improve at identifying new objects that were not part of their training data and at describing them in a way that makes them visually recognizable. Our results will be prototyped and tested in three pilots, focusing on neurorehabilitation (support of stroke patients with language impairments), immersive industrial safety training, and 3D architectural design review. A consortium of six leading R&D institutes, with expertise in six different disciplines (AI, Augmented Vision, NLP, Computer Graphics, Neurorehabilitation, Ethics), will follow a challenging workplan, aiming to bring about a new era at the crossroads of two of the most promising current technological developments (LLM/AI and XR), made in Europe.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101135724
Start date: 01-01-2024
End date: 31-12-2026
Total budget: 5 603 706,25 Euro
Public funding: 5 603 706,00 Euro
Cordis data

Status

SIGNED

Call topic

HORIZON-CL4-2023-HUMAN-01-21

Update Date

12-03-2024
Geographical location(s)
Structured mapping
Horizon Europe
  HORIZON.2 Global Challenges and European Industrial Competitiveness
    HORIZON.2.4 Digital, Industry and Space
      HORIZON.2.4.0 Cross-cutting call topics
        HORIZON-CL4-2023-HUMAN-01-CNECT
          HORIZON-CL4-2023-HUMAN-01-21 Next Generation eXtended Reality (RIA)
      HORIZON.2.4.3 Emerging enabling technologies
        HORIZON-CL4-2023-HUMAN-01-CNECT
          HORIZON-CL4-2023-HUMAN-01-21 Next Generation eXtended Reality (RIA)