GraViLa | Graphs without Labels: Multimodal Structure Learning without Human Supervision

Summary
Multimodal learning focuses on training models with data in more than one modality, such as videos capturing visual and audio information or documents containing images and text. Current approaches use such data to train large-scale deep learning models without human supervision by sampling pairwise data, e.g., an image-text pair from a website, and training the network, e.g., to distinguish matching from non-matching pairs, in order to learn better representations.
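The following is a minimal sketch of this kind of pairwise matching objective, in the style of CLIP-like contrastive pretraining; it is an illustration under assumed PyTorch encoders and embedding shapes, not the project's actual training setup.

```python
# Minimal sketch (assumption, not the project's implementation): pairwise
# image-text matching with a symmetric contrastive loss, as commonly used
# for label-free multimodal pretraining. Encoder outputs are precomputed.
import torch
import torch.nn.functional as F

def pairwise_matching_loss(image_emb: torch.Tensor,
                           text_emb: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """Contrastive loss over a batch of (image, text) pairs.

    image_emb, text_emb: [batch, dim] embeddings from separate encoders.
    Pairs sharing a batch index are treated as matching; all other
    combinations in the batch serve as non-matching negatives.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # [batch, batch] similarities
    targets = torch.arange(logits.size(0))            # diagonal entries are matches
    # Symmetric cross-entropy: image-to-text and text-to-image directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random tensors standing in for encoder outputs.
loss = pairwise_matching_loss(torch.randn(8, 512), torch.randn(8, 512))
```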
We argue that multimodal learning can do more: by combining information from different sources, multimodal models capture cross-modal semantic entities, and since most multimodal documents are collections of connected modalities and topics, multimodal models should also allow us to capture the inherent high-level topology of such data. The goal of this project is to learn semantic structures from multimodal data, capturing long-range concepts and relations via multimodal and self-supervised learning without human annotation. We will represent this information in the form of a graph, treating latent semantic concepts as nodes and their connectivity as edges. Based on this structure, we will extend current unimodal approaches to capture and process data from different modalities in a single structure. Finally, we will explore the challenges and opportunities of the proposed idea with respect to its impact on two main challenges in machine learning: data-efficient learning and fairness in label-free learning.
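To make the intended representation concrete, the following sketch shows one possible way to encode such a structure, with latent concepts as nodes and weighted cross-modal links as edges; the classes and field names are illustrative assumptions, not the project's design.

```python
# Illustrative sketch (assumed structure): latent semantic concepts as nodes,
# connectivity between them (possibly across modalities) as weighted edges.
from dataclasses import dataclass, field

@dataclass
class ConceptNode:
    node_id: int
    modality: str        # e.g. "video", "audio", "text"
    embedding: list      # latent concept embedding

@dataclass
class MultimodalGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> ConceptNode
    edges: list = field(default_factory=list)   # (src_id, dst_id, weight)

    def add_node(self, node: ConceptNode) -> None:
        self.nodes[node.node_id] = node

    def connect(self, src_id: int, dst_id: int, weight: float) -> None:
        # Edges can link concepts across modalities and across distant
        # positions in a document, encoding its high-level topology.
        self.edges.append((src_id, dst_id, weight))

# Example: a video concept linked to a related text concept.
graph = MultimodalGraph()
graph.add_node(ConceptNode(0, "video", [0.1, 0.3]))
graph.add_node(ConceptNode(1, "text", [0.2, 0.4]))
graph.connect(0, 1, weight=0.9)
```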
By bridging the gap between these two parallel trends, multimodal supervision and graph-based representations, we combine their strengths in generating and processing topological data, which will not only allow us to build new applications and tools but also open new ways of processing and understanding large-scale data that are currently out of reach.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101117556
Start date: 01-04-2024
End date: 31-03-2029
Total budget: 1 499 438,00 Euro
Public funding: 1 499 438,00 Euro
CORDIS data


Status

SIGNED

Call topic

ERC-2023-STG

Update Date

22-11-2024
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.1 European Research Council (ERC)
      HORIZON.1.1.0 Cross-cutting call topics
        ERC-2023-STG ERC STARTING GRANTS
      HORIZON.1.1.1 Frontier science
        ERC-2023-STG ERC STARTING GRANTS