Summary
Multimedia content is indispensable in modern society, making effective content management essential. A critical aspect of this is assessing the similarity between two multimedia items such as images, videos, and documents. LUSt's mission is to develop a universal similarity function capable of precisely measuring similarity across a broad spectrum of multimedia domains and tasks. Diverging from the problem-specific approaches prevalent in current literature, LUSt adopts a novel strategy: multimedia items are broken down into their constituent parts, such as image regions, video frames, and text sentences, and a foundational model is then trained on input data comprising the pairwise similarities between those parts. This choice yields a universal input space with two main advantages. First, it enables seamless collaboration across domains and tasks, allowing joint training and mutual enhancement among tasks, which will be further enriched through multi-task learning techniques. Second, it simplifies the integration of synthetic data during training, a key ingredient for large-scale training of a foundational model. The model architecture is built on transformer-based deep learning modules and strengthened by novel positional encodings rooted in kernel methods. These positional encodings make it possible to handle the differing part topologies encountered across domains, a formidable challenge in itself. The work program starts with a single domain and task but is designed for extensibility. The ultimate goal is a foundational model that accommodates all modalities (visual, audio, text) and supports a broad range of similarity types, including uni-modal, cross-modal, and multi-modal scenarios. LUSt's claim to universality will be validated through comprehensive benchmarking spanning numerous tasks and domains.
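The abstract's core pipeline, decomposing two items into parts and feeding the part-similarity matrix (plus kernel-based positional features) to a downstream model, can be sketched minimally as follows. This is only an illustration of the general idea under stated assumptions: the function names, the use of cosine similarity between part embeddings, and the RBF kernel for positional features are hypothetical choices, not LUSt's actual design.

```python
import numpy as np

def part_similarity_matrix(parts_a: np.ndarray, parts_b: np.ndarray) -> np.ndarray:
    """Cosine similarity between every part of item A and every part of item B.

    parts_a: (m, d) embeddings of item A's parts (e.g. image regions).
    parts_b: (n, d) embeddings of item B's parts (e.g. text sentences).
    Returns an (m, n) matrix: the universal input space described above.
    """
    a = parts_a / np.linalg.norm(parts_a, axis=1, keepdims=True)
    b = parts_b / np.linalg.norm(parts_b, axis=1, keepdims=True)
    return a @ b.T

def rbf_positional_encoding(positions: np.ndarray, centers: np.ndarray,
                            gamma: float = 10.0) -> np.ndarray:
    """Kernel-based positional features: one RBF response per reference center.

    Because parts from different domains have different topologies (1-D frame
    order, 2-D region layout, ...), a kernel over normalized part positions
    gives a domain-agnostic encoding. positions: (m,), centers: (k,);
    returns an (m, k) feature matrix with entries in (0, 1].
    """
    d2 = (positions[:, None] - centers[None, :]) ** 2
    return np.exp(-gamma * d2)
```

In this sketch, a transformer would consume the similarity matrix rows together with the positional features; that aggregation step is omitted here since the abstract gives no architectural detail beyond "transformer-based modules".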
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101154126
Start date: 01-01-2025
End date: 31-12-2026
Total budget - Public funding: - 150 438,00 Euro
Cordis data
Status: SIGNED
Call topic: HORIZON-MSCA-2023-PF-01-01
Update date: 20-11-2024
Geographical location(s)