FOUNDATIONS | A Foundation for Empirical Multimodality Research

Summary
This project lays a foundation for conducting data-driven empirical research on multimodality, that is, how human communication and interaction rely on combinations of 'modes' of expression. Theories of multimodality are rapidly gaining currency in diverse fields concerned with human communication, interaction and cultural production. However, most theories of multimodality are based on conjecture and remain without an empirical foundation due to the lack of large, richly annotated multimodal corpora and methods for their analysis.

FOUNDATIONS solves this problem by developing a novel methodology for conducting empirical research on multimodality in everyday cultural artefacts, such as newspapers, textbooks, magazines and social media videos. This methodology enables a critical examination of key theoretical concepts in the field – medium, semiotic mode and genre – and of their joint contribution to meaning-making, renewing our understanding of these concepts and placing their definitions on a solid empirical foundation.

To do so, FOUNDATIONS creates large and reproducible multimodal corpora using microtask crowdsourcing, which breaks complex annotation tasks into piecemeal work and distributes this effort to non-expert workers on online platforms. The resulting crowdsourced descriptions are combined with computational representations into graphs that capture the structure of multimodal discourse. To analyse these corpora, the project develops novel methods based on neuro-symbolic artificial intelligence, which combine crowdsourced human insight with the pattern-recognition capabilities of neural networks.
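
As an illustration of the graph representation described above, the following sketch shows one plausible way crowdsourced descriptions of a newspaper page could be encoded as a discourse graph. The node labels, attributes and discourse relations are hypothetical examples for illustration only, not the project's actual annotation scheme; the sketch uses Python with the networkx library.

    # Illustrative sketch only: the node types, attributes and relation names
    # below are hypothetical examples, not the project's annotation scheme.
    import networkx as nx

    page = nx.DiGraph()

    # Nodes: content units of a newspaper page, described by crowdworkers.
    page.add_node("headline-1", mode="written text", text="Storm hits the coast")
    page.add_node("photo-1", mode="photograph", description="Waves flooding a pier")
    page.add_node("caption-1", mode="written text", text="The pier on Monday morning")

    # Edges: discourse relations between units, e.g. a caption elaborating a
    # photograph, and a photograph illustrating a headline.
    page.add_edge("caption-1", "photo-1", relation="elaboration")
    page.add_edge("photo-1", "headline-1", relation="illustration")

    # Such graphs can then be queried directly or passed to graph-based
    # (e.g. neuro-symbolic) learning methods.
    for source, target, data in page.edges(data=True):
        print(f"{source} --{data['relation']}--> {target}")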

The groundbreaking theoretical and methodological advances in FOUNDATIONS go far beyond the state of the art by enabling large-scale empirical research while preserving the analytical depth needed for multimodality research. This opens up new domains of inquiry for studying multimodality across cultures, situations, artefacts and timescales.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101122047
Start date: 01-05-2024
End date: 30-04-2029
Total budget: 1 999 974,00 Euro
Public funding: 1 999 974,00 Euro
Cordis data

Status: SIGNED
Call topic: ERC-2023-COG
Update date: 12-03-2024
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.1 European Research Council (ERC)
      HORIZON.1.1.0 Cross-cutting call topics
        ERC-2023-COG ERC CONSOLIDATOR GRANTS
      HORIZON.1.1.1 Frontier science
        ERC-2023-COG ERC CONSOLIDATOR GRANTS