Summary
The “Smart City” paradigm aims to support new forms of monitoring and managing resources and to provide situational awareness for decision-making in the service of citizens, while meeting the needs of present and future generations in economic, social, and environmental terms. The city is a complex and dynamic system of interconnected spatial, social, economic, and physical processes that change over time and are continually modified by human actions. Big Data, fog, and edge computing technologies have significant potential in a variety of scenarios, tailored to each city's individual tactical strategy. A critical challenge, however, is to encapsulate this complexity and support accurate, cross-scale, and timely predictions based on ubiquitous spatio-temporal data of high volume, high velocity, and high variety.
To address this challenge, MARVEL delivers a disruptive Edge-to-Fog-to-Cloud (E2F2C) ubiquitous computing framework that enables multi-modal perception and intelligence for audio-visual scene recognition and event detection in a smart city environment. MARVEL aims to collect, analyse, and mine multi-modal audio-visual data streams of a Smart City and to help decision makers improve the quality of life and services to citizens, in an AI-responsible manner that respects ethical and privacy limits. This is achieved by: (i) fusing large-scale distributed multi-modal audio-visual data in real time; (ii) achieving fast time-to-insights; (iii) supporting automated decision making at all levels of the E2F2C stack; and (iv) delivering a personalized federated learning approach, in which joint multi-modal representations and models are co-designed and continuously improved through privacy-aware sharing of personalized fog and edge models among all interested parties.
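The summary does not specify MARVEL's actual training protocol, so the following Python snippet is only an illustrative sketch of the pattern named in item (iv): FedAvg-style averaging of shared model parameters across edge nodes, while personalized parameters never leave the node. All names (EdgeNode, federated_round) and the synthetic-gradient updates are hypothetical stand-ins, not part of the project.

    import numpy as np

    rng = np.random.default_rng(0)

    class EdgeNode:
        """Hypothetical edge node holding a 'shared' parameter vector
        (exchanged with the fog/cloud aggregator) and a 'personal'
        parameter vector (kept on-device, never shared)."""
        def __init__(self, dim_shared, dim_personal):
            self.shared = rng.normal(size=dim_shared)      # aggregated globally
            self.personal = rng.normal(size=dim_personal)  # stays local

        def local_update(self, lr=0.1):
            # Stand-in for one local training step on the node's own
            # audio-visual data: here we just apply synthetic gradients.
            self.shared -= lr * rng.normal(size=self.shared.shape)
            self.personal -= lr * rng.normal(size=self.personal.shape)

    def federated_round(nodes):
        """One aggregation round: average only the shared parameters
        (FedAvg-style); personalized parameters never cross the network."""
        for node in nodes:
            node.local_update()
        global_shared = np.mean([n.shared for n in nodes], axis=0)
        for node in nodes:
            node.shared = global_shared.copy()

    nodes = [EdgeNode(dim_shared=8, dim_personal=4) for _ in range(5)]
    for _ in range(3):
        federated_round(nodes)

    print("Shared params identical across nodes:",
          all(np.allclose(n.shared, nodes[0].shared) for n in nodes))

In this sketch only the shared vector is transmitted, so raw audio-visual data and personalized model parts stay at the edge; a real deployment along these lines would typically add secure aggregation or differential privacy on top.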
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/957337
Start date: 01-01-2021
End date: 31-12-2023
Total budget: 5 998 086,00 Euro; Public funding: 5 998 086,00 Euro
Cordis data
Status: SIGNED
Call topic: ICT-51-2020
Update date: 26-10-2022