MUSICAL-MOODS | A mood-indexed database of scores, lyrics, musical excerpts, vector-based 3D animations, and dance video recordings

Summary
The project aims to develop an online database of scores, lyrics, musical excerpts, vector-based 3D animations, and dance video recordings, indexed by mood. This taxonomy of relations between the musical, linguistic and motion domains is intended for use in interactive music systems and music making. To build the database, digital scores including lyrics will be gathered from collections of music in the public domain. Music mood classification based on audio and metadata will aim to capture sophisticated features without relying on explicit domain-specific knowledge about mental states. Datasets will be produced through a cross-modal approach. The model will be validated by combining results from an online game-with-a-purpose for Internet users with results from intermedia case studies involving selected dancers. In further case studies, music works will be realised, in part by invited artists, to evaluate the database in interactive music making; an online call for artists to use the database in music making or sound generation will extend this evaluation further. The final database will be made available online for further exploitation. The research will generate new knowledge for use in next-generation systems for interactive music and music emotion recognition, and will contribute to broader investigation in music making, computational creativity and information retrieval.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/659434
Start date: 01-12-2015
End date: 02-04-2019
Total budget: EUR 244 269,00; public funding: EUR 244 269,00
Cordis data

Status

CLOSED

Call topic

MSCA-IF-2014-GF

Update Date

28-04-2024
Structured mapping
Horizon 2020
  H2020-EU.1. EXCELLENT SCIENCE
    H2020-EU.1.3. EXCELLENT SCIENCE - Marie Skłodowska-Curie Actions (MSCA)
      H2020-EU.1.3.2. Nurturing excellence by means of cross-border and cross-sector mobility
  H2020-MSCA-IF-2014
    MSCA-IF-2014-GF Marie Skłodowska-Curie Individual Fellowships (IF-GF)