Summary
About 30% of children with autism are minimally verbal (MV), meaning they communicate mainly through nonverbal vocalizations (i.e., vocalizations that do not have typical verbal content). These vocalizations often have self-consistent phonetic content and vary in tone, pitch, and duration depending on the individual's emotional state or intended communication. While vocalizations carry important affective and communicative information and are comprehensible to close caregivers, they are often poorly understood by those who do not know the communicator well. An improved understanding of nonverbal vocalizations could pave the way for a better understanding of the cognitive, social, and emotional mechanisms associated with MV children with autism. Moreover, it could lead to new therapeutic interventions for these children based on advanced voice-based technology for Augmentative and Alternative Communication (AAC), i.e., for communicating without using words.
This MSCA touches on the research fields of audio signal processing (in particular, the perception of children's vocalizations) and human-computer interaction (in particular, voice-based interfaces for speech therapy). It aims to advance the understanding of MV autistic children's vocalizations and exploit the obtained knowledge to create advanced voice-based interfaces that enhance their therapeutic interventions. During the outgoing phase at MIT, the work will focus on identifying and implementing machine learning algorithms for classifying children's vocalizations. The core strategy is to leverage the unique knowledge of caregivers who have long-term acquaintance with MV children with autism and can recognize the meaning of their vocalizations. In the return phase at POLIMI, the project will design, develop, and empirically validate a voice-based AAC prototype for children's speech therapy through a participatory-design process involving end users, their caregivers, and autism experts.
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/101109600 |
Start date: | 01-09-2024 |
End date: | 31-08-2026 |
Total budget - Public funding: | 175 737,00 Euro |
Cordis data
Status: | SIGNED |
Call topic: | HORIZON-MSCA-2022-PF-01-01 |
Update Date: | 31-07-2023 |